Decreasing suicide risk with math
Suicide is a common reality, accounting for approximately 800,000 deaths per year worldwide.1 Properly assessing and minimizing suicide risk can be challenging. We are taught that lithium and clozapine can decrease suicidality, and many psychiatrists prescribe these medications with the firm, “evidence-based” belief that doing so reduces suicide risk. Paradoxically, what they in fact might be doing is the exact opposite; they may be giving high-risk patients the opportunity and the means to attempt suicide with a lethal amount of medication.
One patient diagnosed with a mood disorder who attempted suicide had a surprising point of view. After taking a large qu
Operations research is a branch of applied mathematics that seeks optimal decisions when multiple competing variables are in play, for example, maximizing profit while minimizing cost. During World War II, operations research was used to reduce the number of munitions needed to shoot down airplanes and to sink submarines more efficiently.
Focusing on the patient who attempted suicide by overdose, the question was: If she was discharged from the psychiatry unit with a 30-day supply of medication, how lethal would that prescription be if deliberately taken all at once? And what can be done to minimize this suicide risk? Psychiatrists know that some medications are more dangerous than others, but few have performed quantitative analysis to determine the potential lethality of these medications. The math analysis did not involve multivariable calculus or differential equations, only multiplication and division. The results were eye-opening.
Calculating relative lethality
The lethal dose 50 (LD50) is the dose of a medication expressed in mg/kg that results in the death of 50% of the animals (usually rats) used in a controlled experiment. Open-source data for the LD50 of medications is provided by the manufacturers.
I tabulated this data for a wide range of psychiatric medications, including antipsychotics, mood stabilizers, and selective serotonin reuptake inhibitors, in a spreadsheet with columns for maximum daily dose, 30-day supply of the medication, LD50 in mg/kg, LD50 for a 60-kg subject, and percentage of the 30-day supply compared with LD50. I then sorted this data by relative lethality (for my complete data, see Figure 1 and the Table).
The rat LD50 in mg/kg was extrapolated to the human equivalent dose (HED) in mg/kg using a conversion factor of 6.2 (for a person who weighs 60 kg, the HED = LD50/6.2), as suggested by the FDA.2 The dose that causes the first fatality is smaller than the HED, and toxicity occurs at smaller doses still. After simplifying all the terms, the formula for the HED-relative lethality is f(x) = 310x/LD50, where x is the daily dose of a medication prescribed for 30 days. This is the equation of a straight line through the origin with a slope inversely proportional to the LD50 of each medication. For any medication, a dose that rises above 100% on the y-axis exceeds the estimated lethal dose.
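The arithmetic behind f(x) = 310x/LD50 can be written out as a short function. This is only a sketch of the calculation described above; the example dose and LD50 values are illustrative, not taken from the article's table:

```python
def relative_lethality(daily_dose_mg: float, ld50_mg_per_kg: float) -> float:
    """Percent of a 30-day supply relative to the estimated human lethal dose.

    Assumes a 60-kg adult and the FDA rat-to-human conversion factor of 6.2,
    so the result simplifies to 310 * daily_dose_mg / ld50_mg_per_kg.
    """
    supply_mg = 30 * daily_dose_mg             # total medication in a 30-day supply
    hed_lethal_mg = ld50_mg_per_kg / 6.2 * 60  # human-equivalent lethal dose, in mg
    return supply_mg / hed_lethal_mg * 100

# Illustrative values only: a hypothetical drug dosed at 10 mg/d with an LD50 of 310 mg/kg
print(relative_lethality(10, 310))  # 10.0
```

A 30-day supply of this hypothetical agent would amount to 10% of the estimated lethal dose; values above 100% flag prescriptions that exceed it.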
Some commonly prescribed psychotropics are highly lethal
The relative lethality of many commonly prescribed psychiatric medications, including those frequently used to reduce suicidality, varies tremendously. For example, it is widely known that the first-line mood stabilizer lithium has a narrow therapeutic window and can rapidly become toxic. If a patient becomes dehydrated, even a normal lithium dose can be toxic or lethal. Lithium has a relative lethality of 1,063% (Figure 2). Clozapine has a relative lethality of 1,112%. Valproic acid has an even higher relative lethality of 1,666%. By contrast, aripiprazole and olanzapine have a relative lethality of 10% and 35%, respectively. For suicide prevention, prescribing a second-generation antipsychotic with a lower relative lethality may be preferable to prescribing a medication with a higher relative lethality.
According to U.S. poison control centers,3 from 2000 to 2014, there were 15,036 serious outcomes, including 61 deaths, associated with lithium use, and 6,109 serious outcomes, including 37 deaths, associated with valproic acid. In contrast, there were only 1,446 serious outcomes and no deaths associated with aripiprazole use.3 These outcomes may be underreported, but they are consistent with the mathematical model predicting that medications with a higher relative lethality will have higher morbidity and mortality outcomes, regardless of a patient’s intent to overdose.
Many psychiatrists have a preferred antidepressant, mood stabilizer, or antipsychotic, and may prescribe this medication to many of their patients based on familiarity with the agent or other factors. However, simple math can give the decision process of selecting a specific medication for a given patient a more quantitative basis.
Even a small reduction in suicide would save many lives
Ultimately, the math problem comes down to 4 minutes, which is approximately how long the brain can survive without oxygen. By prescribing medications with a lower relative lethality, or by prescribing a less-than-30-day supply of the most lethal medications, it may be possible to decrease overdose morbidity and mortality, and also buy enough time for emergency personnel to save a life. If simple math can put even a 1% dent in the rate of death from suicide, approximately 8,000 lives might be saved every year.
1. World Health Organization. Suicide. Fact sheet. http://www.who.int/mediacentre/factsheets/fs398/en. Updated August 2017. Accessed January 3, 2018.
2. U.S. Food and Drug Administration. Estimating the maximum safe starting dose in initial clinical trials for therapeutics in adult healthy volunteers. https://www.fda.gov/downloads/drugs/guidances/ucm078932.pdf. Published July 6, 2005. Accessed January 8, 2018.
3. Nelson JC, Spyker DA. Morbidity and mortality associated with medications used in the treatment of depression: an analysis of cases reported to U.S. Poison Control Centers, 2000-2014. Am J Psychiatry. 2017;174(5):438-450.
Integrate brief CBT interventions into medication management visits
Patients who are treated with psychotropics may experience better recovery from their symptoms and improved quality of life when they receive targeted treatment with cognitive-behavioral therapy (CBT). Clinicians can use certain CBT techniques to “jump-start” recovery in patients before prescribed medications produce their intended therapeutic effects. When practitioners are familiar with their use, techniques such as behavioral activation and tools that enhance adherence can be employed during a brief medication management (“med check”) visit.
Take these steps to implement brief CBT interventions into your patient’s routine visits:
- develop a clear, formulation-driven treatment target
- design an intervention that can be explained during a brief visit
- have handouts and worksheets available for patients to use
- provide written explanations and reminders for patients to use in out-of-session practice.
We present a case report that illustrates incorporating brief CBT interventions in a patient with major depressive disorder (MDD).
CASE REPORT
Using CBT to help a patient with MDD
Mr. L, age 52, presents with moderate MDD, and is started on fluoxetine, 20 mg/d. Mr. L has significant anhedonia and poor energy, and has been avoiding going to work and seeing friends. The psychiatrist explains to him how individuals with depression often want to refrain from activity and “shut down,” but that doing so will not improve his quality of life, and his mood will worsen.
The psychiatrist asks Mr. L to identify a pleasurable or important activity to complete before his next appointment. Mr. L decides that he would like to call a friend, because he has been isolated and his friends have been calling him. The psychiatrist encourages him to call one of his golf buddies. She instructs Mr. L to set reminders, such as cell phone alarms and notes on the refrigerator, to prompt him to “Call Phil Saturday at 10.”
To increase the likelihood that Mr. L will make this call, he and his psychiatrist discuss anticipated obstacles and potential facilitators of this behavior.
The psychiatrist also encourages Mr. L to complete a Behavioral Activation Worksheet (for examples, see http://www.cci.health.wa.gov.au/docs/ACF3B92.pdf or https://www.therapistaid.com/worksheets/behavioral-activation.pdf) to track his depression, pleasure, and sense of achievement before and after completing this activity.
As illustrated by this case, collaborating with the patient is critical to developing a realistic treatment plan that incorporates CBT techniques. With your help and encouragement, patients can use these tools to reach their goals and target the symptoms of their illnesses.
Shorter Versus Longer Courses of Antibiotics for Infection in Hospitalized Patients: A Systematic Review and Meta-Analysis
Acute infections are a leading cause of hospitalization and are associated with high cost, morbidity, and mortality.1 There is a growing body of literature to support shorter antibiotic courses to treat several different infection types.2-6 This is because longer treatment courses promote the emergence of multidrug resistant (MDR) organisms,7-9 microbiome perturbation,10 and Clostridium difficile infection (CDI).11 They are also associated with more drug side effects, longer hospitalizations, and increased costs.
Despite increasing support for shorter treatment courses, inpatient prescribing practice varies widely, and redundant antibiotic therapy is common.12-14 Furthermore, aside from ventilator-associated pneumonia (VAP),15,16 prior systematic reviews of antibiotic duration have typically included outpatient and pediatric patients,3-6,17-19 for whom the risk of treatment failure may be lower.
Given the potential for harm with inappropriate antibiotic treatment duration and the variation in current clinical practice, we sought to systematically review clinical trials comparing shorter versus longer antibiotic courses in adolescents and adults hospitalized for acute infection. We focused on common sites of infection in hospitalized patients, including pulmonary, bloodstream, soft tissue, intra-abdominal, and urinary.20,21 We hypothesized that shorter courses would be sufficient to cure infection and associated with lower costs and fewer complications. Because we hypothesized that shorter durations would be sufficient regardless of clinical course, we focused on studies in which the short course of antibiotics was specified at study onset, not determined by clinical improvement or biomarkers. We analyzed all infection types together because current sepsis treatment guidelines place little emphasis on infection site.22 In contrast to prior reviews, we focused exclusively on adult and adolescent inpatients because the risks of a too-short treatment duration may be lower in pediatric and outpatient populations.
METHODS
We followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses.23 The review was registered on the Prospero database.24
Information Sources and Search Strategy
We performed serial literature searches for English-language articles comparing shorter versus longer antibiotic courses in hospitalized patients. We searched MEDLINE via PubMed and Embase (January 1, 1990, to July 1, 2017). We used Boolean operators and controlled vocabulary (eg, Medical Subject Heading [MeSH] terms) for each key word. We identified published randomized controlled trials (RCTs) of conditions of interest (MeSH terms: “bacteremia,” “sepsis,” “pneumonia,” “pyelonephritis,” “intra-abdominal infection,” “cellulitis,” “soft tissue infection”) that compared differing lengths of antibiotic treatment (key words: “time factors,” “duration,” “long course,” “short course”) and evaluated outcomes (key words: “mortality,” “recurrence,” “secondary infections”). We hand-searched the references of included citations. The full search strategy is presented in supplementary Appendix 1.
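A Boolean combination of MeSH terms and keywords like the one described above can be assembled programmatically. This is only an illustration of the query structure; the field tags and exact string are assumptions, not the authors' registered search strategy:

```python
conditions = ["bacteremia", "sepsis", "pneumonia", "pyelonephritis",
              "intra-abdominal infection", "cellulitis", "soft tissue infection"]
durations = ["time factors", "duration", "long course", "short course"]
outcomes = ["mortality", "recurrence", "secondary infections"]

def or_block(terms: list[str], tag: str) -> str:
    # Join synonymous terms with OR, tagging each with a PubMed search field
    return "(" + " OR ".join(f'"{t}"{tag}' for t in terms) + ")"

# AND together the three concepts: condition, treatment duration, outcome
query = " AND ".join([
    or_block(conditions, "[MeSH Terms]"),
    or_block(durations, "[Title/Abstract]"),
    or_block(outcomes, "[Title/Abstract]"),
])
print(query)
```

Each parenthesized block broadens recall within one concept, while the AND operators require all three concepts to co-occur in a record.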
Study Eligibility and Selection Criteria
To meet criteria for inclusion, a study had to (1) be an RCT; (2) involve an adult or adolescent population age ≥12 years (or report outcomes separately for such patients); (3) involve an inpatient population (or report outcomes separately for inpatients); (4) stipulate a short course of antibiotics per protocol prior to randomization and not determined by clinical response, change in biomarkers, or physician discretion; (5) compare the short course to a longer course of antibiotics, which could be determined either per protocol or by some other measure; and (6) involve antibiotics given to treat infection, not as prophylaxis.
Two authors (SR and HCP) independently reviewed the title and/or abstracts of all articles identified by the search strategy. We calculated interrater agreement with a kappa coefficient. Both authors (SR and HCP) independently reviewed the full text of each article selected for possible inclusion by either author. Disagreement regarding eligibility was adjudicated by discussion.
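The kappa coefficient used above corrects raw percent agreement for agreement expected by chance. A minimal implementation, with invented include/exclude screening decisions for illustration, looks like:

```python
from collections import Counter

def cohens_kappa(rater_a: list, rater_b: list) -> float:
    """Chance-corrected agreement between two raters over the same items."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    # Expected chance agreement: product of the raters' marginal proportions per category
    expected = sum(ca[c] / n * cb[c] / n for c in set(rater_a) | set(rater_b))
    return (observed - expected) / (1 - expected)

# Hypothetical screening decisions for 8 abstracts (not the study's actual data)
a = ["inc", "inc", "exc", "exc", "inc", "exc", "exc", "exc"]
b = ["inc", "inc", "exc", "exc", "exc", "exc", "exc", "inc"]
print(round(cohens_kappa(a, b), 2))  # 0.47
```

Here the raters agree on 6 of 8 abstracts (75%), but because both raters exclude most abstracts, chance alone predicts 53% agreement, so kappa is a more modest 0.47.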
Data Abstraction
Two authors (SR and HCP) independently abstracted study methodology, definitions, and outcomes for each study using a standardized abstraction tool (see supplementary Appendix 2).
Study Quality
We assessed article quality using the Cochrane Collaboration’s tool,25 which evaluates 6 domains of possible bias, including sequence generation, concealment, blinding, and incomplete or selective outcome reporting. The tool is a 6-point scale, with 6 being the best score. It is recommended for assessing bias because it evaluates randomization and allocation concealment, which are not included in other tools.26 We did not exclude studies based on quality but considered studies with scores of 5-6 to have a low overall risk of bias.
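The 6-point score can be read as a count of domains judged to be at low risk of bias. A rough sketch under that assumption; the exact domain labels below are assumptions loosely following the Cochrane tool, while the 5-6 cutoff for low overall risk is from the text:

```python
# Domain labels are assumptions; the text names sequence generation,
# concealment, blinding, and incomplete or selective outcome reporting.
DOMAINS = [
    "sequence generation",
    "allocation concealment",
    "blinding of participants",
    "blinding of outcome assessment",
    "incomplete outcome data",
    "selective outcome reporting",
]

def quality_score(judgments):
    """One point per domain judged low risk (0-6); 5-6 = low overall risk."""
    score = sum(1 for d in DOMAINS if judgments.get(d) == "low")
    overall = "low" if score >= 5 else "moderate/high"
    return score, overall

# Example: a study at low risk everywhere except participant blinding.
study = {d: "low" for d in DOMAINS}
study["blinding of participants"] = "high"
print(quality_score(study))  # → (5, 'low')
```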
Study Outcomes and Statistical Analysis
Our primary outcomes were clinical cure, microbiologic cure, mortality, and infection recurrence. Secondary outcomes were secondary MDR infection, cost, and length of stay (LOS). We conducted all analyses with Stata MP version 14 (StataCorp, College Station, TX). For each outcome, we reported the difference (95% confidence interval [CI]) between treatment arms as the rate in the short-course arm minus the rate in the long-course arm, consistent with the typical presentation of noninferiority data. When a study did not report the risk difference, we calculated it and its 95% CI from the reported patient-level data. Positive values for risk difference favor the short-course arm for favorable outcomes (ie, clinical and microbiologic cure) and the long-course arm for adverse outcomes (ie, mortality and recurrence). We used meta-analysis to pool risk differences across all studies for the primary outcomes and for clinical cure in the community-acquired pneumonia (CAP) subgroup. We also present results as odds ratios and risk ratios in the online supplement. All meta-analyses used random effects models, as described by DerSimonian and Laird,27,28 with variance estimates of heterogeneity taken from the Mantel-Haenszel fixed effects model. We investigated heterogeneity between studies using the χ2 test and the I2 statistic. We considered P < .1 to indicate statistically significant heterogeneity and classified heterogeneity as low, moderate, or high on the basis of an I2 of 25%, 50%, or 75%, respectively. We used funnel plots to assess for publication bias.
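The core calculations here (a per-study risk difference with a Wald-type 95% CI, DerSimonian-Laird random-effects pooling, and the I2 statistic) can be sketched as follows. This is an illustrative Python reimplementation, not the authors' Stata code, and it uses the standard inverse-variance form of the DerSimonian-Laird estimator rather than the Mantel-Haenszel variance cited in the text:

```python
import math

def risk_difference(events_short, n_short, events_long, n_long):
    """Risk difference (short minus long arm) and its Wald variance."""
    p1, p2 = events_short / n_short, events_long / n_long
    d = p1 - p2
    var = p1 * (1 - p1) / n_short + p2 * (1 - p2) / n_long
    return d, var

def dersimonian_laird(effects, variances):
    """Random-effects pooled effect, 95% CI, and I2 heterogeneity (%)."""
    k = len(effects)
    w = [1.0 / v for v in variances]
    fixed = sum(wi * di for wi, di in zip(w, effects)) / sum(w)
    # Cochran's Q from the fixed-effect (inverse-variance) fit.
    q = sum(wi * (di - fixed) ** 2 for wi, di in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)          # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * di for wi, di in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2

# Hypothetical three-study example (event counts are illustrative):
studies = [(20, 100, 18, 100), (15, 80, 16, 80), (9, 60, 10, 60)]
effects, variances = zip(*(risk_difference(*s) for s in studies))
pooled, ci, i2 = dersimonian_laird(list(effects), list(variances))
```

When Q does not exceed its degrees of freedom, tau2 is truncated to zero and the random-effects result collapses to the fixed-effect one, which is why several of the pooled analyses below report I2 = 0%.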
RESULTS
Search Results
Characteristics of Included Studies
Common study outcomes included clinical cure or efficacy (composite of symptom cure and improvement; n = 13), infection recurrence (n = 10), mortality (n = 9), microbiologic cure (n = 8), and LOS (n = 7; supplementary Table 1).
Nine studies were pilot studies, 1 was a traditional superiority design study, and 9 were noninferiority studies with a prespecified limit of equivalence of either 10% (n = 7) or 15% (n = 2).
Clinical Cure and Efficacy
Nine studies of 1225 patients evaluated clinical cure and efficacy in CAP (supplementary Figure 1).29,35,38-40,44-47 The overall risk difference was d = 2.4% (95% CI, −0.7%-5.5%). There was no heterogeneity between studies (I2 = 0%, P = .45).
Microbiologic Cure
Eight studies of 366 patients evaluated microbiologic cure (supplementary Figure 2).32-34,36,38,40,41,47 The overall risk difference was d = 1.2% (95% CI, −4.1%-6.4%). There was no statistically significant heterogeneity between studies (I2 = 13.3%, P = .33).
Mortality
Eight studies of 1740 patients evaluated short-term mortality (in hospital to 45 days; Figure 2),30-32,37,39,41,43 while 3 studies of 654 patients evaluated longer-term mortality (60 to 180 days; supplementary Figure 3).30,31,33 The overall risk difference was d = 0.3% (95% CI, −1.2%-1.8%) for short-term mortality and d = −0.4% (95% CI, −6.3%-5.5%) for longer-term mortality. There was no heterogeneity between studies for either short-term (I2 = 0.0%, P = .66) or longer-term mortality (I2 = 0.0%, P = .69).
Infection Recurrence
Ten studies of 1554 patients evaluated infection recurrence (Figure 2).30-34,40-42,45,46 The overall risk difference was d = 2.1% (95% CI, −1.2%-5.3%). There was no statistically significant heterogeneity between studies (I2 = 21.0%, P = .25). Two of the 3 studies with noninferiority design (both evaluating intra-abdominal infections) met their prespecified margins.41,42 In Chastre et al.,31 the overall population (d = 3.0%; 95% CI, −5.8%-11.7%) and the subgroup with VAP due to nonfermenting gram-negative bacilli (NF-GNB; d = 15.2%; 95% CI, −0.9%-31.4%) failed to meet the 10% noninferiority margin.
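The noninferiority logic applied here is a simple bound check: for an adverse outcome reported as short minus long, the short course is noninferior when the upper limit of the 95% CI stays below the prespecified margin. A minimal illustration using the Chastre et al. values reported above:

```python
def noninferior(upper_ci_pct, margin_pct=10.0):
    """For an adverse outcome (short minus long risk difference, in %),
    the short course is noninferior when the CI upper bound is below
    the prespecified margin."""
    return upper_ci_pct < margin_pct

# Recurrence in Chastre et al., 10% margin:
print(noninferior(11.7))   # overall VAP population → False
print(noninferior(31.4))   # NF-GNB subgroup → False
```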
Secondary Outcomes
Three studies30,31,42 of 286 patients (with VAP or intra-abdominal infection) evaluated the emergence of MDR organisms. The overall risk difference was d = −9.0% (95% CI, −19.1%-1.1%; P = .081). There was no statistically significant heterogeneity between studies (I2 = 7.6%, P = .34).
Seven studies examined LOS—3 in the intensive care unit (ICU)30,31,43 and 4 on the wards32,36,40,41—none of which found significant differences between treatment arms. Across 3 studies of 672 patients, the weighted average for ICU LOS was 23.6 days in the short arm versus 22.2 days in the long arm. Across 4 studies of 235 patients, the weighted average for hospital LOS was 23.3 days in the short arm versus 29.7 days in the long arm. This difference was driven by a 1991 study41 of spontaneous bacterial peritonitis (SBP), in which the average LOS was 37 days and 50 days in the short- and long-course arms, respectively.
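The weighted averages reported here are enrollment-weighted means of per-study LOS. A sketch with hypothetical per-study values (the numbers below are illustrative, not the actual study data):

```python
def weighted_mean(values, weights):
    """Mean of per-study values weighted by study enrollment."""
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

# Hypothetical per-study mean hospital LOS (days) and enrollments:
los_days = [37.0, 18.0, 20.0, 25.0]
patients = [100, 45, 50, 40]
print(round(weighted_mean(los_days, patients), 1))  # → 27.7
```

As the example shows, one large outlying study (here, the 100-patient study with a 37-day LOS) can pull the weighted mean well above the other studies' values, which is exactly the effect the 1991 SBP study had on the pooled hospital LOS.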
Three studies32,41,43 of 186 total patients (with SBP or hospital-acquired infection of unknown origin) examined the cost of antibiotics. The weighted average cost savings for shorter courses in 2016 US dollars48 was $265.19.
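Converting historical antibiotic costs into 2016 US dollars is a simple price-index rescaling. A sketch with hypothetical index values (the actual BEA series values are not reproduced here):

```python
def to_2016_dollars(amount, deflator_then, deflator_2016):
    """Rescale a historical dollar amount by a price index (BEA-style)."""
    return amount * deflator_2016 / deflator_then

# Hypothetical index values: 60 in the study year vs 100 in 2016.
print(round(to_2016_dollars(150.0, 60.0, 100.0), 2))  # → 250.0
```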
Three studies30,33,43 of 618 patients evaluated cases of CDI, with 10-, 30-, and 180-day total follow-up. The overall risk difference was d = 0.7% (95% CI, −1.3%-2.8%), with no statistically significant heterogeneity between studies (I2 = 0%, P = .97).
Study Quality
Included studies scored 2-5 on the Cochrane Collaboration Risk of Bias Tool (supplementary Figure 4). Four studies had an overall low risk of bias,36,37,43,46 while 15 had a moderate to high risk of bias (supplementary Table 3).29-35,38-42,44,45,47 Common sources of bias included inadequate details to confirm adequate randomization and/or concealment (n = 13) and lack of adequate blinding (n = 18). Two studies were stopped early,37,42 and 3 others were possibly stopped early because it was unclear how the number of participants was determined.29,33,47 Covariate imbalance (failure of randomization) was present in 2 studies.37,47 There was no evidence of selective outcome reporting or publication bias based on the funnel plots (supplementary Figure 5).
DISCUSSION
In this study, we performed a systematic review and meta-analysis of RCTs of shorter versus longer antibiotic courses for adults and adolescents hospitalized for infection. The rate of clinical cure was indistinguishable between patients randomized to shorter versus longer durations of antibiotic therapy, and the meta-analysis was well powered to confirm noninferiority. The lower bound of the 95% CI indicates that any potential benefit of longer antibiotics is no more than 1%, far below the typical margin of noninferiority. Subgroup analysis of patients hospitalized with CAP also showed noninferiority of a prespecified shorter treatment duration.
The rate of microbiologic cure was likewise indistinguishable, and the meta-analysis was again well powered to confirm noninferiority. Any potential benefit of longer antibiotics for microbiologic cure is quite small (not more than 4%).
Our study also demonstrates noninferiority of prespecified shorter antibiotic courses for mortality. Short-term and longer-term mortality were both indistinguishable between patients randomized to shorter versus longer antibiotic courses. The meta-analyses for mortality were well powered, with any potential benefit of longer antibiotic durations being less than 2% for short-term and less than 6% for longer-term mortality.
We also examined complications related to antibiotic therapy. Infection recurrence was indistinguishable between arms, with any potential benefit of longer antibiotics being less than 6%. Select infections (eg, VAP due to NF-GNB, catheter-associated UTI) may be more susceptible to relapse after shorter treatment courses, but most patients hospitalized with infection do not have an increased risk of relapse with shorter treatment. Consistent with other studies,8 the emergence of MDR organisms was 9% less common in patients randomized to shorter antibiotic courses. This difference did not reach statistical significance, likely because of limited power. The emergence of MDR pathogens was assessed in just 3 of 19 studies, underscoring the need for additional studies of this outcome.
Although our meta-analyses indicate noninferiority of shorter antibiotic courses in hospitalized patients, the included studies are not without shortcomings. Only 4 of the included studies had low risk of bias, while 15 had at least moderate risk. The nearly universal source of bias was a lack of blinding. Only 1 study37 was completely blinded, and only 3 others had partial blinding. Adequate randomization and concealment were also lacking in several studies. However, there was no evidence of selective outcome reporting or publication bias.
Our findings are consistent with prior studies indicating noninferiority of shorter antibiotic courses in other settings and patient populations. Pediatric studies have demonstrated the success of shorter antibiotic courses in both outpatient49 and inpatient populations.50 Prior meta-analyses have shown noninferiority of shorter antibiotic courses in adults with VAP15,16; in neonatal, pediatric, and adult patients with bacteremia17; and in pediatric and adult patients with pneumonia and UTI.3-6,18,19 Our meta-analysis extends the evidence for the safety of shorter treatment courses to adults hospitalized with common infections, including pneumonia, UTI, and intra-abdominal infections. Because neonatal, pediatric, and nonhospitalized adult patients may have a lower risk for treatment failure and lower risk for mortality in the event of treatment failure, we focused exclusively on hospitalized adults and adolescents.
In contrast to prior meta-analyses, we included studies of multiple different sites of infection. This allowed us to assess a large number of hospitalized patients and achieve a narrow margin of noninferiority. It is possible that the optimal treatment duration varies by type of infection; indeed, the absolute duration of treatment differed across studies. We used a random-effects framework, which recognizes that the true benefit of shorter versus longer duration may vary across study populations. The heterogeneity between studies in our meta-analysis was quite low, suggesting that the results are not driven by a single infection type.
There are limited data on late effects of longer antibiotic courses. Antibiotic therapy is associated with an increased risk for CDI for 3 months afterwards.11 However, the duration of follow-up in the included studies rarely exceeded 1 month, which could underestimate incidence. The effect of antibiotics on gut microbiota may persist for months, predisposing patients to secondary infections. It is plausible that disruption in gut microbiota and risk for CDI may persist longer in patients treated with longer antibiotic courses. However, the existing studies do not include sufficient follow-up to confirm or refute this hypothesis.
Our review has several limitations. First, we included studies that compared an a priori-defined short course of antibiotics to a longer course and excluded studies that defined a short course of antibiotics based on clinical response. Because we did not specify an exact length for short or long courses, we cannot make explicit recommendations about the absolute duration of antibiotic therapy. Second, we included multiple infection types. It is possible that the duration of antibiotics required may differ by infection type. However, there were not sufficient data for subgroup analyses for each infection type. This highlights the need for additional data to guide the treatment of severe infections. Third, not all studies considered antibiotic duration in isolation. One study included a catheter change in the short arm only, which could have favored the short course.33 Three studies used different doses of antibiotics in addition to different durations.35,45,47 Fourth, the quality of included studies was variable, with lack of blinding and inadequate randomization present in most studies.
CONCLUSION
Based on the available literature, shorter courses of antibiotics can be safely utilized in hospitalized adults and adolescents to achieve clinical and microbiologic resolution of common infections, including pneumonia, UTI, and intra-abdominal infection, without adverse effect on infection recurrence. Moreover, short- and longer-term mortality are indistinguishable after treatment courses of differing duration. There are limited data on the longer-term risks associated with antibiotic duration, such as secondary infection or the emergence of MDR organisms.
Acknowledgments
The authors would like to thank their research librarian, Marisa Conte, for her help with the literature search for this review.
Disclosure
Drs. Royer and Prescott designed the study, performed data analysis, and drafted the manuscript. Drs. DeMerle and Dickson revised the manuscript critically for intellectual content. Dr. Royer holds stock in Pfizer. The authors have no other potential financial conflicts of interest to report.
1. Torio CM, Andrews RM. National Inpatient Hospital Costs: The Most Expensive Conditions by Payer, 2011: Statistical Brief #160. Healthcare Cost and Utilization Project (HCUP) Statistical Briefs. Rockville, MD: Agency for Healthcare Research and Quality (US); 2006. www.hcup-us.ahrq.gov/reports/statbriefs/sb160.pdf. Accessed May 1, 2016.
2. Kalil AC, Metersky ML, Klompas M, et al. Management of Adults With Hospital-acquired and Ventilator-associated Pneumonia: 2016 Clinical Practice Guidelines by the Infectious Diseases Society of America and the American Thoracic Society. Clin Infect Dis. 2016;63(5):575-582.
3. Dimopoulos G, Matthaiou DK, Karageorgopoulos DE, Grammatikos AP, Athanassa Z, Falagas ME. Short- versus long-course antibacterial therapy for community-acquired pneumonia: a meta-analysis. Drugs. 2008;68(13):1841-1854.
4. Li JZ, Winston LG, Moore DH, Bent S. Efficacy of short-course antibiotic regimens for community-acquired pneumonia: a meta-analysis. Am J Med. 2007;120(9):783-790.
5. Eliakim-Raz N, Yahav D, Paul M, Leibovici L. Duration of antibiotic treatment for acute pyelonephritis and septic urinary tract infection: 7 days or less versus longer treatment: systematic review and meta-analysis of randomized controlled trials. J Antimicrob Chemother. 2013;68(10):2183-2191.
6. Kyriakidou KG, Rafailidis P, Matthaiou DK, Athanasiou S, Falagas ME. Short- versus long-course antibiotic therapy for acute pyelonephritis in adolescents and adults: a meta-analysis of randomized controlled trials. Clin Ther. 2008;30(10):1859-1868.
7. Spellberg B, Bartlett JG, Gilbert DN. The future of antibiotics and resistance. N Engl J Med. 2013;368(4):299-302.
8. Spellberg B. The New Antibiotic Mantra—“Shorter Is Better”. JAMA Intern Med. 2016;176(9):1254-1255.
9. Rice LB. The Maxwell Finland Lecture: for the duration—rational antibiotic administration in an era of antimicrobial resistance and Clostridium difficile. Clin Infect Dis. 2008;46(4):491-496.
10. Dethlefsen L, Relman DA. Incomplete recovery and individualized responses of the human distal gut microbiota to repeated antibiotic perturbation. Proc Natl Acad Sci U S A. 2011;108 Suppl 1:4554-4561.
11. Hensgens MP, Goorhuis A, Dekkers OM, Kuijper EJ. Time interval of increased risk for Clostridium difficile infection after exposure to antibiotics. J Antimicrob Chemother. 2012;67(3):742-748.
12. Huttner B, Jones M, Huttner A, Rubin M, Samore MH. Antibiotic prescription practices for pneumonia, skin and soft tissue infections and urinary tract infections throughout the US Veterans Affairs system. J Antimicrob Chemother. 2013;68(10):2393-2399.
13. Daneman N, Shore K, Pinto R, Fowler R. Antibiotic treatment duration for bloodstream infections in critically ill patients: a national survey of Canadian infectious diseases and critical care specialists. Int J Antimicrob Agents. 2011;38(6):480-485.
14. Schultz L, Lowe TJ, Srinivasan A, Neilson D, Pugliese G. Economic impact of redundant antimicrobial therapy in US hospitals. Infect Control Hosp Epidemiol. 2014;35(10):1229-1235.
15. Dimopoulos G, Poulakou G, Pneumatikos IA, Armaganidis A, Kollef MH, Matthaiou DK. Short- vs long-duration antibiotic regimens for ventilator-associated pneumonia: a systematic review and meta-analysis. Chest. 2013;144(6):1759-1767.
16. Pugh R, Grant C, Cooke RP, Dempsey G. Short-course versus prolonged-course antibiotic therapy for hospital-acquired pneumonia in critically ill adults. Cochrane Database Syst Rev. 2015(8):CD007577.
17. Havey TC, Fowler RA, Daneman N. Duration of antibiotic therapy for bacteremia: a systematic review and meta-analysis. Crit Care. 2011;15(6):R267.
18. Haider BA, Saeed MA, Bhutta ZA. Short-course versus long-course antibiotic therapy for non-severe community-acquired pneumonia in children aged 2 months to 59 months. Cochrane Database Syst Rev. 2008(2):CD005976.
19. Strohmeier Y, Hodson EM, Willis NS, Webster AC, Craig JC. Antibiotics for acute pyelonephritis in children. Cochrane Database Syst Rev. 2014(7):CD003772.
20. Leligdowicz A, Dodek PM, Norena M, et al. Association between source of infection and hospital mortality in patients who have septic shock. Am J Respir Crit Care Med. 2014;189(10):1204-1213.
21. Cagatay AA, Tufan F, Hindilerden F, et al. The causes of acute fever requiring hospitalization in geriatric patients: comparison of infectious and noninfectious etiology. J Aging Res. 2010;2010:380892.
22. Rhodes A, Evans LE, Alhazzani W, et al. Surviving Sepsis Campaign: International Guidelines for Management of Sepsis and Septic Shock: 2016. Crit Care Med. 2017;45(3):486-552.
23. Moher D, Liberati A, Tetzlaff J, Altman DG, PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Ann Intern Med. 2009;151(4):264-269, W64.
24. Royer S, DeMerle K, Dickson RP, Prescott HC. Shorter versus longer courses of antibiotics for infection in hospitalized patients: a systematic review and meta-analysis. PROSPERO 2016:CRD42016029549. http://www.crd.york.ac.uk/PROSPERO/display_record.asp?ID=CRD42016029549. Accessed May 2, 2017.
25. Higgins JP, Altman DG, Gøtzsche PC, et al. The Cochrane Collaboration’s tool for assessing risk of bias in randomised trials. BMJ. 2011;343:d5928.
26. Turner L, Boutron I, Hróbjartsson A, Altman DG, Moher D. The evolution of assessing bias in Cochrane systematic reviews of interventions: celebrating methodological contributions of the Cochrane Collaboration. Syst Rev. 2013;2:79.
27. DerSimonian R, Laird N. Meta-analysis in clinical trials. Control Clin Trials. 1986;7(3):177-188.
28. Newton HJ, Cox NJ, Diebold FX, Garrett HM, Pagano M, Royston JP (Eds). Stata Technical Bulletin 44: sbe24. http://www.stata.com/products/stb/journals/stb44.pdf. Accessed February 22, 2017.
29. Bohte R, van’t Wout JW, Lobatto S, et al. Efficacy and safety of azithromycin versus benzylpenicillin or erythromycin in community-acquired pneumonia. Eur J Clin Microbiol Infect Dis. 1995;14(3):182-187.
30. Capellier G, Mockly H, Charpentier C, et al. Early-onset ventilator-associated pneumonia in adults randomized clinical trial: comparison of 8 versus 15 days of antibiotic treatment. PLoS One. 2012;7(8):e41290.
31. Chastre J, Wolff M, Fagon JY, et al. Comparison of 8 vs 15 days of antibiotic therapy for ventilator-associated pneumonia in adults: a randomized trial. JAMA. 2003;290(19):2588-2598.
32. Chaudhry ZI, Nisar S, Ahmed U, Ali M. Short course of antibiotic treatment in spontaneous bacterial peritonitis: a randomized controlled study. J Coll Physicians Surg Pak. 2000;10(8):284-288.
33. Darouiche RO, Al Mohajer M, Siddiq DM, Minard CG. Short versus long course of antibiotics for catheter-associated urinary tract infections in patients with spinal cord injury: a randomized controlled noninferiority trial. Arch Phys Med Rehabil. 2014;95(2):290-296.
34. de Gier R, Karperien A, Bouter K, et al. A sequential study of intravenous and oral fleroxacin for 7 or 14 days in the treatment of complicated urinary tract infections. Int J Antimicrob Agents. 1995;6(1):27-30.
35. Dunbar LM, Wunderink RG, Habib MP, et al. High-dose, short-course levofloxacin for community-acquired pneumonia: a new treatment paradigm. Clin Infect Dis. 2003;37(6):752-760.
36. Gasem MH, Keuter M, Dolmans WM, Van Der Ven-Jongekrijg J, Djokomoeljanto R, Van Der Meer JW. Persistence of Salmonellae in blood and bone marrow: randomized controlled trial comparing ciprofloxacin and chloramphenicol treatments against enteric fever. Antimicrob Agents Chemother. 2003;47(5):1727-1731.
37. Kollef MH, Chastre J, Clavel M, et al. A randomized trial of 7-day doripenem versus 10-day imipenem-cilastatin for ventilator-associated pneumonia. Crit Care. 2012;16(6):R218.
38. Kuzman I, Daković-Rode O, Oremus M, Banaszak AM. Clinical efficacy and safety of a short regimen of azithromycin sequential therapy vs standard cefuroxime sequential therapy in the treatment of community-acquired pneumonia: an international, randomized, open-label study. J Chemother. 2005;17(6):636-642.
39. Léophonte P, Choutet P, Gaillat J, et al. Efficacy of a ten day course of ceftriaxone compared to a shortened five day course in the treatment of community-acquired pneumonia in hospitalized adults with risk factors. Med Mal Infect. 2002;32(7):369-381.
40. Rizzato G, Montemurro L, Fraioli P, et al. Efficacy of a three day course of azithromycin in moderately severe community-acquired pneumonia. Eur Respir J. 1995;8(3):398-402.
41. Runyon BA, McHutchison JG, Antillon MR, Akriviadis EA, Montano AA. Short-course versus long-course antibiotic treatment of spontaneous bacterial peritonitis. A randomized controlled study of 100 patients. Gastroenterology. 1991;100(6):1737-1742.
42. Sawyer RG, Claridge JA, Nathens AB, et al. Trial of short-course antimicrobial therapy for intraabdominal infection. N Engl J Med. 2015;372(21):1996-2005.
43. Scawn N, Saul D, Pathak D, et al. A pilot randomised controlled trial in intensive care patients comparing 7 days’ treatment with empirical antibiotics with 2 days’ treatment for hospital-acquired infection of unknown origin. Health Technol Assess. 2012;16(36):i-xiii, 1-70.
44. Schönwald S, Barsić B, Klinar I, Gunjaca M. Three-day azithromycin compared with ten-day roxithromycin treatment of atypical pneumonia. Scand J Infect Dis. 1994;26(6):706-710.
45. Schönwald S, Kuzman I, Oresković K, et al. Azithromycin: single 1.5 g dose in the treatment of patients with atypical pneumonia syndrome--a randomized study. Infection. 1999;27(3):198-202.
46. Siegel RE, Alicea M, Lee A, Blaiklock R. Comparison of 7 versus 10 days of antibiotic therapy for hospitalized patients with uncomplicated community-acquired pneumonia: a prospective, randomized, double-blind study. Am J Ther. 1999;6(4):217-222.
47. Zhao X, Wu JF, Xiu QY, et al. A randomized controlled clinical trial of levofloxacin 750 mg versus 500 mg intravenous infusion in the treatment of community-acquired pneumonia. Diagn Microbiol Infect Dis. 2014;80(2):141-147.
48. Bureau of Economic Analysis. U.S. Department of Commerce. https://bea.gov/iTable/iTable.cfm?ReqID=9&step=1#reqid=9&step=1&isuri=1&903=4. Accessed March 2, 2017.
49. Pakistan Multicentre Amoxycillin Short Course Therapy (MASCOT) pneumonia study group. Clinical efficacy of 3 days versus 5 days of oral amoxicillin for treatment of childhood pneumonia: a multicentre double-blind trial. Lancet. 2002;360(9336):835-841.
50. Peltola H, Vuori-Holopainen E, Kallio MJ, SE-TU Study Group. Successful shortening from seven to four days of parenteral beta-lactam treatment for common childhood infections: a prospective and randomized study. Int J Infect Dis. 2001;5(1):3-8.
Acute infections are a leading cause of hospitalization and are associated with high cost, morbidity, and mortality.1 A growing body of literature supports shorter antibiotic courses for several different infection types,2-6 because longer treatment courses promote the emergence of multidrug-resistant (MDR) organisms,7-9 microbiome perturbation,10 and Clostridium difficile infection (CDI).11 Longer courses are also associated with more drug side effects, longer hospitalizations, and increased costs.
Despite increasing support for shorter treatment courses, inpatient prescribing practice varies widely, and redundant antibiotic therapy is common.12-14 Furthermore, aside from ventilator-associated pneumonia (VAP),15,16 prior systematic reviews of antibiotic duration have typically included outpatient and pediatric patients,3-6,17-19 for whom the risk of treatment failure may be lower.
Given the potential for harm with inappropriate antibiotic treatment duration and the variation in current clinical practice, we sought to systematically review clinical trials comparing shorter versus longer antibiotic courses in adolescents and adults hospitalized for acute infection. We focused on common sites of infection in hospitalized patients, including pulmonary, bloodstream, soft tissue, intra-abdominal, and urinary.20,21 We hypothesized that shorter courses would be sufficient to cure infection and associated with lower costs and fewer complications. Because we hypothesized that shorter durations would be sufficient regardless of clinical course, we focused on studies in which the short course of antibiotics was specified at study onset, not determined by clinical improvement or biomarkers. We analyzed all infection types together because current sepsis treatment guidelines place little emphasis on infection site.22 In contrast to prior reviews, we focused exclusively on adult and adolescent inpatients because the risks of a too-short treatment duration may be lower in pediatric and outpatient populations.
METHODS
We followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses.23 The review was registered on the Prospero database.24
Three studies32,41,43 of 186 total patients (with SBP or hospital-acquired infection of unknown origin) examined the cost of antibiotics. The weighted average cost savings for shorter courses in 2016 US dollars48 was $265.19.
Three studies30,33,43 of 618 patients evaluated cases of CDI, during 10-, 30-, and 180-day total follow-up. The overall risk difference was d = 0.7% (95% CI, −1.3%-2.8%), with no statistically significant heterogeneity between studies (I2 = 0%, P = .97).
Study Quality
Included studies scored 2-5 on the Cochrane Collaboration Risk of Bias Tool (supplementary Figure 4). Four studies had an overall low risk of bias,36,37,43,46 while 15 had a moderate to high risk of bias (supplementary Table 3).29-35,38-42,44,45,47 Common sources of bias included inadequate details to confirm adequate randomization and/or concealment (n = 13) and lack of adequate blinding (n = 18). Two studies were stopped early,37,42 and 3 others were possibly stopped early because it was unclear how the number of participants was determined.29,33,47 Covariate imbalance (failure of randomization) was present in 2 studies.37,47 There was no evidence of selective outcome reporting or publication bias based on the funnel plots (supplementary Figure 5).
DISCUSSION
In this study, we performed a systematic review and meta-analysis of RCTs of shorter versus longer antibiotic courses for adults and adolescents hospitalized for infection. The rate of clinical cure was indistinguishable between patients randomized to shorter versus longer durations of antibiotic therapy, and the meta-analysis was well powered to confirm noninferiority. The lower 95% CI indicates that any potential benefit of longer antibiotics is not more than 1%, far below the typical margin of noninferiority. Subgroup analysis of patients hospitalized with CAP also showed noninferiority of a prespecified shorter treatment duration.
The rate of microbiologic cure was likewise indistinguishable, and the meta-analysis was again well powered to confirm noninferiority. Any potential benefit of longer antibiotics for microbial cure is quite small (not more than 4%).
Our study also demonstrates noninferiority of prespecified shorter antibiotic courses for mortality. Shorter- and longer-term mortality were both indistinguishable in patients randomized to shorter antibiotic courses. The meta-analyses for mortality were well powered, with any potential benefit of longer antibiotic durations being less than 2% for short-term and less than 6% for long-term mortality.
We also examined for complications related to antibiotic therapy. Infection recurrence was indistinguishable, with any potential benefit of longer antibiotics being less than 6%. Select infections (eg, VAP due to NF-GNB, catheter-associated UTI) may be more susceptible to relapse after shorter treatment courses, while most patients hospitalized with infection do not have an increased risk for relapse with shorter treatment courses. Consistent with other studies,8 the emergence of MDR organisms was 9% less common in patients randomized to shorter antibiotic courses. This difference failed to meet statistical significance, likely due to poor power. The emergence of MDR pathogens was included in just 3 of 19 studies, underscoring the need for additional studies on this outcome.
Although our meta-analyses indicate noninferiority of shorter antibiotic courses in hospitalized patients, the included studies are not without shortcomings. Only 4 of the included studies had low risk of bias, while 15 had at least moderate risk. The nearly universal source of bias was a lack of blinding. Only 1 study37 was completely blinded, and only 3 others had partial blinding. Adequate randomization and concealment were also lacking in several studies. However, there was no evidence of selective outcome reporting or publication bias.
Our findings are consistent with prior studies indicating noninferiority of shorter antibiotic courses in other settings and patient populations. Pediatric studies have demonstrated the success of shorter antibiotic courses in both outpatient49 and inpatient populations.50 Prior meta-analyses have shown noninferiority of shorter antibiotic courses in adults with VAP15,16; in neonatal, pediatric, and adult patients with bacteremia17; and in pediatric and adult patients with pneumonia and UTI.3-6,18,19 Our meta-analysis extends the evidence for the safety of shorter treatment courses to adults hospitalized with common infections, including pneumonia, UTI, and intra-abdominal infections. Because neonatal, pediatric, and nonhospitalized adult patients may have a lower risk for treatment failure and lower risk for mortality in the event of treatment failure, we focused exclusively on hospitalized adults and adolescents.
In contrast to prior meta-analyses, we included studies of multiple different sites of infection. This allowed us to assess a large number of hospitalized patients and achieve a narrow margin of noninferiority. It is possible that the benefit of optimal treatment duration varies by type of infection. (And indeed, absolute duration of treatment differed across studies.) We used a random-effects framework, which recognizes that the true benefit of shorter versus longer duration may vary across study populations. The heterogeneity between studies in our meta-analysis was quite low, suggesting that the results are not explained by a single infection type.
There are limited data on late effects of longer antibiotic courses. Antibiotic therapy is associated with an increased risk for CDI for 3 months afterwards.11 However, the duration of follow-up in the included studies rarely exceeded 1 month, which could underestimate incidence. The effect of antibiotics on gut microbiota may persist for months, predisposing patients to secondary infections. It is plausible that disruption in gut microbiota and risk for CDI may persist longer in patients treated with longer antibiotic courses. However, the existing studies do not include sufficient follow-up to confirm or refute this hypothesis.
Our review has several limitations. First, we included studies that compared an a priori-defined short course of antibiotics to a longer course and excluded studies that defined a short course of antibiotics based on clinical response. Because we did not specify an exact length for short or long courses, we cannot make explicit recommendations about the absolute duration of antibiotic therapy. Second, we included multiple infection types. It is possible that the duration of antibiotics required may differ by infection type. However, there were not sufficient data for subgroup analyses for each infection type. This highlights the need for additional data to guide the treatment of severe infections. Third, not all studies considered antibiotic duration in isolation. One study included a catheter change in the short arm only, which could have favored the short course.33 Three studies used different doses of antibiotics in addition to different durations.35,45,47 Fourth, the quality of included studies was variable, with lack of blinding and inadequate randomization present in most studies.
CONCLUSION
Based on the available literature, shorter courses of antibiotics can be safely utilized in hospitalized adults and adolescents to achieve clinical and microbiologic resolution of common infections, including pneumonia, UTI, and intra-abdominal infection, without adverse effect on infection recurrence. Moreover, short- and longer-term mortality are indistinguishable after treatment courses of differing duration. There are limited data on the longer-term risks associated with antibiotic duration, such as secondary infection or the emergence of MDR organisms.
Acknowledgments
The authors would like to thank their research librarian, Marisa Conte, for her help with the literature search for this review.
Disclosure
Drs. Royer and Prescott designed the study, performed data analysis, and drafted the manuscript. Drs. DeMerle and Dickson revised the manuscript critically for intellectual content. Dr. Royer holds stock in Pfizer. The authors have no other potential financial conflicts of interest to report.
Acute infections are a leading cause of hospitalization and are associated with high cost, morbidity, and mortality.1 There is a growing body of literature supporting shorter antibiotic courses for several different infection types.2-6 Longer treatment courses promote the emergence of multidrug-resistant (MDR) organisms,7-9 perturb the microbiome,10 and increase the risk of Clostridium difficile infection (CDI).11 They are also associated with more drug side effects, longer hospitalizations, and increased costs.
Despite increasing support for shorter treatment courses, inpatient prescribing practice varies widely, and redundant antibiotic therapy is common.12-14 Furthermore, aside from ventilator-associated pneumonia (VAP),15,16 prior systematic reviews of antibiotic duration have typically included outpatient and pediatric patients,3-6,17-19 for whom the risk of treatment failure may be lower.
Given the potential for harm with inappropriate antibiotic treatment duration and the variation in current clinical practice, we sought to systematically review clinical trials comparing shorter versus longer antibiotic courses in adolescents and adults hospitalized for acute infection. We focused on common sites of infection in hospitalized patients, including pulmonary, bloodstream, soft tissue, intra-abdominal, and urinary.20,21 We hypothesized that shorter courses would be sufficient to cure infection and associated with lower costs and fewer complications. Because we hypothesized that shorter durations would be sufficient regardless of clinical course, we focused on studies in which the short course of antibiotics was specified at study onset, not determined by clinical improvement or biomarkers. We analyzed all infection types together because current sepsis treatment guidelines place little emphasis on infection site.22 In contrast to prior reviews, we focused exclusively on adult and adolescent inpatients because the risks of a too-short treatment duration may be lower in pediatric and outpatient populations.
METHODS
We followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement.23 The review was registered in the PROSPERO database.24
Information Sources and Search Strategy
We performed serial literature searches for English-language articles comparing shorter versus longer antibiotic courses in hospitalized patients. We searched MEDLINE via PubMed and Embase (January 1, 1990, to July 1, 2017). We used Boolean operators, Boolean logic, and controlled vocabulary (eg, Medical Subject Headings [MeSH] terms) for each keyword. We identified published randomized controlled trials (RCTs) of conditions of interest (MeSH terms: “bacteremia,” “sepsis,” “pneumonia,” “pyelonephritis,” “intra-abdominal infection,” “cellulitis,” “soft tissue infection”) that compared differing lengths of antibiotic treatment (keywords: “time factors,” “duration,” “long course,” “short course”) and evaluated outcomes of interest (keywords: “mortality,” “recurrence,” “secondary infections”). We hand-searched the references of included citations. The full search strategy is presented in supplementary Appendix 1.
Study Eligibility and Selection Criteria
To meet criteria for inclusion, a study had to (1) be an RCT; (2) involve an adult or adolescent population age ≥12 years (or report outcomes separately for such patients); (3) involve an inpatient population (or report outcomes separately for inpatients); (4) stipulate a short course of antibiotics per protocol prior to randomization and not determined by clinical response, change in biomarkers, or physician discretion; (5) compare the short course to a longer course of antibiotics, which could be determined either per protocol or by some other measure; and (6) involve antibiotics given to treat infection, not as prophylaxis.
Two authors (SR and HCP) independently reviewed the title and/or abstracts of all articles identified by the search strategy. We calculated interrater agreement with a kappa coefficient. Both authors (SR and HCP) independently reviewed the full text of each article selected for possible inclusion by either author. Disagreement regarding eligibility was adjudicated by discussion.
Data Abstraction
Two authors (SR and HCP) independently abstracted study methodology, definitions, and outcomes for each study using a standardized abstraction tool (see supplementary Appendix 2).
Study Quality
We assessed article quality using the Cochrane Collaboration’s tool,25 which evaluates 6 domains of possible bias, including sequence generation, concealment, blinding, and incomplete or selective outcome reporting. The tool is a 6-point scale, with 6 being the best score. It is recommended for assessing bias because it evaluates randomization and allocation concealment, which are not included in other tools.26 We did not exclude studies based on quality but considered studies with scores of 5-6 to have a low overall risk of bias.
Study Outcomes and Statistical Analysis
Our primary outcomes were clinical cure, microbiologic cure, mortality, and infection recurrence. Secondary outcomes were secondary MDR infection, cost, and length of stay (LOS). We conducted all analyses with Stata MP version 14 (StataCorp, College Station, TX). For each outcome, we reported the difference (95% confidence interval [CI]) between treatment arms as the rate in the short course arm minus the rate in the long course arm, consistent with the typical presentation of noninferiority data. When a study did not report the risk difference, we calculated it and its 95% CI from the reported patient-level data. Positive values for risk difference favor the short course arm for favorable outcomes (ie, clinical and microbiologic cure) and the long course arm for adverse outcomes (ie, mortality and recurrence). Meta-analysis was used to pool risk differences across all studies for primary outcomes and for clinical cure in the community-acquired pneumonia (CAP) subgroup. We also present results as odds ratios and risk ratios in the online supplement. All meta-analyses used random effects models, as described by DerSimonian and Laird,27,28 with variance estimates of heterogeneity taken from the Mantel-Haenszel fixed effects model. We investigated heterogeneity between studies using the χ² test and the I² statistic. We considered P < .1 to indicate statistically significant heterogeneity and classified heterogeneity as low, moderate, or high on the basis of I² values of 25%, 50%, or 75%, respectively. We used funnel plots to assess for publication bias.
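The pooling described above can be sketched in a few lines of code. This is an illustrative implementation, not the authors' Stata analysis: it uses inverse-variance weights throughout for the DerSimonian-Laird estimate (the paper takes the heterogeneity variance from a Mantel-Haenszel fixed-effects model), and the study counts below are hypothetical.

```python
from math import sqrt

def risk_difference(e1, n1, e2, n2):
    """Risk difference (short minus long arm) and its Wald variance
    from 2x2 counts: e = events, n = patients per arm."""
    p1, p2 = e1 / n1, e2 / n2
    rd = p1 - p2
    var = p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2
    return rd, var

def dersimonian_laird(effects):
    """Pool per-study (rd, var) pairs with a DerSimonian-Laird
    random-effects model; returns pooled RD, its 95% CI, and I^2 (%)."""
    k = len(effects)
    w = [1.0 / v for _, v in effects]  # inverse-variance (fixed) weights
    rd_fixed = sum(wi * rd for wi, (rd, _) in zip(w, effects)) / sum(w)
    # Cochran's Q statistic measures between-study heterogeneity
    q = sum(wi * (rd - rd_fixed) ** 2 for wi, (rd, _) in zip(w, effects))
    # Between-study variance tau^2, truncated at zero
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)
    # Random-effects weights add tau^2 to each study's variance
    w_star = [1.0 / (v + tau2) for _, v in effects]
    pooled = sum(ws * rd for ws, (rd, _) in zip(w_star, effects)) / sum(w_star)
    se = sqrt(1.0 / sum(w_star))
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2

# Hypothetical counts for three studies (events, patients; short vs long arm)
studies = [risk_difference(18, 100, 15, 100),
           risk_difference(9, 60, 11, 62),
           risk_difference(25, 150, 22, 148)]
pooled, ci, i2 = dersimonian_laird(studies)
```

When tau² is estimated as zero, the random-effects result collapses to the fixed-effects one, which is consistent with the low heterogeneity reported for most outcomes below.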
RESULTS
Search Results
Characteristics of Included Studies
Common study outcomes included clinical cure or efficacy (composite of symptom cure and improvement; n = 13), infection recurrence (n = 10), mortality (n = 9), microbiologic cure (n = 8), and LOS (n = 7; supplementary Table 1).
Nine studies were pilot studies, 1 was a traditional superiority design study, and 9 were noninferiority studies with a prespecified limit of equivalence of either 10% (n = 7) or 15% (n = 2).
Clinical Cure and Efficacy
Nine studies of 1225 patients evaluated clinical cure and efficacy in CAP (supplementary Figure 1).29,35,38-40,44-47 The overall risk difference was d = 2.4% (95% CI, −0.7% to 5.5%). There was no heterogeneity between studies (I² = 0%, P = .45).
Microbiologic Cure
Eight studies of 366 patients evaluated microbiologic cure (supplementary Figure 2).32-34,36,38,40,41,47 The overall risk difference was d = 1.2% (95% CI, −4.1% to 6.4%). There was no statistically significant heterogeneity between studies (I² = 13.3%, P = .33).
Mortality
Eight studies of 1740 patients evaluated short-term mortality (in hospital to 45 days; Figure 2),30-32,37,39,41,43 while 3 studies of 654 patients evaluated longer-term mortality (60 to 180 days; supplementary Figure 3).30,31,33 The overall risk difference was d = 0.3% (95% CI, −1.2% to 1.8%) for short-term mortality and d = −0.4% (95% CI, −6.3% to 5.5%) for longer-term mortality. There was no heterogeneity between studies for either short-term (I² = 0.0%, P = .66) or longer-term mortality (I² = 0.0%, P = .69).
Infection Recurrence
Ten studies of 1554 patients evaluated infection recurrence (Figure 2).30-34,40-42,45,46 The overall risk difference was d = 2.1% (95% CI, −1.2% to 5.3%). There was no statistically significant heterogeneity between studies (I² = 21.0%, P = .25). Two of the 3 studies with a noninferiority design (both evaluating intra-abdominal infections) met their prespecified margins.41,42 In Chastre et al.,31 both the overall population (d = 3.0%; 95% CI, −5.8% to 11.7%) and the subgroup with VAP due to nonfermenting gram-negative bacilli (NF-GNB; d = 15.2%; 95% CI, −0.9% to 31.4%) failed to meet the 10% noninferiority margin.
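The noninferiority reasoning applied to Chastre et al. reduces to a single comparison: the short course fails the margin whenever the upper bound of the CI for excess risk reaches or exceeds the prespecified limit. A minimal sketch using the figures quoted above:

```python
def noninferior(ci_upper_pct, margin_pct=10.0):
    """Short course is noninferior only if the upper CI bound for
    excess risk (in percentage points) stays below the margin."""
    return ci_upper_pct < margin_pct

# Figures from Chastre et al. as quoted in the text:
assert not noninferior(11.7)  # overall VAP population: fails the 10% margin
assert not noninferior(31.4)  # NF-GNB subgroup: fails the 10% margin
```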
Secondary Outcomes
Three studies30,31,42 of 286 patients (with VAP or intra-abdominal infection) evaluated the emergence of MDR organisms. The overall risk difference was d = −9.0% (95% CI, −19.1% to 1.1%; P = .081). There was no statistically significant heterogeneity between studies (I² = 7.6%, P = .34).
Seven studies examined LOS—3 in the intensive care unit (ICU)30,31,43 and 4 on the wards32,36,40,41—none of which found significant differences between treatment arms. Across 3 studies of 672 patients, the weighted average for ICU LOS was 23.6 days in the short arm versus 22.2 days in the long arm. Across 4 studies of 235 patients, the weighted average for hospital LOS was 23.3 days in the short arm versus 29.7 days in the long arm. This difference was driven by a 1991 study41 of spontaneous bacterial peritonitis (SBP), in which the average LOS was 37 days and 50 days in the short- and long-course arms, respectively.
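The weighted averages reported here are study-size-weighted means of per-study LOS. A minimal sketch; the per-study means and enrollments below are hypothetical, except for the 37- and 50-day SBP figures quoted above:

```python
def weighted_mean(values, weights):
    """Average of per-study means, weighted by each study's enrollment."""
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

# Hypothetical per-study mean hospital LOS in days (only the 37/50-day SBP
# study is taken from the text; the other entries are invented to illustrate).
los_short = [14.0, 37.0, 20.0, 18.0]  # short-course arms
los_long = [15.0, 50.0, 21.0, 19.0]   # long-course arms
n_patients = [60, 49, 70, 56]         # per-study enrollment (sums to 235)

avg_short = weighted_mean(los_short, n_patients)
avg_long = weighted_mean(los_long, n_patients)
# The SBP outlier pulls the long-course average well above the short-course one.
```

With the actual per-study data, the same two calls would reproduce the 23.3- and 29.7-day averages reported above.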
Three studies32,41,43 of 186 total patients (with SBP or hospital-acquired infection of unknown origin) examined the cost of antibiotics. The weighted average cost savings for shorter courses in 2016 US dollars48 was $265.19.
Three studies30,33,43 of 618 patients evaluated cases of CDI during 10-, 30-, and 180-day total follow-up. The overall risk difference was d = 0.7% (95% CI, −1.3% to 2.8%), with no statistically significant heterogeneity between studies (I² = 0%, P = .97).
Study Quality
Included studies scored 2-5 on the Cochrane Collaboration risk of bias tool (supplementary Figure 4). Four studies had an overall low risk of bias,36,37,43,46 while 15 had a moderate to high risk of bias (supplementary Table 3).29-35,38-42,44,45,47 Common sources of bias included insufficient detail to confirm adequate randomization and/or concealment (n = 13) and lack of adequate blinding (n = 18). Two studies were stopped early,37,42 and 3 others may have been stopped early because it was unclear how the number of participants was determined.29,33,47 Covariate imbalance (ie, failure of randomization) was present in 2 studies.37,47 There was no evidence of selective outcome reporting or publication bias based on the funnel plots (supplementary Figure 5).
DISCUSSION
In this study, we performed a systematic review and meta-analysis of RCTs of shorter versus longer antibiotic courses for adults and adolescents hospitalized for infection. The rate of clinical cure was indistinguishable between patients randomized to shorter versus longer durations of antibiotic therapy, and the meta-analysis was well powered to confirm noninferiority. The lower bound of the 95% CI indicates that any potential benefit of longer antibiotic courses is no more than 1%, far below the typical margin of noninferiority. Subgroup analysis of patients hospitalized with CAP also showed noninferiority of a prespecified shorter treatment duration.
The rate of microbiologic cure was likewise indistinguishable, and the meta-analysis was again well powered to confirm noninferiority. Any potential benefit of longer antibiotic courses for microbiologic cure is small (no more than 4%).
Our study also demonstrates noninferiority of prespecified shorter antibiotic courses for mortality. Shorter- and longer-term mortality were both indistinguishable in patients randomized to shorter antibiotic courses. The meta-analyses for mortality were well powered, with any potential benefit of longer antibiotic durations being less than 2% for short-term and less than 6% for long-term mortality.
We also examined complications related to antibiotic therapy. Infection recurrence was indistinguishable between arms, with any potential benefit of longer antibiotic courses being less than 6%. Select infections (eg, VAP due to NF-GNB, catheter-associated UTI) may be more susceptible to relapse after shorter treatment courses, but most patients hospitalized with infection do not have an increased risk for relapse with shorter courses. Consistent with other studies,8 the emergence of MDR organisms was 9% less common in patients randomized to shorter antibiotic courses. This difference did not reach statistical significance, likely owing to limited power: the emergence of MDR pathogens was reported in just 3 of 19 studies, underscoring the need for additional studies of this outcome.
Although our meta-analyses indicate noninferiority of shorter antibiotic courses in hospitalized patients, the included studies are not without shortcomings. Only 4 of the included studies had low risk of bias, while 15 had at least moderate risk. The nearly universal source of bias was a lack of blinding. Only 1 study37 was completely blinded, and only 3 others had partial blinding. Adequate randomization and concealment were also lacking in several studies. However, there was no evidence of selective outcome reporting or publication bias.
Our findings are consistent with prior studies indicating noninferiority of shorter antibiotic courses in other settings and patient populations. Pediatric studies have demonstrated the success of shorter antibiotic courses in both outpatient49 and inpatient populations.50 Prior meta-analyses have shown noninferiority of shorter antibiotic courses in adults with VAP15,16; in neonatal, pediatric, and adult patients with bacteremia17; and in pediatric and adult patients with pneumonia and UTI.3-6,18,19 Our meta-analysis extends the evidence for the safety of shorter treatment courses to adults hospitalized with common infections, including pneumonia, UTI, and intra-abdominal infections. Because neonatal, pediatric, and nonhospitalized adult patients may have a lower risk for treatment failure and lower risk for mortality in the event of treatment failure, we focused exclusively on hospitalized adults and adolescents.
In contrast to prior meta-analyses, we included studies of multiple sites of infection. This allowed us to assess a large number of hospitalized patients and achieve a narrow margin of noninferiority. It is possible that the optimal treatment duration varies by type of infection; indeed, the absolute duration of treatment differed across studies. We therefore used a random-effects framework, which recognizes that the true benefit of shorter versus longer duration may vary across study populations. The heterogeneity between studies in our meta-analysis was quite low, suggesting that the results are not driven by a single infection type.
There are limited data on late effects of longer antibiotic courses. Antibiotic therapy is associated with an increased risk for CDI for 3 months afterwards.11 However, the duration of follow-up in the included studies rarely exceeded 1 month, which could underestimate incidence. The effect of antibiotics on gut microbiota may persist for months, predisposing patients to secondary infections. It is plausible that disruption in gut microbiota and risk for CDI may persist longer in patients treated with longer antibiotic courses. However, the existing studies do not include sufficient follow-up to confirm or refute this hypothesis.
Our review has several limitations. First, we included studies that compared an a priori-defined short course of antibiotics to a longer course and excluded studies that defined a short course of antibiotics based on clinical response. Because we did not specify an exact length for short or long courses, we cannot make explicit recommendations about the absolute duration of antibiotic therapy. Second, we included multiple infection types. It is possible that the duration of antibiotics required may differ by infection type. However, there were not sufficient data for subgroup analyses for each infection type. This highlights the need for additional data to guide the treatment of severe infections. Third, not all studies considered antibiotic duration in isolation. One study included a catheter change in the short arm only, which could have favored the short course.33 Three studies used different doses of antibiotics in addition to different durations.35,45,47 Fourth, the quality of included studies was variable, with lack of blinding and inadequate randomization present in most studies.
CONCLUSION
Based on the available literature, shorter courses of antibiotics can be safely utilized in hospitalized adults and adolescents to achieve clinical and microbiologic resolution of common infections, including pneumonia, UTI, and intra-abdominal infection, without adverse effect on infection recurrence. Moreover, short- and longer-term mortality are indistinguishable after treatment courses of differing duration. There are limited data on the longer-term risks associated with antibiotic duration, such as secondary infection or the emergence of MDR organisms.
Acknowledgments
The authors would like to thank their research librarian, Marisa Conte, for her help with the literature search for this review.
Disclosure
Drs. Royer and Prescott designed the study, performed data analysis, and drafted the manuscript. Drs. DeMerle and Dickson revised the manuscript critically for intellectual content. Dr. Royer holds stock in Pfizer. The authors have no other potential financial conflicts of interest to report.
1. Torio CM, Andrews RM. National Inpatient Hospital Costs: The Most Expensive Conditions by Payer, 2011: Statistical Brief #160. Healthcare Cost and Utilization Project (HCUP) Statistical Briefs. Rockville, MD: Agency for Healthcare Research and Quality (US); 2006. www.hcup-us.ahrq.gov/reports/statbriefs/sb160.pdf. Accessed May 1, 2016.
2. Kalil AC, Metersky ML, Klompas M, et al. Management of Adults With Hospital-acquired and Ventilator-associated Pneumonia: 2016 Clinical Practice Guidelines by the Infectious Diseases Society of America and the American Thoracic Society. Clin Infect Dis. 2016;63(5):575-582. PubMed
3. Dimopoulos G, Matthaiou DK, Karageorgopoulos DE, Grammatikos AP, Athanassa Z, Falagas ME. Short- versus long-course antibacterial therapy for community-acquired pneumonia: a meta-analysis. Drugs. 2008;68(13):1841-1854. PubMed
4. Li JZ, Winston LG, Moore DH, Bent S. Efficacy of short-course antibiotic regimens for community-acquired pneumonia: a meta-analysis. Am J Med. 2007;120(9):783-790. PubMed
5. Eliakim-Raz N, Yahav D, Paul M, Leibovici L. Duration of antibiotic treatment for acute pyelonephritis and septic urinary tract infection-- 7 days or less versus longer treatment: systematic review and meta-analysis of randomized controlled trials. J Antimicrob Chemother. 2013;68(10):2183-2191. PubMed
6. Kyriakidou KG, Rafailidis P, Matthaiou DK, Athanasiou S, Falagas ME. Short- versus long-course antibiotic therapy for acute pyelonephritis in adolescents and adults: a meta-analysis of randomized controlled trials. Clin Ther. 2008;30(10):1859-1868. PubMed
7. Spellberg B, Bartlett JG, Gilbert DN. The future of antibiotics and resistance. N Engl J Med. 2013;368(4):299-302. PubMed
8. Spellberg B. The New Antibiotic Mantra-”Shorter Is Better”. JAMA Intern Med. 2016;176(9):1254-1255. PubMed
© 2018 Society of Hospital Medicine
Clinical presentation, diagnosis, and management of typical and atypical bronchopulmonary carcinoid
Carcinoid lung tumors represent the most indolent form of a spectrum of bronchopulmonary neuroendocrine tumors (NETs) that includes small-cell carcinoma as its most malignant member, as well as several other forms of intermediately aggressive tumors, such as atypical carcinoid.1 Carcinoids represent 1.2% of all primary lung malignancies. Their incidence in the United States has increased rapidly over the last 30 years and is currently rising by about 6% a year. Lung carcinoids are more prevalent in whites compared with blacks, and in Asians compared with non-Asians. They are less common in Hispanics compared with non-Hispanics.1 Typical carcinoids represent 80%-90% of all lung carcinoids and occur more frequently in the fifth and sixth decades of life. They can, however, occur at any age, and are the most common lung tumor in childhood.1
Etiology and risk factors
Unlike carcinoma of the lung, no external environmental toxin or other stimulus has been identified as a causative agent for the development of pulmonary carcinoid tumors. It is not clear if there is an association between bronchial NETs and smoking.1 Nearly all bronchial NETs are sporadic; however, they can rarely occur in the setting of multiple endocrine neoplasia type 1.1
Presentation
About 60% of patients with bronchial carcinoids are symptomatic at presentation. The most common clinical findings are those associated with bronchial obstruction, such as persistent cough, hemoptysis, and recurrent or obstructive pneumonitis. Wheezing, chest pain, and dyspnea also may be noted.2 Various endocrine or neuroendocrine syndromes can be initial clinical manifestations of either typical or atypical pulmonary carcinoid tumors, but that is not common. Cushing syndrome (ectopic production and secretion of adrenocorticotropic hormone [ACTH]) may occur in about 2% of lung carcinoids.3 In cases of malignancy, the presence of metastatic disease can produce weight loss, weakness, and a general feeling of ill health.
Diagnostic work-up
Biochemical tests
There is no biochemical study that can be used as a screening test to determine the presence of a carcinoid tumor or to diagnose a known pulmonary mass as a carcinoid tumor. Neuroendocrine cells produce biologically active amines and peptides that can be detected in serum and urine. Although the syndromes associated with lung carcinoids are seen in only about 1%-2% of patients, assays of specific hormones or other circulating neuroendocrine substances, such as ACTH, melanocyte-stimulating hormone, or growth hormone, may establish the existence of a clinically suspected syndrome.
Chest radiography
An abnormal finding on chest radiography is present in about 75% of patients with a pulmonary carcinoid tumor.1 Findings include either the presence of the tumor mass itself or indirect evidence of its presence observed as parenchymal changes associated with bronchial obstruction from the mass.
Computed-tomography imaging
High-resolution computed tomography (CT) is one of the best imaging examinations for evaluation of a pulmonary carcinoid tumor.4 A CT scan provides excellent resolution of tumor extent, location, and the presence or absence of mediastinal adenopathy. It also aids in morphologic characterization of peripheral (Figure 1) and especially centrally located carcinoids, which may be purely intraluminal (polypoid configuration), exclusively extraluminal, or more frequently, a mixture of intraluminal and extraluminal components.
CT scans may also be helpful for differentiating tumor from postobstructive atelectasis or bronchial obstruction-related mucoid impaction. Intravenous contrast in CT imaging can be useful in differentiating malignant from benign lesions. Because carcinoid tumors are highly vascular, they show greater enhancement on contrast CT than do benign lesions. The sensitivity of CT for detecting metastatic hilar or mediastinal nodes is high, but specificity is as low as 45%.4
Typical carcinoid is rarely metastatic, so most patients do not need CT or MRI to evaluate for liver involvement. Liver imaging is appropriate in patients with evidence of mediastinal involvement, a relatively high mitotic rate, or clinical evidence of the carcinoid syndrome.8 To evaluate for metastatic spread to the liver, multiphase contrast-enhanced liver CT scans should be performed with arterial and portal-venous phases, because carcinoid liver metastases are often hypervascular and appear isodense relative to the liver parenchyma after contrast administration.4 MRI is often the preferred modality for evaluating metastatic spread to the liver because of its higher sensitivity.5
Positron-emission tomography
Although carcinoid tumors of the lung are highly vascular, they do not show increased metabolic activity on positron-emission tomography (PET) and would be incorrectly designated as benign lesions on the basis of findings from a PET scan. Fludeoxyglucose F-18 PET has shown utility as a radiologic marker for atypical carcinoids, particularly those with a higher proliferation index (Ki-67 of 10%-20%).6
Radionucleotide studies
Somatostatin receptors (SSTRs) are present in many tumors of neuroendocrine origin, including carcinoid tumors. These receptors interact with each other and undergo dimerization and internalization. The SSTR subtypes overexpressed in NETs are related to the type, origin, and grade of differentiation of the tumor. Overexpression of an SSTR is a characteristic feature of bronchial NETs that can be used to localize the primary tumor and its metastases by imaging with radiolabeled somatostatin analogues. Radionucleotide imaging modalities commonly used include single-photon emission computed tomography and positron-emission tomography.
With regard to SSTR scintigraphy, PET using Ga-DOTATATE/DOTATOC is preferable to Octreoscan if it is available, because it offers better spatial resolution and a shorter scanning time. It has a sensitivity of 97% and a specificity of 92%, and hence is preferable over Octreoscan in highly aggressive, atypical bronchial NETs. It also provides an estimate of receptor density and evidence of the functionality of receptors, which helps with selection of suitable treatments that act on these receptors.7
Tumor markers
Serum levels of chromogranin A in bronchial NETs are expressed at a lower rate than are other sites of carcinoid tumors, so its measurement is of limited utility in following disease activity in bronchial NETs.4,8
Bronchoscopy
About 75% of pulmonary carcinoids are visible on bronchoscopy. The bronchoscopic appearance may be characteristic, but brushings or biopsy should be performed to confirm the diagnosis. For central tumors, endobronchial biopsy, and for peripheral tumors, CT-guided percutaneous biopsy, is the accepted diagnostic approach. Cytologic study of bronchial brushings is more sensitive than sputum cytology, but the diagnostic yield of brushing is low overall (about 40%), and hence fine-needle biopsy is preferred.8
A negative finding on biopsy should not produce a false sense of confidence. If a suspicion of malignancy exists despite a negative finding on transthoracic biopsy, surgical excision of the nodule and pathologic analysis should be undertaken.
Histological findings
In typical carcinoid tumors, cells tend to group in nests, cords, or broad sheets. Arrangement is orderly, with groups of cells separated by highly vascular septa of connective tissue.9 Individual cell features include small and polygonal cells with finely granular eosinophilic cytoplasm (Figure 2). Nuclei are small and round. Mitoses are infrequent (Figure 3).
On electron microscopy, well-formed desmosomes and abundant neurosecretory granules are seen. Many pulmonary carcinoid tumors stain positive for a variety of neuroendocrine markers. Electron microscopy is of historical interest but is no longer used for tissue diagnosis in patients with bronchial carcinoid.
Typical vs atypical tumors
In all, 10% of carcinoid tumors are atypical in nature. They are generally larger than typical carcinoids and are located in the periphery of the lung in about 50% of cases. They have more aggressive behavior and tend to metastasize more commonly.2 Neither location nor size is a distinguishing feature. The distinction is based on histology and includes one or more of the following features:8,9
• Increased mitotic activity, with 2-10 mitotic figures per high-power field, in a tumor with an identifiable carcinoid cellular arrangement.9
• Pleomorphism and irregular nuclei with hyperchromatism and prominent nucleoli.
• Areas of increased cellularity with loss of the regular, organized architecture observed in typical carcinoid.
• Areas of necrosis within the tumor.
The Ki-67 cell proliferation labeling index can be used to distinguish between high-grade lung NETs (>40%) and carcinoids (<20%), particularly in crushed biopsy specimens in which carcinoids may be mistaken for small-cell lung cancers. However, given the overlap in the distribution of the Ki-67 labeling index between typical carcinoids (≤5%) and atypical carcinoids (≤20%), Ki-67 expression does not reliably distinguish between the two well-differentiated lung carcinoid subtypes. The utility of Ki-67 for differentiating typical from atypical carcinoids has yet to be established, and it is not presently recommended.9 Hence, the number of mitotic figures per high-power field of viable tumor area and the presence or absence of necrosis continue to be the salient features distinguishing typical and atypical bronchial NETs.
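The thresholds described above (mitotic count, necrosis, and Ki-67 cutoffs) can be sketched as a simple decision rule. The following is purely an illustration of how the stated cutoffs relate to one another; the function name is invented, and this is in no way a diagnostic tool — real grading rests on full WHO histologic criteria applied to adequate tissue:

```python
def classify_lung_net(mitoses_per_hpf, necrosis, ki67_percent=None):
    """Illustrative sketch of the thresholds quoted in the text.

    NOT a clinical tool. Note that Ki-67 alone cannot separate typical
    from atypical carcinoid, because their Ki-67 ranges overlap; it is
    used here only for the high-grade (>40%) cutoff.
    """
    # Ki-67 >40% suggests a high-grade NET rather than a carcinoid
    # (useful mainly in crushed biopsy specimens).
    if ki67_percent is not None and ki67_percent > 40:
        return "high-grade NET"
    # Atypical carcinoid: 2-10 mitoses per high-power field and/or necrosis.
    if 2 <= mitoses_per_hpf <= 10 or necrosis:
        return "atypical carcinoid"
    # Typical carcinoid: <2 mitoses per high-power field, no necrosis.
    if mitoses_per_hpf < 2:
        return "typical carcinoid"
    # >10 mitoses per high-power field exceeds the carcinoid range.
    return "beyond carcinoid range"

print(classify_lung_net(1, False))         # typical carcinoid
print(classify_lung_net(5, True))          # atypical carcinoid
print(classify_lung_net(3, False, 50.0))   # high-grade NET
```

The rule mirrors the text's emphasis: mitotic count and necrosis drive the typical-versus-atypical distinction, while Ki-67 contributes only at the high-grade extreme.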
Staging
Lung NETs are staged using the same tumor, node, metastasis (TNM) classification from the American Joint Committee on Cancer (AJCC) that is used for bronchogenic lung carcinomas (Table).10
Typical bronchial NETs most commonly present as stage I tumors, whereas more than one-half of atypical tumors are stage II (bronchopulmonary nodal involvement) or III (mediastinal nodal involvement) at presentation.
Treatment
Localized or nonmetastatic and resectable disease
Surgical treatment. As with other non–small-cell lung cancers (NSCLCs), surgical resection is the treatment of choice for early-stage carcinoid. The long-term prognosis is typically excellent, with a 10-year disease-free survival of 77%-94%.11, 12 The extent of resection is determined by the tumor size, histology, and location. For NSCLC, the standard surgical approach is the minimal anatomic resection (lobectomy, sleeve lobectomy, bilobectomy, or pneumonectomy) needed to get microscopically negative margins, with an associated mediastinal and hilar lymph node dissection for staging.13
Given the indolent nature of typical carcinoids, there has been extensive research to evaluate whether a sublobar resection is oncologically appropriate for these tumors. Although there are no comprehensive randomized studies comparing sublobar resection with lobectomy for typical carcinoids, findings from numerous database reviews and single-center studies suggest that sublobar resections are noninferior.14-17 Due to the higher nodal metastatic rate and the overall poorer prognosis associated with atypical carcinoids, formal anatomic resection is still recommended with atypical histology.18
An adaptive approach must be taken for patients who undergo wedge resection of pulmonary lesions without a known diagnosis. If the intraoperative frozen section is consistent with carcinoid and the margins are negative, mediastinal lymph node dissection should be performed. If the patient is node negative, then completion lobectomy is not required. In node-positive patients with adequate pulmonary reserve, lobectomy should be performed regardless of histology. If atypical features are found during pathologic evaluation, then interval completion lobectomy may be offered to patients with adequate pulmonary reserve.19,20
As with other pulmonary malignancies, clinical or radiographic suspicion of mediastinal lymph node involvement requires invasive staging before pulmonary resection is considered. If the patient is proven to have mediastinal metastatic disease, then multimodality treatment should be considered.20
Adjuvant therapy. Postoperative adjuvant therapy for most resected bronchial NETs, even in the setting of positive lymph nodes, is generally not recommended.7 In clinical practice, adjuvant platinum-based chemotherapy with or without radiation therapy (RT) is a reasonable option for patients with histologically aggressive-appearing or poorly differentiated stage III atypical bronchial NETs, although there is only limited evidence to support this. RT is a reasonable option for atypical bronchial NETs if gross residual disease remains after surgery, although it has not been proven that this improves outcomes.7
Nonmetastatic and unresectable disease
For inoperable patients and for those with surgically unresectable but nonmetastatic disease, options for local control of tumor growth include RT with or without concurrent chemotherapy and palliative endobronchial resection of obstructing tumor.21
Metastatic and unresectable disease
Everolimus. In February 2016, everolimus was approved by the US Food and Drug Administration (FDA) as first-line therapy for progressive, well-differentiated, nonfunctional NETs of lung origin that are unresectable, locally advanced, or metastatic. Approval was based on the RADIANT-4 trial, in which median progression-free survival was 11 months in the 205 patients allocated to receive everolimus (10 mg/day) and 3.9 months in the 97 patients who received placebo. Everolimus was associated with a 52% reduction in the estimated risk of progression or death.22
Somatostatin analogues (SSAs). There is a lack of comprehensive data on the role of SSAs compared with everolimus in lung carcinoid. The National Comprehensive Cancer Network (NCCN) guidelines on NETs and SCLCs recommend consideration of octreotide or lanreotide as first-line therapy for select patients with symptoms of carcinoid syndrome or octreotide-positive scans.21 Guidelines from the European Neuroendocrine Tumor Society (ENETS)19 also recommend SSAs as a first-line option in patients with lung carcinoids exhibiting hormone-related symptoms, or with slowly progressive typical or atypical carcinoid with a low proliferative index (preferably Ki-67 <10%), provided there is strongly positive SSTR status.
In cases in which metastatic lung NETs are associated with the carcinoid syndrome, initiation of long-acting SSA therapy in combination with everolimus is recommended.
Cytotoxic chemotherapy. According to the NCCN guidelines, cisplatin-etoposide or other cytotoxic regimens (eg, those that are temozolomide based) are recommended for advanced typical and atypical carcinoids, with cisplatin-etoposide being the preferred first-line systemic regimen in stage IV atypical carcinoid.22 ENETS guidelines stipulate that systemic chemotherapy is generally restricted to atypical carcinoid after failure of first-line therapies and only under certain conditions (Ki-67 >15%, rapidly progressive disease, and SSTR-negative disease).19 Based on a summary of NCCN and ENET guidelines:
n For patients with highly aggressive atypical bronchial NETs, a combination of platinum- and etoposide-based regimens such as those used for small-cell lung cancer has shown better response rate and overall survival data.
n For patients with typical or atypical bronchial NETs, temozolomide can be used as monotherapy or combination with capecitabine, although there are no findings from large randomized controlled trials to support this. Capecitabine-temozolomide has recently shown moderate activity in a small, single-institution study of patients with advanced lung carcinoids (N = 19), with 11 of 17 assessable patients (65%) demonstrating stable disease or partial response.23
n The following regimens can also be used for advanced disease after failure of somastatin analogues, although there are limited data for objective responses:24,25fluorouracil plus dacarbazine; epirubicin, capecitabine plus oxaliplatin; and capecitabine plus liposomal doxorubicin.
Participation in a clinical trial should be encouraged for patients with progressive bronchial NETs during any line of therapy. For patients who have a limited, potentially resectable liver-isolated metastatic NET, surgical resection should be pursued. For more extensive unresectable liver-dominant metastatic disease, treatment options include embolization, radiofrequency ablation, and cryoablation.20,22
Posttreatment surveillance
Posttreatment surveillance after resection of node-positive typical bronchial NETs and for all atypical tumors.26 Patients with lymph-node–negative typical bronchial NETs are very unlikely to benefit from postoperative surveillance because of the very low risk of recurrence. CT imaging (including the thorax and abdomen) every 6 months for 2 years, followed by annual scans for a total of 5-10 years are a reasonable surveillance schedule.
Prognosis 18,27
Typical bronchial NETs have an excellent prognosis after surgical resection. Reported 5-year survival rates are 87%-100%; the corresponding rates at 10 years are 82%-87%. Features associated with negative prognostic significance include lymph-node involvement and incomplete resection.
Atypical bronchial NETs have a worse prognosis than do typical tumors. Five-year survival rates range widely, from 30%-95%; the corresponding rates at 10 years are 35%-56%. Atypical tumors have a greater tendency to metastasize (16%-23%) and recur locally (3%-25%). Distant metastases to the liver or bone are more common than local recurrence. Adverse influence of nodal metastases on prognosis is more profound than for typical tumors. Survival rates by stage for patients who underwent surgical resection (including typical and atypical carcinoid27) are: stage I, 93%; stage II, 85%; stage III, 75%; and stage IV, 57%.
1. Hauso O, Gustafsson BI, Kidd M, et al. Neuroendocrine tumor epidemiology: contrasting Norway and North America. Cancer. 2008;113(10):2655-2664.
2. Fink G, Krelbaum T, Yellin A, et al. Pulmonary carcinoid: presentation, diagnosis, and outcome in 142 cases in Israel and review of 640 cases from the literature. Chest. 2001;119(6):1647-1651.
3. Limper AH, Carpenter PC, Scheithauer B, Staats BA. The Cushing syndrome induced by bronchial carcinoid tumors. Ann Intern Med. 1992;117(3):209-214.
4. Meisinger QC, Klein JS, Butnor KJ, Gentchos G, Leavitt BJ. CT features of peripheral pulmonary carcinoid tumors. AJR Am J Roentgenol. 2011;197(5):1073-1080.
5. Guckel C, Schnabel K, Deimling M, Steinbrich W. Solitary pulmonary nodules: MR evaluation of enhancement patterns with contrast-enhanced dynamic snapshot gradient-echo imaging. Radiology. 1996;200(3):681-686.
6. Jindal T, Kumar A, Venkitaraman B, et al. Evaluation of the role of [18F]FDG-PET/CT and [68Ga]DOTATOC-PET/CT in differentiating typical and atypical pulmonary carcinoids. Cancer Imaging. 2011;11:70-75.
7. Caplin ME, Baudin E, Ferolla P, et al. Pulmonary neuroendocrine (carcinoid) tumors: European Neuroendocrine Tumor Society expert consensus and recommendations for best practice for typical and atypical pulmonary carcinoids. Ann Oncol. 2015;26(8):1604-1620.
8. Travis WD. Pathology and diagnosis of neuroendocrine tumors: lung neuroendocrine. Thorac Surg Clin. 2014;24(3):257-266.
9. Warren WH, Memoli VA, Gould VE. Immunohistochemical and ultrastructural analysis of bronchopulmonary neuroendocrine neoplasms. II. Well-differentiated neuroendocrine carcinomas. Ultrastruct Pathol. 1984;7(2-3):185-199.
10. Goldstraw P, Chansky K, Crowley J, et al. The IASLC Lung Cancer Staging Project: Proposals for revision of the TNM stage groupings in the forthcoming (eighth) edition of the TNM classification for lung cancer. J Thorac Oncol. 2016;11(1):39-51.
11. McCaughan BC, Martini N, Bains MS. Bronchial carcinoids. Review of 124 cases. J Thorac Cardiovasc Surg. 1985;89(1):8-17.
12. Hurt R, Bates M. Carcinoid tumours of the bronchus: a 33-year experience. Thorax. 1984;39(8):617-623.
13. Ettinger DS, Wood DE, Akerley W, et al. Non-small cell lung cancer, version 1.2015. J Natl Compr Cancer Netw. 2014;12(12):1738-1761.
14. Ferguson MK, Landreneau RJ, Hazelrigg SR, et al. Long-term outcome after resection for bronchial carcinoid tumors. Eur J Cardiothorac Surg. 2000;18(2):156-161.
15. Lucchi M, Melfi F, Ribechini A, et al. Sleeve and wedge parenchyma-sparing bronchial resections in low-grade neoplasms of the bronchial airway. J Thorac Cardiovasc Surg. 2007;134(2):373-377.
16. Yendamuri S, Gold D, Jayaprakash V, Dexter E, Nwogu C, Demmy T. Is sublobar resection sufficient for carcinoid tumors? Ann Thorac Surg. 2011;92(5):1774-1778; discussion 8-9.
17. Fox M, Van Berkel V, Bousamra M II, Sloan S, Martin RC II. Surgical management of pulmonary carcinoid tumors: sublobar resection versus lobectomy. Am J Surg. 2013;205(2):200-208.
18. Cardillo G, Sera F, Di Martino M, et al. Bronchial carcinoid tumors: nodal status and long-term survival after resection. Ann Thorac Surg. 2004;77(5):1781-1785.
19. Oberg K, Hellman P, Ferolla P, Papotti M; ESMO Guidelines Working Group. Neuroendocrine bronchial and thymic tumors: ESMO clinical practice guidelines for diagnosis, treatment and follow-up. Ann Oncol. 2012;23(suppl 7):vii120-vii123.
20. Filosso PL, Ferolla P, Guerrera F, et al. Multidisciplinary management of advanced lung neuroendocrine tumors. J Thorac Dis. 2015;7(Suppl 2):S163-171.
21. Kulke MH, Shah MH, Benson AB III, et al. Neuroendocrine tumors, version 1.2015. J Natl Compr Cancer Netw. 2015;13(1):78-108.
22. Yao JC, Fazio N, Singh S, et al. Everolimus for the treatment of advanced, non-functional neuroendocrine tumours of the lung or gastrointestinal tract (RADIANT-4): a randomised, placebo-controlled, phase 3 study. Lancet. 2016;387(10022):968-977.
23. Ramirez RA, Beyer DT, Chauhan A, Boudreaux JP, Wang YZ, Woltering EA. The role of capecitabine/temozolomide in metastatic neuroendocrine tumors. Oncologist. 2016;21(6):671-675.
24. Bajetta E, Rimassa L, Carnaghi C, et al. 5-Fluorouracil, dacarbazine, and epirubicin in the treatment of patients with neuroendocrine tumors. Cancer. 1998;83(2):372-378.
25. Masi G, Fornaro L, Cupini S, et al. Refractory neuroendocrine tumor-response to liposomal doxorubicin and capecitabine. Nat Rev Clin Oncol. 2009;6(11):670-674.
26. Lou F, Sarkaria I, Pietanza C, et al. Recurrence of pulmonary carcinoid tumors after resection: implications for postoperative surveillance. Ann Thorac Surg. 2013;96(4):1156-1162.
27. Beasley MB, Thunnissen FB, Brambilla E, et al. Pulmonary atypical carcinoid: predictors of survival in 106 cases. Hum Pathol. 2000;31(10):1255-1265.
Carcinoid lung tumors represent the most indolent form of a spectrum of bronchopulmonary neuroendocrine tumors (NETs) that includes small-cell carcinoma as its most malignant member, as well as several other forms of intermediately aggressive tumors, such as atypical carcinoid.1 Carcinoids represent 1.2% of all primary lung malignancies. Their incidence in the United States has increased rapidly over the last 30 years and is currently rising at about 6% a year. Lung carcinoids are more prevalent in whites compared with blacks, and in Asians compared with non-Asians. They are less common in Hispanics compared with non-Hispanics.1 Typical carcinoids represent 80%-90% of all lung carcinoids and occur more frequently in the fifth and sixth decades of life. They can, however, occur at any age, and are the most common lung tumor in childhood.1
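A roughly 6% annual rise compounds quickly. As an illustration (assuming, purely for arithmetic's sake, that the currently quoted 6% rate held for the entire 30-year period):

```python
# Compound growth of incidence at ~6% per year over 30 years.
# The constant-rate assumption is illustrative, not from the source.
annual_rate = 0.06
years = 30
fold_increase = (1 + annual_rate) ** years
print(f"~{fold_increase:.1f}-fold increase over {years} years")  # ~5.7-fold
```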
Etiology and risk factors
Unlike carcinoma of the lung, no external environmental toxin or other stimulus has been identified as a causative agent for the development of pulmonary carcinoid tumors. It is not clear if there is an association between bronchial NETs and smoking.1 Nearly all bronchial NETs are sporadic; however, they can rarely occur in the setting of multiple endocrine neoplasia type 1.1
Presentation
About 60% of the patients with bronchial carcinoids are symptomatic at presentation. The most common clinical findings are those associated with bronchial obstruction, such as persistent cough, hemoptysis, and recurrent or obstructive pneumonitis. Wheezing, chest pain, and dyspnea also may be noted.2 Various endocrine or neuroendocrine syndromes can be initial clinical manifestations of either typical or atypical pulmonary carcinoid tumors, but this is not common. Cushing syndrome (ectopic production and secretion of adrenocorticotropic hormone [ACTH]) may occur in about 2% of lung carcinoids.3 In cases of malignancy, the presence of metastatic disease can produce weight loss, weakness, and a general feeling of ill health.
Diagnostic work-up
Biochemical test
There is no biochemical study that can be used as a screening test to determine the presence of a carcinoid tumor or to diagnose a known pulmonary mass as a carcinoid tumor. Neuroendocrine cells produce biologically active amines and peptides that can be detected in serum and urine. Although the syndromes associated with lung carcinoids are seen in about 1%-2% of the patients, assays of specific hormones or other circulating neuroendocrine substances, such as ACTH, melanocyte-stimulating hormone, or growth hormone, may establish the existence of a clinically suspected syndrome.
Chest radiography
An abnormal finding on chest radiography is present in about 75% of patients with a pulmonary carcinoid tumor.1 Findings include either the presence of the tumor mass itself or indirect evidence of its presence observed as parenchymal changes associated with bronchial obstruction from the mass.
Computed-tomography imaging
High-resolution computed-tomography (CT) imaging is one of the best types of CT examination for evaluation of a pulmonary carcinoid tumor.4 A CT scan provides excellent resolution of tumor extent, location, and the presence or absence of mediastinal adenopathy. It also aids in morphologic characterization of peripheral (Figure 1) and especially centrally located carcinoids, which may be purely intraluminal (polypoid configuration), exclusively extraluminal, or more frequently, a mixture of intraluminal and extraluminal components.
CT scans may also be helpful for differentiating tumor from postobstructive atelectasis or bronchial obstruction-related mucoid impaction. Intravenous contrast in CT imaging can be useful in differentiating malignant from benign lesions. Because carcinoid tumors are highly vascular, they show greater enhancement on contrast CT than do benign lesions. The sensitivity of CT for detecting metastatic hilar or mediastinal nodes is high, but specificity is as low as 45%.4
Typical carcinoid is rarely metastatic, so most patients do not need CT or MRI imaging to evaluate for liver involvement. Liver imaging is appropriate in patients with evidence of mediastinal involvement, a relatively high mitotic rate, or clinical evidence of the carcinoid syndrome.8 To evaluate for metastatic spread to the liver, multiphase contrast-enhanced liver CT scans should be performed with arterial and portal-venous phases because carcinoid liver metastases are often hypervascular and appear isodense relative to the liver parenchyma after contrast administration.4 MRI is often the preferred modality to evaluate for metastatic spread to the liver because of its higher sensitivity.5
Positron-emission tomography
Although carcinoid tumors of the lung are highly vascular, they do not show increased metabolic activity on positron-emission tomography (PET) and may be incorrectly designated as benign lesions on the basis of PET findings. Fludeoxyglucose F-18 PET has shown utility as a radiologic marker for atypical carcinoids, particularly for those with a higher proliferation index (Ki-67 index of 10%-20%).6
Radionuclide studies
Somatostatin receptors (SSTRs) are present in many tumors of neuroendocrine origin, including carcinoid tumors. These receptors interact with each other and undergo dimerization and internalization. The SSTR subtypes overexpressed in NETs are related to the type, origin, and grade of differentiation of the tumor. Overexpression of an SSTR is a characteristic feature of bronchial NETs and can be used to localize the primary tumor and its metastases by imaging with radiolabeled somatostatin analogues. Commonly used radionuclide imaging modalities include single-photon–emission tomography and positron-emission tomography.
With regard to SSTR scintigraphy, PET using 68Ga-DOTATATE/TOC is preferable to Octreoscan if it is available, because it offers better spatial resolution and a shorter scanning time. It has a sensitivity of 97% and a specificity of 92%, and hence is preferable to Octreoscan in highly aggressive, atypical bronchial NETs. It also provides an estimate of receptor density and evidence of the functionality of the receptors, which helps with selection of suitable treatments that act on these receptors.7
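Sensitivity and specificity determine post-test probabilities only once a pretest probability is fixed. A minimal sketch using the 97%/92% figures quoted above, with an assumed (purely illustrative) 30% pretest probability:

```python
# Post-test probabilities from sensitivity/specificity via Bayes' rule.
# Sensitivity 97% and specificity 92% are from the text; the 30% pretest
# probability below is an assumption chosen only for illustration.

def post_test_probabilities(sensitivity, specificity, pretest):
    """Return (positive predictive value, negative predictive value)."""
    tp = sensitivity * pretest                 # true positives
    fp = (1 - specificity) * (1 - pretest)     # false positives
    fn = (1 - sensitivity) * pretest           # false negatives
    tn = specificity * (1 - pretest)           # true negatives
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return ppv, npv

ppv, npv = post_test_probabilities(0.97, 0.92, pretest=0.30)
print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")  # PPV = 0.84, NPV = 0.99
```

At lower pretest probabilities the PPV falls considerably, which is why even a highly specific scan is interpreted in clinical context.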
Tumor markers
Chromogranin A is expressed at a lower rate in bronchial NETs than in carcinoid tumors at other sites, so measurement of its serum level is of limited utility in following disease activity in bronchial NETs.4,8
Bronchoscopy
About 75% of pulmonary carcinoids are visible on bronchoscopy. The bronchoscopic appearance may be characteristic, but it is preferable that brushings or biopsy be performed to confirm the diagnosis. For central tumors, endobronchial biopsy is the accepted diagnostic approach; for peripheral tumors, it is CT-guided percutaneous biopsy. Cytologic study of bronchial brushings is more sensitive than sputum cytology, but the diagnostic yield of brushing is low overall (about 40%), and hence fine-needle biopsy is preferred.8
A negative finding on biopsy should not produce a false sense of confidence. If a suspicion of malignancy exists despite a negative finding on transthoracic biopsy, surgical excision of the nodule and pathologic analysis should be undertaken.
Histological findings
In typical carcinoid tumors, cells tend to group in nests, cords, or broad sheets. Arrangement is orderly, with groups of cells separated by highly vascular septa of connective tissue.9 Individual cell features include small and polygonal cells with finely granular eosinophilic cytoplasm (Figure 2). Nuclei are small and round. Mitoses are infrequent (Figure 3).
On electron microscopy, well-formed desmosomes and abundant neurosecretory granules are seen. Many pulmonary carcinoid tumors stain positive for a variety of neuroendocrine markers. Electron microscopy is of historical interest but is not used for tissue diagnosis for bronchial carcinoid patients.
Typical vs atypical tumors
In all, 10% of carcinoid tumors are atypical in nature. They are generally larger than typical carcinoids and are located in the periphery of the lung in about 50% of cases. They behave more aggressively and tend to metastasize more commonly.2 Neither location nor size is a distinguishing feature. The distinction is based on histology and includes one or more of the following features:8,9
• Increased mitotic activity in a tumor with an identifiable carcinoid cellular arrangement, with 2-10 mitotic figures per high-power field.9
• Pleomorphism and irregular nuclei with hyperchromatism and prominent nucleoli.
• Areas of increased cellularity with loss of the regular, organized architecture observed in typical carcinoid.
• Areas of necrosis within the tumor.
The Ki-67 cell proliferation labeling index can be used to distinguish high-grade lung NETs (>40%) from carcinoids (<20%), particularly in crushed biopsy specimens in which carcinoids may be mistaken for small-cell lung cancers. However, given the overlap in the distribution of the Ki-67 labeling index between typical carcinoids (≤5%) and atypical carcinoids (≤20%), Ki-67 expression does not reliably distinguish between the two well-differentiated lung carcinoids. The utility of Ki-67 for differentiating typical from atypical carcinoids has yet to be established, and it is not presently recommended.9 Hence, the number of mitotic figures per high-power field of viable tumor area and the presence or absence of necrosis continue to be the salient features distinguishing typical and atypical bronchial NETs.
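The mitotic-count and necrosis criteria above amount to a simple decision rule. The sketch below encodes the thresholds exactly as stated in the text; it is an illustration of the logic, not a validated diagnostic tool, and the function name is invented:

```python
# Illustrative encoding of the histologic rule described in the text:
# mitotic count and necrosis distinguish typical from atypical carcinoid;
# counts above the carcinoid range point to a high-grade NET.

def classify_bronchial_net(mitoses_per_hpf: int, necrosis: bool) -> str:
    if mitoses_per_hpf > 10:
        # Beyond the 2-10 range given for atypical carcinoid
        return "high-grade NET (outside the carcinoid spectrum)"
    if 2 <= mitoses_per_hpf <= 10 or necrosis:
        return "atypical carcinoid"
    return "typical carcinoid"

print(classify_bronchial_net(0, necrosis=False))  # typical carcinoid
print(classify_bronchial_net(4, necrosis=True))   # atypical carcinoid
```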
Staging10
Lung NETs are staged using the same tumor, node, metastasis (TNM) classification from the American Joint Committee on Cancer (AJCC) that is used for bronchogenic lung carcinomas (Table).
Typical bronchial NETs most commonly present as stage I tumors, whereas more than one-half of atypical tumors are stage II (bronchopulmonary nodal involvement) or III (mediastinal nodal involvement) at presentation.
Treatment
Localized or nonmetastatic and resectable disease
Surgical treatment. As with other non–small-cell lung cancers (NSCLCs), surgical resection is the treatment of choice for early-stage carcinoid. The long-term prognosis is typically excellent, with a 10-year disease-free survival of 77%-94%.11,12 The extent of resection is determined by the tumor size, histology, and location. For NSCLC, the standard surgical approach is the minimal anatomic resection (lobectomy, sleeve lobectomy, bilobectomy, or pneumonectomy) needed to obtain microscopically negative margins, with an associated mediastinal and hilar lymph node dissection for staging.13
Given the indolent nature of typical carcinoids, there has been extensive research to evaluate whether a sublobar resection is oncologically appropriate for these tumors. Although there are no comprehensive randomized studies comparing sublobar resection with lobectomy for typical carcinoids, findings from numerous database reviews and single-center studies suggest that sublobar resections are noninferior.14-17 Due to the higher nodal metastatic rate and the overall poorer prognosis associated with atypical carcinoids, formal anatomic resection is still recommended with atypical histology.18
An adaptive approach must be taken for patients who undergo wedge resection of pulmonary lesions without a known diagnosis. If an intraoperative frozen section is consistent with carcinoid and the margins are negative, mediastinal lymph node dissection should be performed. If the patient is node negative, then completion lobectomy is not required. In node-positive patients with adequate pulmonary reserve, lobectomy should be performed regardless of histology. If atypical features are found during pathologic evaluation, then interval completion lobectomy may be considered in patients with adequate pulmonary reserve.19,20
As with other pulmonary malignancies, clinical or radiographic suspicion of mediastinal lymph node involvement requires invasive staging before pulmonary resection is considered. If the patient is proven to have mediastinal metastatic disease, then multimodality treatment should be considered.20
Adjuvant therapy. Postoperative adjuvant therapy for most resected bronchial NETs, even in the setting of positive lymph nodes, is generally not recommended.7 In clinical practice, adjuvant platinum-based chemotherapy with or without radiation therapy (RT) is a reasonable option for patients with histologically aggressive-appearing or poorly differentiated stage III atypical bronchial NETs, although there is only limited evidence to support this. RT is a reasonable option for atypical bronchial NETs if gross residual disease remains after surgery, although it has not been proven that this improves outcomes.7
Nonmetastatic and unresectable disease
For inoperable patients and for those with surgically unresectable but nonmetastatic disease, options for local control of tumor growth include RT with or without concurrent chemotherapy and palliative endobronchial resection of obstructing tumor.21
Metastatic and unresectable disease
Everolimus. In February 2016, everolimus was approved by the US Food and Drug Administration (FDA) as first-line therapy for progressive, well-differentiated, nonfunctional NETs of lung origin that are unresectable, locally advanced, or metastatic. Approval was based on the RADIANT-4 trial, in which median progression-free survival was 11 months in the 205 patients allocated to receive everolimus (10 mg/day) and 3.9 months in the 97 patients who received placebo. Everolimus was associated with a 52% reduction in the estimated risk of progression or death.22
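As a quick consistency check on the RADIANT-4 numbers: if progression-free survival were exponentially distributed (a simplifying assumption; the trial's risk reduction came from the full survival analysis, not from the medians), the hazard ratio would equal the ratio of the median PFS times:

```python
import math

# Back-of-the-envelope hazard ratio from the medians quoted above,
# under an assumed exponential PFS distribution.
median_placebo = 3.9     # months
median_everolimus = 11   # months

# For an exponential distribution, hazard = ln(2) / median,
# so the ln(2) factors cancel in the ratio.
implied_hr = (math.log(2) / median_everolimus) / (math.log(2) / median_placebo)
print(f"implied HR = {implied_hr:.2f}")  # implied HR = 0.35
```

The gap between this implied ~0.35 and the ~0.48 suggested by the reported 52% risk reduction simply reflects that real PFS curves are not exponential.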
Somatostatin analogues (SSAs). There is a lack of comprehensive data on the role of SSAs compared with everolimus in lung carcinoid. The National Comprehensive Cancer Network (NCCN) guidelines on NETs and SCLCs recommend consideration of octreotide or lanreotide as first-line therapy for select patients with symptoms of carcinoid syndrome or octreotide-positive scans.21 Guidelines from the European Neuroendocrine Tumor Society (ENETS)19 also recommend the use of SSAs as a first-line option in patients with lung carcinoids exhibiting hormone-related symptoms, or with slowly progressive typical or atypical carcinoid with a low proliferative index (preferably Ki-67 <10%), provided there is strongly positive SSTR status.
In cases in which metastatic lung NETs are associated with the carcinoid syndrome, initiation of long-acting SSA therapy in combination with everolimus is recommended.
Cytotoxic chemotherapy. According to the NCCN guidelines, cisplatin-etoposide or other cytotoxic regimens (eg, those that are temozolomide based) are recommended for advanced typical and atypical carcinoids, with cisplatin-etoposide being the preferred first-line systemic regimen in stage IV atypical carcinoid.22 ENETS guidelines stipulate that systemic chemotherapy is generally restricted to atypical carcinoid after failure of first-line therapies and only under certain conditions (Ki-67 >15%, rapidly progressive disease, and SSTR-negative disease).19 Based on a summary of the NCCN and ENETS guidelines:
• For patients with highly aggressive atypical bronchial NETs, a combination of platinum- and etoposide-based regimens such as those used for small-cell lung cancer has shown better response rate and overall survival data.
• For patients with typical or atypical bronchial NETs, temozolomide can be used as monotherapy or in combination with capecitabine, although there are no findings from large randomized controlled trials to support this. Capecitabine-temozolomide has recently shown moderate activity in a small, single-institution study of patients with advanced lung carcinoids (N = 19), with 11 of 17 assessable patients (65%) demonstrating stable disease or partial response.23
• The following regimens can also be used for advanced disease after failure of somatostatin analogues, although there are limited data for objective responses:24,25 fluorouracil, dacarbazine, and epirubicin; capecitabine plus oxaliplatin; and capecitabine plus liposomal doxorubicin.
Participation in a clinical trial should be encouraged for patients with progressive bronchial NETs during any line of therapy. For patients who have a limited, potentially resectable liver-isolated metastatic NET, surgical resection should be pursued. For more extensive unresectable liver-dominant metastatic disease, treatment options include embolization, radiofrequency ablation, and cryoablation.20,22
Posttreatment surveillance
Posttreatment surveillance is recommended after resection of node-positive typical bronchial NETs and for all atypical tumors.26 Patients with lymph-node–negative typical bronchial NETs are very unlikely to benefit from postoperative surveillance because of the very low risk of recurrence. CT imaging (including the thorax and abdomen) every 6 months for 2 years, followed by annual scans for a total of 5-10 years, is a reasonable surveillance schedule.
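The schedule described above can be written out explicitly. A minimal sketch (the 5-year horizon used in the example is the lower end of the 5-10-year range, and the function name is illustrative):

```python
# Scan timeline per the surveillance schedule in the text:
# CT every 6 months for 2 years, then annually to a chosen horizon.

def surveillance_months(total_years: int) -> list[int]:
    months = list(range(6, 25, 6))                         # months 6, 12, 18, 24
    months += [y * 12 for y in range(3, total_years + 1)]  # annual scans thereafter
    return months

print(surveillance_months(5))  # [6, 12, 18, 24, 36, 48, 60]
```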
Prognosis18,27
Typical bronchial NETs have an excellent prognosis after surgical resection. Reported 5-year survival rates are 87%-100%; the corresponding rates at 10 years are 82%-87%. Features associated with negative prognostic significance include lymph-node involvement and incomplete resection.
Atypical bronchial NETs have a worse prognosis than do typical tumors. Five-year survival rates range widely, from 30%-95%; the corresponding rates at 10 years are 35%-56%. Atypical tumors have a greater tendency to metastasize (16%-23%) and recur locally (3%-25%). Distant metastases to the liver or bone are more common than local recurrence. The adverse influence of nodal metastases on prognosis is more profound than it is for typical tumors. Survival rates by stage for patients who underwent surgical resection (including typical and atypical carcinoid27) are: stage I, 93%; stage II, 85%; stage III, 75%; and stage IV, 57%.
Carcinoid lung tumors represent the most indolent form of a spectrum of bronchopulmonary neuroendocrine tumors (NETs) that includes small-cell carcinoma as its most malignant member, as well as several other forms of intermediately aggressive tumors, such as atypical carcinoid.1 Carcinoids represent 1.2% of all primary lung malignancies. Their incidence in the United States has increased rapidly over the last 30 years and is currently about 6% a year. Lung carcinoids are more prevalent in whites compared with blacks, and in Asians compared with non-Asians. They are less common in Hispanics compared with non-Hispanics.1 Typical carcinoids represent 80%-90% of all lung carcinoids and occur more frequently in the fifth and sixth decades of life. They can, however, occur at any age, and are the most common lung tumor in childhood.1
Etiology and risk factors
Unlike carcinoma of the lung, no external environmental toxin or other stimulus has been identified as a causative agent for the development of pulmonary carcinoid tumors. It is not clear if there is an association between bronchial NETs and smoking.1 Nearly all bronchial NETs are sporadic; however, they can rarely occur in the setting of multiple endocrine neoplasia type 1.1
Presentation
About 60% of the patients with bronchial carcinoids are symptomatic at presentation. The most common clinical findings are those associated with bronchial obstruction, such as persistent cough, hemoptysis, and recurrent or obstructive pneumonitis. Wheezing, chest pain, and dyspnea also may be noted.2 Various endocrine or neuroendocrine syndromes can be initial clinical manifestations of either typical or atypical pulmonary carcinoid tumors, but that is not common Cushing syndrome (ectopic production and secretion of adrenocorticotropic hormone [ACTH]) may occur in about 2% of lung carcinoid.3 In cases of malignancy, the presence of metastatic disease can produce weight loss, weakness, and a general feeling of ill health.
Diagnostic work-up
Biochemical test
There is no biochemical study that can be used as a screening test to determine the presence of a carcinoid tumor or to diagnose a known pulmonary mass as a carcinoid tumor. Neuroendocrine cells produce biologically active amines and peptides that can be detected in serum and urine. Although the syndromes associated with lung carcinoids are seen in about 1%-2% of the patients, assays of specific hormones or other circulating neuroendocrine substances, such as ACTH, melanocyte-stimulating hormone, or growth hormone may establish the existence of a clinically suspected syndrome.
Chest radiography
An abnormal finding on chest radiography is present in about 75% of patients with a pulmonary carcinoid tumor.1 Findings include either the presence of the tumor mass itself or indirect evidence of its presence observed as parenchymal changes associated with bronchial obstruction from the mass.
Computed-tomography imaging
High-resolution computed-tomography (CT) imaging is the one of the best types of CT examination for evaluation of a pulmonary carcinoid tumor.4 A CT scan provides excellent resolution of tumor extent, location, and the presence or absence of mediastinal adenopathy. It also aids in morphologic characterization of peripheral (Figure 1) and especially centrally located carcinoids, which may be purely intraluminal (polypoid configuration), exclusively extra luminal, or more frequently, a mixture of intraluminal and extraluminal components.
CT scans may also be helpful for differentiating tumor from postobstructive atelectasis or bronchial obstruction-related mucoid impaction. Intravenous contrast in CT imaging can be useful in differentiating malignant from benign lesions. Because carcinoid tumors are highly vascular, they show greater enhancement on contrast CT than do benign lesions. The sensitivity of CT for detecting metastatic hilar or mediastinal nodes is high, but specificity is as low as 45%.4
Typical carcinoid is rarely metastatic so most patients do not need CT or MRI imaging to evaluate for liver involvement. Liver imaging is appropriate in patients with evidence of mediastinal involvement, relatively high mitotic rate, or clinical evidence of the carcinoid syndrome.8 To evaluate for metastatic spread to the liver, multiphase contrast-enhanced liver CT scans should be performed with arterial and portal-venous phases because carcinoid liver metastases are often hypervascular and appear isodense relative to the liver parenchyma after contrast administration.4 An MRI is often preferred the modality to evaluate for metastatic spread to the liver because of its higher sensitivity.5
Positron-emission tomography
Although carcinoid tumors of the lung are highly vascular, they do not show increased metabolic activity on positron-emission tomography (PET) and would be incorrectly designated as benign lesions on the basis of findings from a PET scan. Fludeoxyglucose F-18 PET has shown utility as a radiologic marker for atypical carcinoids, particularly for those with a higher proliferation index with Ki-67 index of 10%-20%.6
Radionucleotide studies
Somatostatin receptors (SSRs) are present in many tumors of neuroendocrine origin, including carcinoid tumors. These receptors interact with each other and undergo dimerization and internalization. SSTR subtypes (SSTRs) overexpressed in NETs are related to the type, origin, and grade of differentiation of tumor. The overexpression of an SSTR is a characteristic feature of bronchial NETs, which can be used to localize the primary tumor and its metastases by imaging with the radiolabeled SST analogues. Radionucleotide imaging modalities commonly used include single-photon–emission tomography and positron-emission tomography.
With regard to SSTR scintigraphy, PET using 68Ga-DOTATATE/DOTATOC is preferable to Octreoscan if it is available, because it offers better spatial resolution and a shorter scanning time. It has a sensitivity of 97% and a specificity of 92%, and hence is preferable to Octreoscan in highly aggressive, atypical bronchial NETs. It also provides an estimate of receptor density and evidence of receptor functionality, which helps with selection of suitable treatments that act on these receptors.7
Tumor markers
Chromogranin A is expressed at lower levels in bronchial NETs than in carcinoid tumors at other sites, so measuring its serum level is of limited utility in following disease activity in bronchial NETs.4,8
Bronchoscopy
About 75% of pulmonary carcinoids are visible on bronchoscopy. The bronchoscopic appearance may be characteristic, but it is preferable to confirm the diagnosis with brushings or biopsy. For central tumors, endobronchial biopsy is the accepted diagnostic approach; for peripheral tumors, CT-guided percutaneous biopsy. Cytologic study of bronchial brushings is more sensitive than sputum cytology, but the diagnostic yield of brushing is low overall (about 40%), and hence fine-needle biopsy is preferred.8
A negative finding on biopsy should not produce a false sense of confidence. If a suspicion of malignancy exists despite a negative finding on transthoracic biopsy, surgical excision of the nodule and pathologic analysis should be undertaken.
Histological findings
In typical carcinoid tumors, cells tend to group in nests, cords, or broad sheets. The arrangement is orderly, with groups of cells separated by highly vascular septa of connective tissue.9 Individual cell features include small, polygonal cells with finely granular eosinophilic cytoplasm (Figure 2). Nuclei are small and round. Mitoses are infrequent (Figure 3).
On electron microscopy, well-formed desmosomes and abundant neurosecretory granules are seen. Many pulmonary carcinoid tumors stain positive for a variety of neuroendocrine markers. Electron microscopy is of historical interest but is not used for tissue diagnosis in patients with bronchial carcinoid.
Typical vs atypical tumors
About 10% of carcinoid tumors are atypical. They are generally larger than typical carcinoids, are located in the periphery of the lung in about 50% of cases, behave more aggressively, and tend to metastasize more commonly.2 However, neither location nor size is a distinguishing feature. The distinction is based on histology and includes one or more of the following features:8,9
- Increased mitotic activity in a tumor with an identifiable carcinoid cellular arrangement, with 2-10 mitotic figures per 10 high-power fields.9
- Pleomorphism and irregular nuclei with hyperchromatism and prominent nucleoli.
- Areas of increased cellularity with loss of the regular, organized architecture observed in typical carcinoid.
- Areas of necrosis within the tumor.
The Ki-67 cell proliferation labeling index can be used to distinguish high-grade lung NETs (>40%) from carcinoids (<20%), particularly in crushed biopsy specimens in which carcinoids may be mistaken for small-cell lung cancers. However, given the overlap in the distribution of the Ki-67 labeling index between typical carcinoids (≤5%) and atypical carcinoids (≤20%), Ki-67 expression does not reliably distinguish between these two well-differentiated lung carcinoids. The utility of Ki-67 in differentiating typical from atypical carcinoids has yet to be established, and its use for this purpose is not presently recommended.9 Hence, the number of mitotic figures per 10 high-power fields of viable tumor area and the presence or absence of necrosis remain the salient features distinguishing typical from atypical bronchial NETs.
Staging10
Lung NETs are staged using the same tumor, node, metastasis (TNM) classification from the American Joint Committee on Cancer (AJCC) that is used for bronchogenic lung carcinomas (Table).
Typical bronchial NETs most commonly present as stage I tumors, whereas more than one-half of atypical tumors are stage II (bronchopulmonary nodal involvement) or III (mediastinal nodal involvement) at presentation.
Treatment
Localized or nonmetastatic and resectable disease
Surgical treatment. As with other non–small-cell lung cancers (NSCLCs), surgical resection is the treatment of choice for early-stage carcinoid. The long-term prognosis is typically excellent, with a 10-year disease-free survival of 77%-94%.11,12 The extent of resection is determined by the tumor size, histology, and location. For NSCLC, the standard surgical approach is the minimal anatomic resection (lobectomy, sleeve lobectomy, bilobectomy, or pneumonectomy) needed to get microscopically negative margins, with an associated mediastinal and hilar lymph node dissection for staging.13
Given the indolent nature of typical carcinoids, there has been extensive research to evaluate whether a sublobar resection is oncologically appropriate for these tumors. Although there are no comprehensive randomized studies comparing sublobar resection with lobectomy for typical carcinoids, findings from numerous database reviews and single-center studies suggest that sublobar resections are noninferior.14-17 Due to the higher nodal metastatic rate and the overall poorer prognosis associated with atypical carcinoids, formal anatomic resection is still recommended with atypical histology.18
An adaptive approach must be taken for patients who undergo wedge resection of pulmonary lesions without a known diagnosis. If an intraoperative frozen section is consistent with carcinoid and the margins are negative, mediastinal lymph node dissection should be performed. If the patient is node-negative, then completion lobectomy is not required. In node-positive patients with adequate pulmonary reserve, lobectomy should be performed regardless of histology. If atypical features are found during pathologic evaluation, then interval completion lobectomy may be considered in patients with adequate pulmonary reserve.19,20
As with other pulmonary malignancies, clinical or radiographic suspicion of mediastinal lymph node involvement requires invasive staging before pulmonary resection is considered. If the patient is proven to have mediastinal metastatic disease, then multimodality treatment should be considered.20
Adjuvant therapy. Postoperative adjuvant therapy for most resected bronchial NETs, even in the setting of positive lymph nodes, is generally not recommended.7 In clinical practice, adjuvant platinum-based chemotherapy with or without radiation therapy (RT) is a reasonable option for patients with histologically aggressive-appearing or poorly differentiated stage III atypical bronchial NETs, although there is only limited evidence to support this. RT is a reasonable option for atypical bronchial NETs if gross residual disease remains after surgery, although it has not been proven that this improves outcomes.7
Nonmetastatic and unresectable disease
For inoperable patients and for those with surgically unresectable but nonmetastatic disease, options for local control of tumor growth include RT with or without concurrent chemotherapy and palliative endobronchial resection of obstructing tumor.21
Metastatic and unresectable disease
Everolimus. In February 2016, everolimus was approved by the US Food and Drug Administration (FDA) as first-line therapy for progressive, well-differentiated, nonfunctional NETs of lung origin that are unresectable, locally advanced, or metastatic. Approval was based on the RADIANT-4 trial, in which median progression-free survival was 11 months in the 205 patients allocated to receive everolimus (10 mg/day) and 3.9 months in the 97 patients who received placebo. Everolimus was associated with a 52% reduction in the estimated risk of progression or death.22
Somatostatin analogues (SSAs). There is a lack of comprehensive data on the role of SSAs compared with everolimus in lung carcinoid. The National Comprehensive Cancer Network (NCCN) guidelines on NETs and SCLCs recommend considering octreotide or lanreotide as first-line therapy for select patients with symptoms of carcinoid syndrome or octreotide-positive scans.21 Guidelines from the European Neuroendocrine Tumor Society (ENETS)19 also recommend SSAs as a first-line option in patients with lung carcinoids exhibiting hormone-related symptoms, or with slowly progressive typical or atypical carcinoid with a low proliferative index (preferably Ki-67 <10%), provided there is strongly positive SSTR status.
In cases in which metastatic lung NETs are associated with the carcinoid syndrome, initiation of long-acting SSA therapy in combination with everolimus is recommended.
Cytotoxic chemotherapy. According to the NCCN guidelines, cisplatin-etoposide or other cytotoxic regimens (eg, those that are temozolomide based) are recommended for advanced typical and atypical carcinoids, with cisplatin-etoposide the preferred first-line systemic regimen in stage IV atypical carcinoid.21 ENETS guidelines stipulate that systemic chemotherapy is generally restricted to atypical carcinoid after failure of first-line therapies and only under certain conditions (Ki-67 >15%, rapidly progressive disease, and SSTR-negative disease).19 Based on a summary of the NCCN and ENETS guidelines:
- For patients with highly aggressive atypical bronchial NETs, a combination of platinum- and etoposide-based regimens, such as those used for small-cell lung cancer, has shown better response rates and overall survival.
- For patients with typical or atypical bronchial NETs, temozolomide can be used as monotherapy or in combination with capecitabine, although there are no findings from large randomized controlled trials to support this. Capecitabine-temozolomide recently showed moderate activity in a small, single-institution study of patients with advanced lung carcinoids (N = 19), with 11 of 17 assessable patients (65%) demonstrating stable disease or partial response.23
- The following regimens can also be used for advanced disease after failure of somatostatin analogues, although there are limited data for objective responses:24,25 fluorouracil, dacarbazine, and epirubicin; capecitabine plus oxaliplatin; and capecitabine plus liposomal doxorubicin.
Participation in a clinical trial should be encouraged for patients with progressive bronchial NETs during any line of therapy. For patients who have a limited, potentially resectable liver-isolated metastatic NET, surgical resection should be pursued. For more extensive unresectable liver-dominant metastatic disease, treatment options include embolization, radiofrequency ablation, and cryoablation.20,22
Posttreatment surveillance
Posttreatment surveillance is recommended after resection of node-positive typical bronchial NETs and for all atypical tumors.26 Patients with lymph-node–negative typical bronchial NETs are very unlikely to benefit from postoperative surveillance because of the very low risk of recurrence. CT imaging (including the thorax and abdomen) every 6 months for 2 years, followed by annual scans for a total of 5-10 years, is a reasonable surveillance schedule.
Prognosis18,27
Typical bronchial NETs have an excellent prognosis after surgical resection. Reported 5-year survival rates are 87%-100%; the corresponding rates at 10 years are 82%-87%. Features associated with negative prognostic significance include lymph-node involvement and incomplete resection.
Atypical bronchial NETs have a worse prognosis than typical tumors. Five-year survival rates range widely, from 30% to 95%; the corresponding rates at 10 years are 35%-56%. Atypical tumors have a greater tendency to metastasize (16%-23%) and to recur locally (3%-25%). Distant metastases to the liver or bone are more common than local recurrence. The adverse influence of nodal metastases on prognosis is more profound than in typical tumors. Survival rates by stage for patients who underwent surgical resection (including typical and atypical carcinoid27) are: stage I, 93%; stage II, 85%; stage III, 75%; and stage IV, 57%.
1. Hauso O, Gustafsson BI, Kidd M, et al. Neuroendocrine tumor epidemiology: contrasting Norway and North America. Cancer. 2008;113(10):2655-2664.
2. Fink G, Krelbaum T, Yellin A, et al. Pulmonary carcinoid: presentation, diagnosis, and outcome in 142 cases in Israel and review of 640 cases from the literature. Chest. 2001;119(6):1647-1651.
3. Limper AH, Carpenter PC, Scheithauer B, Staats BA. The Cushing syndrome induced by bronchial carcinoid tumors. Ann Intern Med. 1992;117(3):209-214.
4. Meisinger QC, Klein JS, Butnor KJ, Gentchos G, Leavitt BJ. CT features of peripheral pulmonary carcinoid tumors. AJR Am J Roentgenol. 2011;197(5):1073-1080.
5. Guckel C, Schnabel K, Deimling M, Steinbrich W. Solitary pulmonary nodules: MR evaluation of enhancement patterns with contrast-enhanced dynamic snapshot gradient-echo imaging. Radiology. 1996;200(3):681-686.
6. Jindal T, Kumar A, Venkitaraman B, et al. Evaluation of the role of [18F]FDG-PET/CT and [68Ga]DOTATOC-PET/CT in differentiating typical and atypical pulmonary carcinoids. Cancer Imaging. 2011;11:70-75.
7. Caplin ME, Baudin E, Ferolla P, et al. Pulmonary neuroendocrine (carcinoid) tumors: European Neuroendocrine Tumor Society expert consensus and recommendations for best practice for typical and atypical pulmonary carcinoids. Ann Oncol. 2015;26(8):1604-1620.
8. Travis WD. Pathology and diagnosis of neuroendocrine tumors: lung neuroendocrine. Thorac Surg Clin. 2014;24(3):257-266.
9. Warren WH, Memoli VA, Gould VE. Immunohistochemical and ultrastructural analysis of bronchopulmonary neuroendocrine neoplasms. II. Well-differentiated neuroendocrine carcinomas. Ultrastruct Pathol. 1984;7(2-3):185-199.
10. Goldstraw P, Chansky K, Crowley J, et al. The IASLC Lung Cancer Staging Project: Proposals for revision of the TNM stage groupings in the forthcoming (eighth) edition of the TNM classification for lung cancer. J Thorac Oncol. 2016;11(1):39-51.
11. McCaughan BC, Martini N, Bains MS. Bronchial carcinoids. Review of 124 cases. J Thorac Cardiovasc Surg. 1985;89(1):8-17.
12. Hurt R, Bates M. Carcinoid tumours of the bronchus: a 33-year experience. Thorax. 1984;39(8):617-623.
13. Ettinger DS, Wood DE, Akerley W, et al. Non-small cell lung cancer, version 1.2015. J Natl Compr Cancer Netw. 2014;12(12):1738-1761.
14. Ferguson MK, Landreneau RJ, Hazelrigg SR, et al. Long-term outcome after resection for bronchial carcinoid tumors. Eur J Cardiothorac Surg. 2000;18(2):156-161.
15. Lucchi M, Melfi F, Ribechini A, et al. Sleeve and wedge parenchyma-sparing bronchial resections in low-grade neoplasms of the bronchial airway. J Thorac Cardiovasc Surg. 2007;134(2):373-377.
16. Yendamuri S, Gold D, Jayaprakash V, Dexter E, Nwogu C, Demmy T. Is sublobar resection sufficient for carcinoid tumors? Ann Thorac Surg. 2011;92(5):1774-1778; discussion 8-9.
17. Fox M, Van Berkel V, Bousamra M II, Sloan S, Martin RC II. Surgical management of pulmonary carcinoid tumors: sublobar resection versus lobectomy. Am J Surg. 2013;205(2):200-208.
18. Cardillo G, Sera F, Di Martino M, et al. Bronchial carcinoid tumors: nodal status and long-term survival after resection. Ann Thorac Surg. 2004;77(5):1781-1785.
19. Oberg K, Hellman P, Ferolla P, Papotti M; ESMO Guidelines Working Group. Neuroendocrine bronchial and thymic tumors: ESMO clinical practice guidelines for diagnosis, treatment and follow-up. Ann Oncol. 2012;23(suppl 7:vii120-3).
20. Filosso PL, Ferolla P, Guerrera F, et al. Multidisciplinary management of advanced lung neuroendocrine tumors. J Thorac Dis. 2015;7(Suppl 2):S163-171.
21. Kulke MH, Shah MH, Benson AB III, et al. Neuroendocrine tumors, version 1.2015. J Natl Compr Cancer Netw. 2015;13(1):78-108.
22. Yao JC, Fazio N, Singh S, et al. Everolimus for the treatment of advanced, non-functional neuroendocrine tumours of the lung or gastrointestinal tract (RADIANT-4): a randomised, placebo-controlled, phase 3 study. Lancet. 2016;387(10022):968-977.
23. Ramirez RA, Beyer DT, Chauhan A, Boudreaux JP, Wang YZ, Woltering EA. The role of capecitabine/temozolomide in metastatic neuroendocrine tumors. Oncologist. 2016;21(6):671-675.
24. Bajetta E, Rimassa L, Carnaghi C, et al. 5-Fluorouracil, dacarbazine, and epirubicin in the treatment of patients with neuroendocrine tumors. Cancer. 1998;83(2):372-378.
25. Masi G, Fornaro L, Cupini S, et al. Refractory neuroendocrine tumor-response to liposomal doxorubicin and capecitabine. Nat Rev Clin Oncol. 2009;6(11):670-674.
26. Lou F, Sarkaria I, Pietanza C, et al. Recurrence of pulmonary carcinoid tumors after resection: implications for postoperative surveillance. Ann Thorac Surg. 2013;96(4):1156-1162.
27. Beasley MB, Thunnissen FB, Brambilla E, et al. Pulmonary atypical carcinoid: predictors of survival in 106 cases. Human Pathol. 2000;31(10):1255-1265.
Preventing cardiovascular disease in older adults: One size does not fit all
When assessing and attempting to modify the risk of cardiovascular disease in older patients, physicians should consider incorporating the concept of frailty. The balance of risk and benefit may differ considerably for 2 patients of the same age if one is fit and the other is frail. Because the aging population is a diverse group, a one-size-fits-all approach to cardiovascular disease prevention and risk-factor management is not appropriate.
A GROWING, DIVERSE GROUP
The number of older adults with multiple cardiovascular risk factors is increasing as life expectancy improves. US residents who are age 65 today can expect to live to an average age of 84 (men) or 87 (women).1
However, the range of life expectancy for people reaching these advanced ages is wide, and chronologic age is no longer sufficient to determine a patient’s risk profile. Furthermore, the prevalence of cardiovascular disease rises with age, and age itself is the strongest predictor of cardiovascular risk.2
Current risk calculators have not been validated in people over age 80,2 making them inadequate for use in older patients. Age alone cannot identify who will benefit from preventive strategies, except in situations when a dominant disease such as metastatic cancer, end-stage renal disease, end-stage dementia, or end-stage heart failure is expected to lead to mortality within a year. Guidelines for treating common risk factors such as elevated cholesterol3 in the general population have generally not focused on adults over 75 or recognized their diversity in health status.4 In order to generate an individualized prescription for cardiovascular disease prevention for older adults, issues such as frailty, cognitive and functional status, disability, and comorbidity must be considered.
WHAT IS FRAILTY?
Clinicians have recognized frailty for decades, but to date there remains a debate on how to define it.
Clegg et al5 described frailty as “a state of increased vulnerability to poor resolution of homeostasis after a stressor event.” This definition is generally agreed upon, as frailty so defined predicts both poor health outcomes and death.
Indeed, in a prospective study of 5,317 men and women ranging in age from 65 to 101, those identified as frail at baseline were 6 times more likely to have died 3 years later (mortality rates 18% vs 3%), and the difference persisted at 7 years.6 After adjusting for comorbidities, those identified as frail were also more likely to fall, develop limitations in mobility or activities of daily living, or be hospitalized.
The two current leading theories of frailty were defined by Fried et al6 and by Rockwood and Mitnitski.7
Fried et al6 have operationalized frailty as a “physical phenotype,” defined as 3 or more of the following:
- Unintentional weight loss of 10 pounds in the past year
- Self-reported exhaustion
- Weakness as measured by grip strength
- Slow walking speed
- Decreased physical activity.6
Rockwood and Mitnitski7 define frailty as an accumulation of health-related deficits over time. They recommend including 30 to 40 possible deficits covering a variety of health domains, such as cognition, mood, function, and comorbidity. The deficits present are summed and divided by the total number of deficits assessed to generate a score between 0 and 1.8
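The deficit-accumulation index described above is simple arithmetic; a minimal sketch (the example counts are illustrative, not drawn from the source):

```python
def frailty_index(deficits_present: int, deficits_assessed: int) -> float:
    """Rockwood-style frailty index: deficits present / deficits assessed.

    Returns a value between 0 (no deficits) and 1 (all assessed deficits present).
    """
    if not 0 <= deficits_present <= deficits_assessed or deficits_assessed == 0:
        raise ValueError("deficits present must be between 0 and deficits assessed")
    return deficits_present / deficits_assessed

# Example: 12 of 40 assessed deficits present
print(frailty_index(12, 40))  # 0.3
```

Higher scores indicate greater frailty; published work often uses roughly 0.25 as a frailty threshold, although cutoffs vary between studies.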
The difficulty in defining frailty has led to varying estimates of its prevalence, ranging from 25% to 50% in adults over 65 who have cardiovascular disease.9
CAUSE AND CONSEQUENCE OF CARDIOVASCULAR DISEASE
Studies have highlighted the bidirectional connection between frailty and cardiovascular disease.10 Frailty may predict cardiovascular disease, while cardiovascular disease is associated with an increased risk of incident frailty.9,11
Frail adults with cardiovascular disease have a higher risk of poor outcomes, even after correcting for age, comorbidities, disability, and disease severity. For example, frailty is associated with a twofold higher mortality rate in individuals with cardiovascular disease.9
A prospective cohort study12 of 3,895 middle-aged men and women demonstrated that those with an elevated cardiovascular risk score were at increased risk of frailty over 10 years (odds ratio [OR] 1.35, 95% confidence interval [CI] 1.21–1.51) and incident cardiovascular events (OR 1.36, 95% CI 1.15–1.61). This suggests that modification of cardiovascular risk factors earlier in life may reduce the risk of subsequently becoming frail.
Biologic mechanisms that may explain the connection between frailty and cardiovascular disease include derangements in inflammatory, hematologic, and endocrine pathways. People who are found to be clinically frail are more likely to have insulin resistance and elevated biomarkers such as C-reactive protein, D-dimer, and factor VIII.13 The inflammatory cytokine interleukin 6 is suggested as a common link between inflammation and thrombosis, perhaps contributing to the connection between cardiovascular disease and frailty. Many of these biomarkers have been linked to the pathophysiologic changes of aging, so-called “inflamm-aging” or immunosenescence, including sarcopenia, osteoporosis, and cardiovascular disease.14
ASSESSING FRAILTY IN THE CLINIC
For adults over age 70, frailty assessment is an important first step in managing cardiovascular disease risk.15 Frailty status will better identify those at risk of adverse outcomes in the short term and those who are most likely to benefit from long-term cardiovascular preventive strategies. Additionally, incorporating frailty assessment into traditional risk factor evaluation may permit appropriate intervention and prevention of a potentially modifiable risk factor.
Gait speed is a quick, easy, inexpensive, and sensitive way to assess frailty status, with excellent inter-rater and test-retest reliability, even in those with cognitive impairment.16 Slow gait speed predicts limitations in mobility, limitations in activities of daily living, and death.8,17
In a prospective study18 of 1,567 men and women (mean age 74), slow gait speed was the strongest predictor of subsequent cardiovascular events.
Gait speed is usually measured over a distance of 4 meters (13.1 feet),17 and the patient is asked to walk comfortably in an unobstructed, marked area. An assistive walking device can be used if needed. If possible, this is repeated once after a brief recovery period, and the average is recorded.
The FRAIL scale19,20 is a simple, validated questionnaire that combines the Fried and Rockwood concepts of frailty and can be given over the phone or to patients in a waiting room. One point is given for each of the following, and people who have 3 or more are considered frail:
- Fatigue
- Resistance (inability to climb 1 flight of stairs)
- Ambulation (inability to walk 1 block)
- Illnesses (having more than 5)
- Loss of more than 5% of body weight.
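The five items above map onto a simple 0-to-5 score; a minimal sketch of the scoring rule (the parameter names are mine, and this is not a validated instrument):

```python
def frail_score(fatigue: bool, cannot_climb_stairs: bool, cannot_walk_block: bool,
                illness_count: int, weight_loss_pct: float) -> tuple[int, bool]:
    """FRAIL scale: one point per positive item; a score of 3 or more suggests frailty."""
    points = sum([
        fatigue,
        cannot_climb_stairs,        # Resistance: cannot climb 1 flight of stairs
        cannot_walk_block,          # Ambulation: cannot walk 1 block
        illness_count > 5,          # Illnesses: more than 5
        weight_loss_pct > 5.0,      # Loss of weight: more than 5% of body weight
    ])
    return points, points >= 3

# Fatigued, cannot climb stairs, 6 illnesses, 2% weight loss
score, is_frail = frail_score(True, True, False, 6, 2.0)
print(score, is_frail)  # 3 True
```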
Other measures of physical function such as grip strength (using a dynamometer), the Timed Up and Go test (assessing the ability to get up from a chair and walk a short distance), and Short Physical Performance Battery (assessing balance, chair stands, and walking speed) can be used to screen for frailty, but are more time-intensive than gait speed alone, and so are not always practical to use in a busy clinic.21
MANAGEMENT OF RISK FACTORS
Management of cardiovascular risk factors is best individualized as outlined below.
LOWERING HIGH BLOOD PRESSURE
The incidence of ischemic heart disease and stroke increases with age across all levels of elevated systolic and diastolic blood pressure.22 Hypertension is also associated with increased risk of cognitive decline. However, a J-shaped relationship has been observed in older adults, with increased cardiovascular events for both low and elevated blood pressure, although the clinical relevance remains controversial.23
Odden et al24 performed an observational study and found that high blood pressure was associated with an increased mortality rate in older adults with normal gait speed, while in those with slow gait speed, high blood pressure neither harmed nor helped. Those who could not walk 6 meters appeared to benefit from higher blood pressure.
HYVET (the Hypertension in the Very Elderly Trial),25 a randomized controlled trial in 3,845 community-dwelling people age 80 or older with sustained systolic blood pressure higher than 160 mm Hg, found a significant reduction in rates of stroke and all-cause mortality (relative risk [RR] 0.76, P = .007) in the treatment arm using indapamide with perindopril if necessary to reach a target blood pressure of 150/80 mm Hg.
Frailty was not assessed during the trial; however, in a reanalysis, the results did not change in those identified as frail using a Rockwood frailty index (a count of health-related deficits accumulated over the lifespan).26
SPRINT (the Systolic Blood Pressure Intervention Trial)27 randomized participants age 50 and older with systolic blood pressure of 130 to 180 mm Hg and at increased risk of cardiovascular disease to intensive treatment (goal systolic blood pressure ≤ 120 mm Hg) or standard treatment (goal systolic blood pressure ≤ 140 mm Hg). In a prespecified subgroup of 2,636 participants over age 75 (mean age 80), hazard ratios and 95% confidence intervals for adverse outcomes with intensive treatment were:
- Major cardiovascular events: HR 0.66, 95% CI 0.51–0.85
- Death: HR 0.67, 95% CI 0.49–0.91.
Over 3 years of treatment this translated into a number needed to treat of 27 to prevent 1 cardiovascular event and 41 to prevent 1 death.
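As a reminder of the arithmetic behind these figures, the number needed to treat is the reciprocal of the absolute risk reduction over the treatment period; a minimal sketch with hypothetical event rates (the underlying SPRINT rates are not given here, so the inputs are illustrative only):

```python
def number_needed_to_treat(risk_control: float, risk_treated: float) -> int:
    """NNT = 1 / absolute risk reduction, rounded to the nearest whole patient."""
    arr = risk_control - risk_treated
    if arr <= 0:
        raise ValueError("no absolute risk reduction")
    return round(1 / arr)

# Hypothetical 3-year event rates chosen so the absolute risk
# reduction is about 3.7 percentage points (1/27)
print(number_needed_to_treat(0.087, 0.050))  # 27
```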
Within this subgroup, the benefit was similar regardless of level of frailty (measured both by a Rockwood frailty index and by gait speed).
However, the incidence of serious adverse treatment effects such as hypotension, orthostasis, electrolyte abnormalities, and acute kidney injury was higher with intensive treatment in the frail group. Although the difference was not statistically significant, it is cause for caution. Further, the exclusion criteria (history of diabetes, heart failure, dementia, stroke, weight loss of > 10%, nursing home residence) make it difficult to generalize the SPRINT findings to the general aging population.27
Tinetti et al28 performed an observational study using a nationally representative sample of older adults. They found that receiving any antihypertensive therapy was associated with an increased risk of falls with serious adverse outcomes. The risks of adverse events related to antihypertensive therapy increased with age.
Recommendations on hypertension
Managing hypertension in frail patients at risk of cardiovascular disease requires balancing the benefits vs the risks of treatment, such as polypharmacy, falls, and orthostatic hypotension.
The Eighth Joint National Committee suggests a blood pressure goal of less than 150/90 mm Hg for all adults over age 60, and less than 140/90 mm Hg for those with a history of cardiovascular disease or diabetes.29
The recently released American College of Cardiology/American Heart Association (ACC/AHA) hypertension guidelines define normal blood pressure as less than 120/80 mm Hg, with 120–129/<80 mm Hg considered elevated, 130–139/80–89 mm Hg stage 1 hypertension, and 140/90 mm Hg or higher stage 2 hypertension.30 An important caveat is the recommendation to measure blood pressure with careful, standardized technique, which is often not feasible in busy clinics. These guidelines are intended to apply to older adults as well, with a note that those with multiple morbidities and limited life expectancy will benefit from a shared decision that incorporates patient preferences and clinical judgment. Little guidance is given on how to incorporate frailty, although the guidelines note that older adults who reside in assisted living facilities and nursing homes have not been represented in randomized controlled trials.30
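As an illustration, these category thresholds reduce to a few comparisons. This is a simplified sketch assuming the usual convention that when systolic and diastolic readings fall in different categories, the higher category applies; it says nothing about measurement technique, which the guidelines emphasize:

```python
def classify_bp(systolic, diastolic):
    """Classify one blood pressure reading (mm Hg) into the
    2017 ACC/AHA categories; the higher of the two component
    categories wins."""
    if systolic >= 140 or diastolic >= 90:
        return "stage 2 hypertension"
    if systolic >= 130 or diastolic >= 80:
        return "stage 1 hypertension"
    if systolic >= 120:
        return "elevated"
    return "normal"

print(classify_bp(124, 78))  # → elevated
print(classify_bp(136, 74))  # → stage 1 hypertension
```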
American Diabetes Association guidelines on hypertension in patients with diabetes recommend considering functional status, frailty, and life expectancy to decide on a blood pressure goal of either 140/90 mm Hg (if fit) or 150/90 mm Hg (if frail). They do not specify how to diagnose frailty.31
Canadian guidelines say that in those with advanced frailty (ie, entirely dependent for personal care and activities of daily living) and short life expectancy (months), it is reasonable to liberalize the systolic blood pressure goal to 160 to 190 mm Hg.32
Our recommendations. In both frail and nonfrail individuals without a limited life expectancy, it is reasonable to aim for a blood pressure below 140/90 mm Hg. For those at increased risk of cardiovascular disease and able to tolerate treatment, careful lowering to 130/80 mm Hg may be considered, with close attention to side effects.
Treatment should start with the lowest possible dose, be titrated slowly, and may need to be tailored to standing blood pressure to avoid orthostatic hypotension.
Home blood pressure measurements may be beneficial in monitoring treatment.
MANAGING LIPIDS
For those over age 75, data on efficacy of statins are mixed due to the small number of older adults enrolled in randomized controlled trials of these drugs. To our knowledge, no statin trial has examined the role of frailty.
The PROSPER trial (Prospective Study of Pravastatin in the Elderly at Risk)33 randomized 5,804 patients ages 70 to 82 to receive either pravastatin or placebo. Overall, the incidence of a composite end point of major cardiovascular events was 15% lower with active treatment (P = .014). However, the mean age was 75, which does little to address the paucity of evidence for those over age 75; follow-up time was only 3 years, and subgroup analysis did not show benefit in those who did not have a history of cardiovascular disease or in women.
The JUPITER trial (Justification for the Use of Statins in Prevention: an Intervention Trial Evaluating Rosuvastatin)34 randomized 5,695 people over age 70 without cardiovascular disease to receive either rosuvastatin or placebo. Exploratory analysis showed a significant 39% reduction in all-cause mortality and major cardiovascular events with active treatment (HR 0.61, 95% CI 0.46–0.82). Over 5 years of treatment, this translates to a number needed to treat of 19 to prevent 1 major cardiovascular event and 29 to prevent 1 cardiovascular death.
The benefit of statins for primary prevention in these trials began to be apparent 2 years after treatment was initiated.
The Women’s Health Initiative,35 an observational study, found no difference in incident frailty in women older than 65 who took statins for 3 years compared with those who did not.
Odden et al36 found that although statin use is generally well tolerated, the risks of statin-associated functional and cognitive decline may outweigh the benefits in those older than 75. The ongoing Statin in Reducing Events in the Elderly (STAREE) trial may shed light on this issue.
Recommendations on lipid management
The ACC/AHA,3 in their 2013 guidelines, do not recommend routine statin treatment for primary prevention in those over age 75, given a lack of evidence from randomized controlled trials. For secondary prevention, ie, for those who have a history of atherosclerotic cardiovascular disease, they recommend moderate-intensity statin therapy in this age group.
Our recommendations. For patients over age 75 without cardiovascular disease or frailty and with a life expectancy of at least 2 years, consider offering a statin for primary prevention of cardiovascular disease as part of shared decision-making.
In those with known cardiovascular disease, it is reasonable to continue statin therapy except in situations where the life expectancy is less than 6 months.37
Although moderate- or high-intensity statin therapy is recommended in current guidelines, for many older adults it is prudent to consider the lowest tolerable dose to improve adherence, with close monitoring for side effects such as myalgia and weakness.
TYPE 2 DIABETES
Evidence suggests that tight glycemic control in type 2 diabetes is harmful for adults ages 55 to 79 and does not provide clear cardiovascular benefit, and that controlling hemoglobin A1c to less than 6.0% is associated with increased mortality in older adults.38
The American Diabetes Association31 and the American Geriatrics Society39 recommend hemoglobin A1c goals of:
- 7.5% or less for older adults with 3 or more coexisting chronic illnesses requiring medical intervention (eg, arthritis, hypertension, and heart failure) and with intact cognition and function
- 8.0% or less for those identified as frail, or with multiple chronic illnesses or moderate cognitive or functional impairment
- 8.5% to 9.0% or less for those with very complex comorbidities, in long-term care, or with end-stage chronic illnesses (eg, end-stage heart failure), or with moderate to severe cognitive or functional limitation.
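The tiers above can be sketched as a simple selector. The boolean inputs here are coarse simplifications of the guideline categories (which involve clinical judgment about comorbidity, cognition, and function), so this is illustrative only:

```python
def a1c_goal(frail=False, very_complex=False):
    """Return an upper hemoglobin A1c goal (%) matching the
    three tiers above.

    frail: frailty, multiple chronic illnesses, or moderate
           cognitive/functional impairment
    very_complex: end-stage illness, long-term care, or moderate
                  to severe cognitive/functional limitation
    """
    if very_complex:
        return 8.5  # the guidelines allow up to 9.0% in this tier
    if frail:
        return 8.0
    return 7.5

print(a1c_goal(frail=True))  # → 8.0
```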
These guidelines do not endorse a specific frailty assessment, although the references allude to the Fried phenotype criteria, which include gait speed. An update from the American Diabetes Association provides a patient-centered approach to tailoring treatment regimens, taking into consideration the risk of hypoglycemia for each class of drugs, side effects, and cost.40
Our recommendations. Hyperglycemia remains a risk factor for cardiovascular disease in older adults and increases the risk of many geriatric conditions, including delirium, dementia, frailty, and functional decline. Hemoglobin A1c goals should be individualized to avoid both hyperglycemia and hypoglycemia.
Sulfonylureas and insulins should be used with caution, as they have the highest associated incidence of hypoglycemia of the diabetes medications.
ASPIRIN
For secondary prevention in older adults with a history of cardiovascular disease, pooled trials have consistently demonstrated a long-term benefit for aspirin use that exceeds bleeding risks, although age and frailty status were not considered.41
Aspirin for primary prevention?
The evidence for aspirin for primary prevention in older adults is mixed. Meta-analysis suggests a modest decrease in risk of nonfatal myocardial infarction but no appreciable effects on nonfatal stroke and cardiovascular death.42
The Japanese Primary Prevention Project,43 a randomized trial of low-dose aspirin for primary prevention of cardiovascular disease in adults ages 60 to 85, showed no reduction in major cardiovascular events. However, the event rate was lower than expected, the crossover rates were high, the incidence of hemorrhagic strokes was higher than in Western studies, and the trial may have been underpowered to detect the benefits of aspirin.
The US Preventive Services Task Force44 in 2016 noted that among individuals with a 10-year cardiovascular disease risk of 10% or higher based on the ACC/AHA pooled cohort equation,3 the greatest benefit of aspirin was in those ages 50 to 59. In this age group, 225 nonfatal myocardial infarctions and 84 nonfatal strokes were prevented per 10,000 men treated, with a net gain of 333 life-years. Similar findings were noted in women.
However, in those ages 60 to 69, the risks of harm begin to rise and the benefit of starting daily aspirin necessitates individualized clinical decision-making, with particular attention to bleeding risk and life expectancy.44
In those age 70 and older, data on benefit and harm are mixed. The bleeding risk of aspirin increases with age, predominantly due to gastrointestinal bleeding.44
The ongoing Aspirin in Reducing Events in the Elderly (ASPREE) trial will add to the evidence.
Aspirin recommendations for primary prevention
The American Geriatrics Society Beers Criteria do not routinely recommend aspirin use for primary prevention in those over age 80, even in those with diabetes.45
Our recommendations. In adults over age 75 who are not frail but are identified as being at moderate to high risk of cardiovascular disease using the ACC/AHA calculator or another risk estimator, and who do not have a limited life expectancy, we believe it is reasonable to consider low-dose aspirin (75–100 mg daily) for primary prevention. However, careful consideration is needed, particularly for those at risk of major bleeding. One approach to consider is adding a proton pump inhibitor to aspirin, though this requires further study.46
For those who have been on aspirin for primary prevention and are now older than age 80 without an adverse bleeding event, it is reasonable to stop aspirin, although risks and benefits of discontinuing aspirin should be discussed with the patient as part of shared decision-making.
In frail individuals the risks of aspirin therapy likely outweigh any benefit for primary prevention, and aspirin cannot be routinely recommended.
EXERCISE AND WEIGHT MANAGEMENT
A low body mass index is often associated with frailty, and weight loss may be a marker of underlying illness, which increases the risk of poor outcomes. However, those with an elevated body mass index and increased adiposity are in fact more likely to be frail (using the Fried physical phenotype definition) than those with a low body mass index,47 due in part to unrecognized sarcopenic obesity, ie, replacement of lean muscle with fat.
Physical activity is currently the only intervention known to improve frailty.5
Physical activity and a balanced diet are just as important in older adults, including those with reduced functional ability and multiple comorbid conditions, as in younger individuals.
A trial in frail long-term care residents (mean age 87) found that high-intensity resistance training improved muscle strength and mobility.48 The addition of a nutritional supplement with or without exercise did not affect frailty status. In community-dwelling older adults, physical activity has also been shown to improve sarcopenia and reduce falls and hip fractures.49
Progressive resistance training has been shown to improve strength and gait speed even in those with dementia.50
Tai chi has shown promising results in reducing falls and improving balance and function in both community-dwelling older adults and those in assisted living.51,52
Exercise recommendations
The US Department of Health and Human Services53 issued physical activity guidelines in 2008 with specific recommendations for older adults that include flexibility and balance training, which have been shown to reduce falls, in addition to aerobic activities and strength training.
Our recommendations. For all older adults, particularly those who are frail, we recommend a regimen of general daily activity, balance training such as tai chi, moderate-intensity aerobics such as cycling, resistance training such as using light weights, and stretching. Sessions lasting as little as 10 minutes are beneficial.
Gait speed can be monitored in the clinic to assess improvement in function over time.
SMOKING CESSATION
Although rates of smoking are decreasing, smoking remains one of the most important cardiovascular risk factors. Smoking has been associated with increased risk of frailty and significantly increased risk of death compared with never smoking.54 Smoking cessation is beneficial even for those who quit later in life.
The US Department of Health and Human Services in 2008 released an update on tobacco use and dependence,55 with specific attention to the benefit of smoking cessation for older adults.
All counseling interventions have been shown to be effective in older adults, as has nicotine replacement. Newer medications such as varenicline should be used with caution, as the risk of side effects is higher in older patients.
NUTRITION
Samieri et al,56 in an observational study of 10,670 nurses, found that those adhering to Mediterranean-style diets during midlife had 46% increased odds of healthy aging.
The PREDIMED study (Primary Prevention of Cardiovascular Disease With a Mediterranean Diet)57 in adults ages 55 to 80 showed that a Mediterranean diet supplemented with olive oil or nuts reduced the incidence of major cardiovascular events.
Leon-Munoz et al,58 in a prospective study of 1,815 community-dwelling older adults in Spain followed for 3.5 years, demonstrated that adherence to a Mediterranean diet was associated with a lower incidence of frailty (P = .002) and a lower risk of slow gait speed (OR 0.53, 95% CI 0.35–0.79). Interestingly, this study also found a protective association between fish and fruit consumption and frailty.
Our recommendations. A well-balanced, diverse diet rich in whole grains, fruits, vegetables, nuts, fish, and healthy fats (polyunsaturated fatty acids), with a moderate amount of lean meats, is recommended to prevent heart disease. However, poor dental health may limit the ability of older individuals to adhere to such diets, and modifications may be needed. Additionally, age-related changes in taste and smell may contribute to poor nutrition and unintended weight loss.59 Involving a nutritionist and social worker in the patient care team should be considered, especially as poor nutrition may be a sign of cognitive impairment, functional decline, and frailty.
SPECIAL CONSIDERATIONS
Special considerations when managing cardiovascular risk in the older adult include polypharmacy, multimorbidity, quality of life, and the patient’s personal preferences.
Polypharmacy, defined as taking more than 5 medications, is associated with an increased risk of adverse drug events, falls, fractures, decreased adherence, and the "prescribing cascade" (prescribing more drugs to treat the side effects of another drug, eg, adding antihypertensive medications to treat hypertension induced by nonsteroidal anti-inflammatory drugs).60 This is particularly important when considering adding medications. If a statin will be the 20th pill, it may be less beneficial and more likely to lead to additional adverse effects than if it is the fifth medication.
Patient preferences are critically important, particularly when adding or removing medications. Interventions should include a detailed medication review for appropriate prescribing and deprescribing, referral to a pharmacist, and engaging the patient’s support system.
Multimorbidity. Many older individuals have multiple chronic illnesses. The interaction of multiple conditions must be considered in creating a comprehensive plan, including prognosis, patient preference, available evidence, treatment interactions, and risks and benefits.
Quality of life. Outlook on life and choices made regarding prolongation vs quality of life may be different for the older patient than the younger patient.
Personal preferences. Although interventions such as high-intensity statins for a robust 85-year-old may be appropriate, the individual can choose to forgo any treatment. It is important to explore the patient’s goals of care and advance directives as part of shared decision-making when building a patient-centered prevention plan.61
ONE SIZE DOES NOT FIT ALL
The heterogeneity of aging rules out a one-size-fits-all recommendation for cardiovascular disease prevention and management of cardiovascular risk factors in older adults.
There is significant overlap between cardiovascular risk status and frailty.
Incorporating frailty into the creation of a cardiovascular risk prescription can aid in the development of an individualized care plan for the prevention of cardiovascular disease in the aging population.
- Social Security Administration (SSA). Calculators: life expectancy. www.ssa.gov/planners/lifeexpectancy.html. Accessed December 8, 2017.
- Benjamin EJ, Blaha MJ, Chiuve SE, et al. Heart disease and stroke statistics—2017 update: a report from the American Heart Association. Circulation 2017; 135:e146–e603.
- Stone NJ, Robinson JG, Lichtenstein AH, et al; American College of Cardiology/American Heart Association Task Force on Practice Guidelines. 2013 ACC/AHA guideline on the treatment of blood cholesterol to reduce atherosclerotic cardiovascular risk in adults: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines. J Am Coll Cardiol 2014; 63:2889–2934.
- Rich MW, Chyun DA, Skolnick AH, et al; American Heart Association Older Populations Committee of the Council on Clinical Cardiology, Council on Cardiovascular and Stroke Nursing, Council on Cardiovascular Surgery and Anesthesia, and Stroke Council; American College of Cardiology; and American Geriatrics Society. Knowledge gaps in cardiovascular care of the older adult population: a scientific statement from the American Heart Association, American College of Cardiology, and American Geriatrics Society. Circulation 2016; 133:2103–2122.
- Clegg A, Young J, Iliffe S, Rikkert MO, Rockwood K. Frailty in elderly people. Lancet 2013; 381:752–762.
- Fried LP, Tangen CM, Walston J, et al; Cardiovascular Health Study Collaborative Research Group. Frailty in older adults: evidence for a phenotype. J Gerontol A Biol Sci Med Sci 2001; 56:M146–M156.
- Rockwood K, Mitnitski A. Frailty in relation to the accumulation of deficits. J Gerontol A Biol Sci Med Sci 2007; 62:722–727.
- Studenski S, Perera S, Patel K, et al. Gait speed and survival in older adults. JAMA 2011; 305:50–58.
- Afilalo J, Alexander KP, Mack MJ, et al. Frailty assessment in the cardiovascular care of older adults. J Am Coll Cardiol 2014; 63:747–762.
- Afilalo J, Karunananthan S, Eisenberg MJ, Alexander KP, Bergman H. Role of frailty in patients with cardiovascular disease. Am J Cardiol 2009; 103:1616–1621.
- Woods NF, LaCroix AZ, Gray SL, et al; Women’s Health Initiative. Frailty: emergence and consequences in women aged 65 and older in the Women's Health Initiative Observational Study. J Am Geriatr Soc 2005; 53:1321–1330.
- Bouillon K, Batty GD, Hamer M, et al. Cardiovascular disease risk scores in identifying future frailty: the Whitehall II prospective cohort study. Heart 2013; 99:737–742.
- Walston J, McBurnie MA, Newman A, et al; Cardiovascular Health Study. Frailty and activation of the inflammation and coagulation systems with and without clinical comorbidities: results from the Cardiovascular Health Study. Arch Intern Med 2002; 162:2333–2341.
- De Martinis M, Franceschi C, Monti D, Ginaldi L. Inflammation markers predicting frailty and mortality in the elderly. Exp Mol Pathol 2006; 80:219–227.
- Morley JE. Frailty fantasia. J Am Med Dir Assoc 2017; 18:813–815.
- Munoz-Mendoza CL, Cabanero-Martinez MJ, Millan-Calenti JC, Cabrero-Garcia J, Lopez-Sanchez R, Maseda-Rodriguez A. Reliability of 4-m and 6-m walking speed tests in elderly people with cognitive impairment. Arch Gerontol Geriatr 2011; 52:e67–e70.
- Abellan van Kan G, Rolland Y, Andrieu S, et al. Gait speed at usual pace as a predictor of adverse outcomes in community-dwelling older people an International Academy on Nutrition and Aging (IANA) Task Force. J Nutr Health Aging 2009; 13:881–889.
- Sergi G, Veronese N, Fontana L, et al. Pre-frailty and risk of cardiovascular disease in elderly men and women: the Pro.V.A. study. J Am Coll Cardiol 2015; 65:976–983.
- Abellan van Kan G, Rolland Y, Bergman H, Morley JE, Kritchevsky SB, Vellas B. The I.A.N.A Task Force on frailty assessment of older people in clinical practice. J Nutr Health Aging 2008; 12:29–37.
- Morley JE, Malmstrom TK, Miller DK. A simple frailty questionnaire (FRAIL) predicts outcomes in middle-aged African Americans. J Nutr Health Aging 2012;16:601–608.
- Forman DE, Arena R, Boxer R, et al; American Heart Association Council on Clinical Cardiology; Council on Cardiovascular and Stroke Nursing; Council on Quality of Care and Outcomes Research; and Stroke Council. Prioritizing functional capacity as a principal end point for therapies oriented to older adults with cardiovascular disease: a scientific statement for healthcare professionals from the American Heart Association. Circulation 2017; 135:e894–e918.
- Lewington S, Clarke R, Qizilbash N, Peto R, Collins R; Prospective Studies Collaboration. Age-specific relevance of usual blood pressure to vascular mortality: a meta-analysis of individual data for one million adults in 61 prospective studies. Lancet 2002; 360:1903–1913.
- Mancia G, Grassi G. Aggressive blood pressure lowering is dangerous: the J-curve: pro side of the argument. Hypertension 2014; 63:29–36.
- Odden MC, Peralta CA, Haan MN, Covinsky KE. Rethinking the association of high blood pressure with mortality in elderly adults: the impact of frailty. Arch Intern Med 2012; 172:1162–1168.
- Beckett NS, Peters R, Fletcher AE, et al; HYVET Study Group. Treatment of hypertension in patients 80 years of age or older. N Engl J Med 2008; 358:1887–1898.
- Warwick J, Falaschetti E, Rockwood K, et al. No evidence that frailty modifies the positive impact of antihypertensive treatment in very elderly people: an investigation of the impact of frailty upon treatment effect in the HYpertension in the Very Elderly Trial (HYVET) study, a double-blind, placebo-controlled study of antihypertensives in people with hypertension aged 80 and over. BMC Med 2015; 13:78.
- Williamson JD, Supiano MA, Applegate WB, et al; SPRINT Research Group. Intensive vs standard blood pressure control and cardiovascular disease outcomes in adults aged ≥ 75 years: a randomized clinical trial. JAMA 2016; 315:2673–2682.
- Tinetti ME, Han L, Lee DS, et al. Antihypertensive medications and serious fall injuries in a nationally representative sample of older adults. JAMA Intern Med 2014; 174:588–595.
- James PA, Oparil S, Carter BL, et al. 2014 evidence-based guideline for the management of high blood pressure in adults: report from the panel members appointed to the Eighth Joint National Committee (JNC 8). JAMA 2014; 311:507–520.
- Whelton PK, Carey RM, Aronow WS, et al. 2017 ACC/AHA/AAPA/ABC/ACPM/AGS/APhA/ASH/ASPC/NMA/PCNA Guideline for the Prevention, Detection, Evaluation, and Management of High Blood Pressure in Adults: A Report of the American College of Cardiology/American Heart Association Task Force on Clinical Practice Guidelines. Hypertension 2017 Nov 13. Epub ahead of print.
- American Diabetes Association. 11. Older adults. Diabetes Care 2017; 40(suppl 1):S99–S104.
- Mallery LH, Allen M, Fleming I, et al. Promoting higher blood pressure targets for frail older adults: a consensus guideline from Canada. Cleve Clin J Med 2014; 81:427–437.
- Shepherd J, Blauw GJ, Murphy MB, et al; PROSPER study group. PROspective Study of Pravastatin in the Elderly at Risk. Pravastatin in elderly individuals at risk of vascular disease (PROSPER): a randomised controlled trial. Lancet 2002; 360:1623–1630.
- Glynn RJ, Koenig W, Nordestgaard BG, Shepherd J, Ridker PM. Rosuvastatin for primary prevention in older persons with elevated C-reactive protein and low to average low-density lipoprotein cholesterol levels: exploratory analysis of a randomized trial. Ann Intern Med 2010; 152:488–496, W174.
- LaCroix AZ, Gray SL, Aragaki A, et al; Women’s Health Initiative. Statin use and incident frailty in women aged 65 years or older: prospective findings from the Women’s Health Initiative Observational Study. J Gerontol A Biol Sci Med Sci 2008; 63:369–375.
- Odden MC, Pletcher MJ, Coxson PG, et al. Cost-effectiveness and population impact of statins for primary prevention in adults aged 75 years or older in the United States. Ann Intern Med 2015; 162:533–541.
- Kutner JS, Blatchford PJ, Taylor DH Jr, et al. Safety and benefit of discontinuing statin therapy in the setting of advanced, life-limiting illness: a randomized clinical trial. JAMA Intern Med 2015; 175:691–700.
- Huang ES, Liu JY, Moffet HH, John PM, Karter AJ. Glycemic control, complications, and death in older diabetic patients: the diabetes and aging study. Diabetes Care 2011; 34:1329–1336.
- Kirkman MS, Briscoe VJ, Clark N, et al; Consensus Development Conference on Diabetes and Older Adults. Diabetes in older adults: a consensus report. J Am Geriatr Soc 2012; 60:2342–2356.
- Inzucchi SE, Bergenstal RM, Buse JB, et al. Management of hyperglycemia in type 2 diabetes, 2015: a patient-centered approach: update to a position statement of the American Diabetes Association and the European Association for the Study of Diabetes. Diabetes Care 2015; 38:140–149.
- Antithrombotic Trialists’ Collaboration. Collaborative meta-analysis of randomised trials of antiplatelet therapy for prevention of death, myocardial infarction, and stroke in high risk patients. BMJ 2002; 324:71–86.
- Antithrombotic Trialists’ (ATT) Collaboration; Baigent C, Blackwell L, Collins R, et al. Aspirin in the primary and secondary prevention of vascular disease: collaborative meta-analysis of individual participant data from randomised trials. Lancet 2009; 373:1849–1860.
- Ikeda Y, Shimada K, Teramoto T, et al. Low-dose aspirin for primary prevention of cardiovascular events in Japanese patients 60 years or older with atherosclerotic risk factors: a randomized clinical trial. JAMA 2014; 312:2510–2520.
- Bibbins-Domingo K; US Preventive Services Task Force. Aspirin use for the primary prevention of cardiovascular disease and colorectal cancer: US Preventive Services Task Force Recommendation Statement. Ann Intern Med 2016; 164:836–845.
- American Geriatrics Society 2012 Beers Criteria Update Expert Panel. American Geriatrics Society updated Beers Criteria for potentially inappropriate medication use in older adults. J Am Geriatr Soc 2012; 60:616–631.
- Li L, Geraghty OC, Mehta Z, Rothwell PM. Age-specific risks, severity, time course, and outcome of bleeding on long-term antiplatelet treatment after vascular events: a population-based cohort study. Lancet 2017; 390:490–499.
- Barzilay JI, Blaum C, Moore T, et al. Insulin resistance and inflammation as precursors of frailty: the Cardiovascular Health Study. Arch Intern Med 2007; 167:635–641.
- Fiatarone MA, O’Neill EF, Ryan ND, et al. Exercise training and nutritional supplementation for physical frailty in very elderly people. N Engl J Med 1994; 330:1769–1775.
- Uusi-Rasi K, Patil R, Karinkanta S, et al. Exercise and vitamin D in fall prevention among older women: a randomized clinical trial. JAMA Intern Med 2015; 175:703–711.
- Hauer K, Schwenk M, Zieschang T, Essig M, Becker C, Oster P. Physical training improves motor performance in people with dementia: a randomized controlled trial. J Am Geriatr Soc 2012; 60:8–15.
- Li F, Harmer P, Fitzgerald K. Implementing an evidence-based fall prevention intervention in community senior centers. Am J Public Health 2016; 106:2026–2031.
- Manor B, Lough M, Gagnon MM, Cupples A, Wayne PM, Lipsitz LA. Functional benefits of tai chi training in senior housing facilities. J Am Geriatr Soc 2014; 62:1484–1489.
- Physical Activity Guidelines Advisory Committee report, 2008. To the Secretary of Health and Human Services. Part A: executive summary. Nutr Rev 2009; 67:114–120.
- Hubbard RE, Searle SD, Mitnitski A, Rockwood K. Effect of smoking on the accumulation of deficits, frailty and survival in older adults: a secondary analysis from the Canadian Study of Health and Aging. J Nutr Health Aging 2009; 13:468–472.
- Clinical Practice Guideline Treating Tobacco Use and Dependence 2008 Update Panel, Liaisons, and Staff. A clinical practice guideline for treating tobacco use and dependence: 2008 update. A US Public Health Service report. Am J Prev Med 2008; 35:158–176.
- Samieri C, Sun Q, Townsend MK, et al. The association between dietary patterns at midlife and health in aging: an observational study. Ann Intern Med 2013; 159:584–591.
- Estruch R, Ros E, Martinez-Gonzalez MA. Mediterranean diet for primary prevention of cardiovascular disease. N Engl J Med 2013; 369:676–677.
- Leon-Munoz LM, Guallar-Castillon P, Lopez-Garcia E, Rodriguez-Artalejo F. Mediterranean diet and risk of frailty in community-dwelling older adults. J Am Med Dir Assoc 2014; 15:899–903.
- Doty RL, Shaman P, Applebaum SL, Giberson R, Siksorski L, Rosenberg L. Smell identification ability: changes with age. Science 1984; 226:1441–1443.
- Merel SE, Paauw DS. Common drug side effects and drug-drug interactions in elderly adults in primary care. J Am Geriatr Soc 2017 Mar 21. Epub ahead of print.
- Epstein RM, Peters E. Beyond information: exploring patients’ preferences. JAMA 2009; 302:195–197.
When assessing and attempting to modify the risk of cardiovascular disease in older patients, physicians should consider incorporating the concept of frailty. The balance of risk and benefit may differ considerably for 2 patients of the same age if one is fit and the other is frail. Because the aging population is a diverse group, a one-size-fits-all approach to cardiovascular disease prevention and risk-factor management is not appropriate.
A GROWING, DIVERSE GROUP
The number of older adults with multiple cardiovascular risk factors is increasing as life expectancy improves. US residents who are age 65 today can expect to live to an average age of 84 (men) or 87 (women).1
However, the range of life expectancy for people reaching these advanced ages is wide, and chronologic age is no longer sufficient to determine a patient’s risk profile. Furthermore, the prevalence of cardiovascular disease rises with age, and age itself is the strongest predictor of cardiovascular risk.2
Current risk calculators have not been validated in people over age 80,2 making them inadequate for use in older patients. Age alone cannot identify who will benefit from preventive strategies, except in situations when a dominant disease such as metastatic cancer, end-stage renal disease, end-stage dementia, or end-stage heart failure is expected to lead to mortality within a year. Guidelines for treating common risk factors such as elevated cholesterol3 in the general population have generally not focused on adults over 75 or recognized their diversity in health status.4 In order to generate an individualized prescription for cardiovascular disease prevention for older adults, issues such as frailty, cognitive and functional status, disability, and comorbidity must be considered.
WHAT IS FRAILTY?
Clinicians have recognized frailty for decades, but how to define it remains a matter of debate.
Clegg et al5 described frailty as “a state of increased vulnerability to poor resolution of homeostasis after a stressor event,”5 a definition generally agreed upon, as frailty predicts both poor health outcomes and death.
Indeed, in a prospective study of 5,317 men and women ranging in age from 65 to 101, those identified as frail at baseline were 6 times more likely to have died 3 years later (mortality rates 18% vs 3%), and the difference persisted at 7 years.6 After adjusting for comorbidities, those identified as frail were also more likely to fall, develop limitations in mobility or activities of daily living, or be hospitalized.
The two current leading theories of frailty were defined by Fried et al6 and by Rockwood and Mitnitski.7
Fried et al6 have operationalized frailty as a “physical phenotype,” defined as 3 or more of the following:
- Unintentional weight loss of 10 pounds in the past year
- Self-reported exhaustion
- Weakness as measured by grip strength
- Slow walking speed
- Decreased physical activity.6
Rockwood and Mitnitski7 define frailty as an accumulation of health-related deficits over time. They recommend including 30 to 40 possible deficits covering a variety of health domains, such as cognition, mood, function, and comorbidity. The deficits present are summed and divided by the total number of deficits assessed to generate a score between 0 and 1.8
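The index arithmetic described above (deficits present divided by deficits assessed) can be sketched as follows; the 40-item panel and item names here are purely illustrative, not drawn from any published index:

```python
def frailty_index(deficits: dict[str, bool]) -> float:
    """Rockwood-style frailty index: deficits present / deficits assessed.

    `deficits` maps each assessed health-related deficit (cognition, mood,
    function, comorbidity, etc.) to whether it is present. The text suggests
    assessing 30 to 40 items; the keys used below are hypothetical.
    """
    if not deficits:
        raise ValueError("at least one deficit must be assessed")
    return sum(deficits.values()) / len(deficits)

# Hypothetical 40-item assessment in which 12 deficits are present:
score = frailty_index({f"item_{i}": i < 12 for i in range(40)})
print(round(score, 2))  # 0.3
```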
The difficulty in defining frailty has led to varying estimates of its prevalence, ranging from 25% to 50% in adults over 65 who have cardiovascular disease.9
CAUSE AND CONSEQUENCE OF CARDIOVASCULAR DISEASE
Studies have highlighted the bidirectional connection between frailty and cardiovascular disease.10 Frailty may predict cardiovascular disease, while cardiovascular disease is associated with an increased risk of incident frailty.9,11
Frail adults with cardiovascular disease have a higher risk of poor outcomes, even after correcting for age, comorbidities, disability, and disease severity. For example, frailty is associated with a twofold higher mortality rate in individuals with cardiovascular disease.9
A prospective cohort study12 of 3,895 middle-aged men and women demonstrated that those with an elevated cardiovascular risk score were at increased risk of frailty over 10 years (odds ratio [OR] 1.35, 95% confidence interval [CI] 1.21–1.51) and incident cardiovascular events (OR 1.36, 95% CI 1.15–1.61). This suggests that modification of cardiovascular risk factors earlier in life may reduce the risk of subsequently becoming frail.
Biologic mechanisms that may explain the connection between frailty and cardiovascular disease include derangements in inflammatory, hematologic, and endocrine pathways. People who are found to be clinically frail are more likely to have insulin resistance and elevated biomarkers such as C-reactive protein, D-dimer, and factor VIII.13 The inflammatory cytokine interleukin 6 is suggested as a common link between inflammation and thrombosis, perhaps contributing to the connection between cardiovascular disease and frailty. Many of these biomarkers have been linked to the pathophysiologic changes of aging, so-called “inflamm-aging” or immunosenescence, including sarcopenia, osteoporosis, and cardiovascular disease.14
ASSESSING FRAILTY IN THE CLINIC
For adults over age 70, frailty assessment is an important first step in managing cardiovascular disease risk.15 Frailty status will better identify those at risk of adverse outcomes in the short term and those who are most likely to benefit from long-term cardiovascular preventive strategies. Additionally, incorporating frailty assessment into traditional risk factor evaluation may permit appropriate intervention and prevention of a potentially modifiable risk factor.
Gait speed is a quick, easy, inexpensive, and sensitive way to assess frailty status, with excellent inter-rater and test-retest reliability, even in those with cognitive impairment.16 Slow gait speed predicts limitations in mobility, limitations in activities of daily living, and death.8,17
In a prospective study18 of 1,567 men and women, mean age 74, slow gait speed was the strongest predictor of subsequent cardiovascular events.18
Gait speed is usually measured over a distance of 4 meters (13.1 feet),17 and the patient is asked to walk comfortably in an unobstructed, marked area. An assistive walking device can be used if needed. If possible, this is repeated once after a brief recovery period, and the average is recorded.
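The measurement just described reduces to simple division and averaging; a minimal sketch, with the trial times chosen only for illustration:

```python
def gait_speed(distance_m: float, trial_times_s: list[float]) -> float:
    """Average gait speed in m/s over one or more timed walks.

    The course is usually 4 m; when possible the walk is repeated once
    after a brief rest and the speeds from the two trials are averaged.
    """
    speeds = [distance_m / t for t in trial_times_s]
    return sum(speeds) / len(speeds)

# Two 4-m walks completed in 5.0 s and 4.0 s (speeds 0.8 and 1.0 m/s):
print(gait_speed(4.0, [5.0, 4.0]))  # 0.9
```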
The FRAIL scale19,20 is a simple, validated questionnaire that combines the Fried and Rockwood concepts of frailty and can be given over the phone or to patients in a waiting room. One point is given for each of the following, and people who have 3 or more are considered frail:
- Fatigue
- Resistance (inability to climb 1 flight of stairs)
- Ambulation (inability to walk 1 block)
- Illnesses (having more than 5)
- Loss of more than 5% of body weight.
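The scoring rule above (one point per item, 3 or more = frail) can be expressed directly; the function signature and argument names are our own shorthand for the five items:

```python
def frail_scale(fatigue: bool, cannot_climb_stairs: bool,
                cannot_walk_block: bool, illnesses: int,
                weight_loss_pct: float) -> tuple[int, str]:
    """Score the FRAIL scale: one point per positive item; 3+ = frail."""
    points = sum([
        fatigue,                  # F: self-reported fatigue
        cannot_climb_stairs,      # R: resistance (1 flight of stairs)
        cannot_walk_block,        # A: ambulation (1 block)
        illnesses > 5,            # I: more than 5 illnesses
        weight_loss_pct > 5.0,    # L: >5% loss of body weight
    ])
    return points, "frail" if points >= 3 else "not frail"

print(frail_scale(True, True, False, illnesses=6, weight_loss_pct=2.0))
# (3, 'frail')
```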
Other measures of physical function such as grip strength (using a dynamometer), the Timed Up and Go test (assessing the ability to get up from a chair and walk a short distance), and Short Physical Performance Battery (assessing balance, chair stands, and walking speed) can be used to screen for frailty, but are more time-intensive than gait speed alone, and so are not always practical to use in a busy clinic.21
MANAGEMENT OF RISK FACTORS
Management of cardiovascular risk factors is best individualized as outlined below.
LOWERING HIGH BLOOD PRESSURE
The incidence of ischemic heart disease and stroke increases with age across all levels of elevated systolic and diastolic blood pressure.22 Hypertension is also associated with increased risk of cognitive decline. However, a J-shaped relationship has been observed in older adults, with increased cardiovascular events for both low and elevated blood pressure, although the clinical relevance remains controversial.23
Odden et al24 performed an observational study and found that high blood pressure was associated with an increased mortality rate in older adults with normal gait speed, while in those with slow gait speed, high blood pressure neither harmed nor helped. Those who could not walk 6 meters appeared to benefit from higher blood pressure.
HYVET (the Hypertension in the Very Elderly Trial),25 a randomized controlled trial in 3,845 community-dwelling people age 80 or older with sustained systolic blood pressure higher than 160 mm Hg, found a significant reduction in rates of stroke and all-cause mortality (relative risk [RR] 0.76, P = .007) in the treatment arm, which received indapamide, with perindopril added if necessary, to reach a target blood pressure of 150/80 mm Hg.
Frailty was not assessed during the trial; however, in a reanalysis, the results did not change in those identified as frail using a Rockwood frailty index (a count of health-related deficits accumulated over the lifespan).26
SPRINT (the Systolic Blood Pressure Intervention Trial)27 randomized participants age 50 and older with systolic blood pressure of 130 to 180 mm Hg and at increased risk of cardiovascular disease to intensive treatment (goal systolic blood pressure ≤ 120 mm Hg) or standard treatment (goal systolic blood pressure ≤ 140 mm Hg). In a prespecified subgroup of 2,636 participants over age 75 (mean age 80), hazard ratios and 95% confidence intervals for adverse outcomes with intensive treatment were:
- Major cardiovascular events: HR 0.66, 95% CI 0.51–0.85
- Death: HR 0.67, 95% CI 0.49–0.91.
Over 3 years of treatment this translated into a number needed to treat of 27 to prevent 1 cardiovascular event and 41 to prevent 1 death.
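A number needed to treat is simply the reciprocal of the absolute risk reduction; the event rates below are hypothetical values chosen only to illustrate how an NNT of 27 could arise:

```python
def nnt(control_event_rate: float, treated_event_rate: float) -> float:
    """Number needed to treat = 1 / absolute risk reduction."""
    arr = control_event_rate - treated_event_rate
    if arr <= 0:
        raise ValueError("no absolute risk reduction")
    return 1.0 / arr

# An NNT of 27 implies an absolute risk reduction of 1/27, about 3.7
# percentage points over 3 years; hypothetical 3-year event rates of
# 11.2% vs 7.5% would give roughly that NNT:
print(round(nnt(0.112, 0.075)))  # 27
```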
Within this subgroup, the benefit was similar regardless of level of frailty (measured both by a Rockwood frailty index and by gait speed).
However, the incidence of serious adverse treatment effects such as hypotension, orthostasis, electrolyte abnormalities, and acute kidney injury was higher with intensive treatment in the frail group. Although the difference was not statistically significant, it is cause for caution. Further, the exclusion criteria (history of diabetes, heart failure, dementia, stroke, weight loss of > 10%, nursing home residence) make it difficult to generalize the SPRINT findings to the general aging population.27
Tinetti et al28 performed an observational study using a nationally representative sample of older adults. They found that receiving any antihypertensive therapy was associated with an increased risk of falls with serious adverse outcomes. The risks of adverse events related to antihypertensive therapy increased with age.
Recommendations on hypertension
Managing hypertension in frail patients at risk of cardiovascular disease requires balancing the benefits vs the risks of treatment, such as polypharmacy, falls, and orthostatic hypotension.
The Eighth Joint National Committee suggests a blood pressure goal of less than 150/90 mm Hg for all adults over age 60, and less than 140/90 mm Hg for those with a history of cardiovascular disease or diabetes.29
The recently released American College of Cardiology/American Heart Association (ACC/AHA) hypertension guidelines define new blood pressure categories: less than 120/80 mm Hg is normal, 120–129/<80 mm Hg is elevated, 130–139/80–89 mm Hg is stage 1 hypertension, and 140/90 mm Hg or higher is stage 2 hypertension.30 An important caveat is the recommendation to measure blood pressure with careful, standardized technique, which is often not feasible in a busy clinic. These guidelines are intended to apply to older adults as well, noting that those with multiple morbidities and limited life expectancy will benefit from shared decision-making that incorporates patient preferences and clinical judgment. Little guidance is given on how to incorporate frailty, although the guidelines note that older adults residing in assisted living facilities and nursing homes have not been represented in randomized controlled trials.30
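The ACC/AHA categories above amount to a simple threshold classification; a minimal sketch, assuming the standard convention that the higher category applies when the systolic and diastolic readings fall into different categories:

```python
def acc_aha_stage(systolic: int, diastolic: int) -> str:
    """Classify a blood pressure reading into the ACC/AHA categories.

    Either number meeting a stage's threshold places the reading in that
    stage, and the higher category wins.
    """
    if systolic >= 140 or diastolic >= 90:
        return "stage 2 hypertension"
    if systolic >= 130 or diastolic >= 80:
        return "stage 1 hypertension"
    if systolic >= 120:
        return "elevated"
    return "normal"

print(acc_aha_stage(124, 76))  # 'elevated'
print(acc_aha_stage(136, 82))  # 'stage 1 hypertension'
```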
American Diabetes Association guidelines on hypertension in patients with diabetes recommend considering functional status, frailty, and life expectancy to decide on a blood pressure goal of either 140/90 mm Hg (if fit) or 150/90 mm Hg (if frail). They do not specify how to diagnose frailty.31
Canadian guidelines say that in those with advanced frailty (ie, entirely dependent for personal care and activities of daily living) and short life expectancy (months), it is reasonable to liberalize the systolic blood pressure goal to 160 to 190 mm Hg.32
Our recommendations. In both frail and nonfrail individuals without a limited life expectancy, it is reasonable to aim for a blood pressure below 140/90 mm Hg. For those at increased risk of cardiovascular disease who can tolerate treatment, careful lowering to 130/80 mm Hg may be considered, with close attention to side effects.
Treatment should start with the lowest possible dose, be titrated slowly, and may need to be tailored to standing blood pressure to avoid orthostatic hypotension.
Home blood pressure measurements may be beneficial in monitoring treatment.
MANAGING LIPIDS
For those over age 75, data on efficacy of statins are mixed due to the small number of older adults enrolled in randomized controlled trials of these drugs. To our knowledge, no statin trial has examined the role of frailty.
The PROSPER trial (Prospective Study of Pravastatin in the Elderly at Risk)33 randomized 5,804 patients ages 70 to 82 to receive either pravastatin or placebo. Overall, the incidence of a composite end point of major cardiovascular events was 15% lower with active treatment (P = .014). However, the mean age was 75, which does little to address the paucity of evidence for those over age 75; follow-up time was only 3 years, and subgroup analysis did not show benefit in those who did not have a history of cardiovascular disease or in women.
The JUPITER trial (Justification for the Use of Statins in Prevention: an Intervention Trial Evaluating Rosuvastatin)34 randomized 5,695 people over age 70 without cardiovascular disease to receive either rosuvastatin or placebo. Exploratory analysis showed a significant 39% reduction in all-cause mortality and major cardiovascular events with active treatment (HR 0.61, 95% CI 0.46–0.82). Over 5 years of treatment, this translates to a number needed to treat of 19 to prevent 1 major cardiovascular event and 29 to prevent 1 cardiovascular death.
The benefit of statins for primary prevention in these trials began to be apparent 2 years after treatment was initiated.
The Women’s Health Initiative,35 an observational study, found no difference in incident frailty between women older than 65 who took statins for 3 years and those who did not.
Odden et al36 found that although statin use is generally well tolerated, the risks of statin-associated functional and cognitive decline may outweigh the benefits in those older than 75. The ongoing Statin in Reducing Events in the Elderly (STAREE) trial may shed light on this issue.
Recommendations on lipid management
The ACC/AHA,3 in their 2013 guidelines, do not recommend routine statin treatment for primary prevention in those over age 75, given a lack of evidence from randomized controlled trials. For secondary prevention, ie, for those who have a history of atherosclerotic cardiovascular disease, they recommend moderate-intensity statin therapy in this age group.
Our recommendations. For patients over age 75 without cardiovascular disease or frailty and with a life expectancy of at least 2 years, consider offering a statin for primary prevention of cardiovascular disease as part of shared decision-making.
In those with known cardiovascular disease, it is reasonable to continue statin therapy except in situations where the life expectancy is less than 6 months.37
Although moderate- or high-intensity statin therapy is recommended in current guidelines, for many older adults it is prudent to consider the lowest tolerable dose to improve adherence, with close monitoring for side effects such as myalgia and weakness.
TYPE 2 DIABETES
Evidence suggests that tight glycemic control in type 2 diabetes is harmful for adults ages 55 to 79 and does not provide clear benefits for cardiovascular risk reduction, and controlling hemoglobin A1c to less than 6.0% is associated with increased mortality in older adults.38
The American Diabetes Association31 and the American Geriatrics Society39 recommend hemoglobin A1c goals of:
- 7.5% or less for older adults with 3 or more coexisting chronic illnesses requiring medical intervention (eg, arthritis, hypertension, and heart failure) and with intact cognition and function
- 8.0% or less for those identified as frail, or with multiple chronic illnesses or moderate cognitive or functional impairment
- 8.5% or 9.0% or less for those with very complex comorbidities, in long-term care, or with end-stage chronic illnesses (eg, end-stage heart failure), or with moderate to severe cognitive or functional limitation.
These guidelines do not endorse a specific frailty assessment, although the references allude to the Fried phenotype criteria, which include gait speed. An update from the American Diabetes Association provides a patient-centered approach to tailoring treatment regimens, taking into consideration the risk of hypoglycemia for each class of drugs, side effects, and cost.40
Our recommendations. Hyperglycemia remains a risk factor for cardiovascular disease in older adults and increases the risk of many geriatric conditions including delirium, dementia, frailty, and functional decline. The goal in individualizing hemoglobin A1c goals should be to avoid both hyper- and hypoglycemia.
Sulfonylureas and insulins should be used with caution, as they have the highest associated incidence of hypoglycemia of the diabetes medications.
ASPIRIN
For secondary prevention in older adults with a history of cardiovascular disease, pooled trials have consistently demonstrated a long-term benefit for aspirin use that exceeds bleeding risks, although age and frailty status were not considered.41
Aspirin for primary prevention?
The evidence for aspirin for primary prevention in older adults is mixed. Meta-analysis suggests a modest decrease in risk of nonfatal myocardial infarction but no appreciable effects on nonfatal stroke and cardiovascular death.42
The Japanese Primary Prevention Project,43 a randomized trial of low-dose aspirin for primary prevention of cardiovascular disease in adults ages 60 to 85, showed no reduction in major cardiovascular events. However, the event rate was lower than expected, the crossover rates were high, the incidence of hemorrhagic strokes was higher than in Western studies, and the trial may have been underpowered to detect the benefits of aspirin.
The US Preventive Services Task Force44 in 2016 noted that among individuals with a 10-year cardiovascular disease risk of 10% or higher based on the ACC/AHA pooled cohort equation,3 the greatest benefit of aspirin was in those ages 50 to 59. In this age group, 225 nonfatal myocardial infarctions and 84 nonfatal strokes were prevented per 10,000 men treated, with a net gain of 333 life-years. Similar findings were noted in women.
However, in those ages 60 to 69, the risks of harm begin to rise and the benefit of starting daily aspirin necessitates individualized clinical decision-making, with particular attention to bleeding risk and life expectancy.44
In those age 70 and older, data on benefit and harm are mixed. The bleeding risk of aspirin increases with age, predominantly due to gastrointestinal bleeding.44
The ongoing Aspirin in Reducing Events in the Elderly (ASPREE) trial will add to the evidence.
Aspirin recommendations for primary prevention
The American Geriatrics Society Beers Criteria do not routinely recommend aspirin use for primary prevention in those over age 80, even in those with diabetes.45
Our recommendations. In adults over age 75 who are not frail but are identified as being at moderate to high risk of cardiovascular disease using either the ACC/AHA calculator or any other risk estimator, and without a limited life expectancy, we believe it is reasonable to consider low-dose aspirin (75–100 mg daily) for primary prevention. However, there must be careful consideration particularly for those at risk of major bleeding. One approach to consider would be the addition of a proton pump inhibitor along with aspirin, though this requires further study.46
For those who have been on aspirin for primary prevention and are now older than age 80 without an adverse bleeding event, it is reasonable to stop aspirin, although risks and benefits of discontinuing aspirin should be discussed with the patient as part of shared decision-making.
In frail individuals the risks of aspirin therapy likely outweigh any benefit for primary prevention, and aspirin cannot be routinely recommended.
EXERCISE AND WEIGHT MANAGEMENT
A low body mass index is often associated with frailty, and weight loss may be a marker of underlying illness, which increases the risk of poor outcomes. However, those with an elevated body mass index and increased adiposity are in fact more likely to be frail (using the Fried physical phenotype definition) than those with a low body mass index,47 due in part to unrecognized sarcopenic obesity, ie, replacement of lean muscle with fat.
Physical activity is currently the only intervention known to improve frailty.5
Physical activity and a balanced diet are just as important in older adults, including those with reduced functional ability and multiple comorbid conditions, as in younger individuals.
A trial in frail long-term care residents (mean age 87) found that high-intensity resistance training improved muscle strength and mobility.48 The addition of a nutritional supplement with or without exercise did not affect frailty status. In community-dwelling older adults, physical activity has also been shown to improve sarcopenia and reduce falls and hip fractures.49
Progressive resistance training has been shown to improve strength and gait speed even in those with dementia.50
Tai chi has shown promising results in reducing falls and improving balance and function in both community-dwelling older adults and those in assisted living.51,52
Exercise recommendations
The US Department of Health and Human Services53 issued physical activity guidelines in 2008 with specific recommendations for older adults that include flexibility and balance training, which have been shown to reduce falls, in addition to aerobic activities and strength training.
Our recommendations. For all older adults, particularly those who are frail, we recommend a regimen of general daily activity, balance training such as tai chi, moderate-intensity aerobics such as cycling, resistance training such as using light weights, and stretching. Sessions lasting as little as 10 minutes are beneficial.
Gait speed can be monitored in the clinic to assess improvement in function over time.
SMOKING CESSATION
Although rates of smoking are decreasing, smoking remains one of the most important cardiovascular risk factors. Smoking has been associated with increased risk of frailty and significantly increased risk of death compared with never smoking.54 Smoking cessation is beneficial even for those who quit later in life.
The US Department of Health and Human Services in 2008 released an update on tobacco use and dependence,55 with specific attention to the benefit of smoking cessation for older adults.
All counseling interventions have been shown to be effective in older adults, as has nicotine replacement. Newer medications such as varenicline should be used with caution, as the risk of side effects is higher in older patients.
NUTRITION
Samieri et al,56 in an observational study of 10,670 nurses, found that those adhering to Mediterranean-style diets during midlife had 46% increased odds of healthy aging.
The PREDIMED study (Primary Prevention of Cardiovascular Disease With a Mediterranean Diet)57 in adults ages 55 to 80 showed the Mediterranean diet supplemented with olive oil and nuts reduced the incidence of major cardiovascular disease.
León-Muñoz et al,58 in a prospective study of 1,815 community-dwelling older adults in Spain followed for 3.5 years, demonstrated that adherence to a Mediterranean diet was associated with a lower incidence of frailty (P = .002) and a lower risk of slow gait speed (OR 0.53, 95% CI 0.35–0.79). Interestingly, this study also found a protective association between fish and fruit consumption and frailty.
Our recommendations. A well-balanced, diverse diet rich in whole grains, fruits, vegetables, nuts, fish, and healthy fats (polyunsaturated fatty acids), with a moderate amount of lean meats, is recommended to prevent heart disease. However, poor dental health may limit the ability of older individuals to adhere to such diets, and modifications may be needed. Additionally, age-related changes in taste and smell may contribute to poor nutrition and unintended weight loss.59 Involving a nutritionist and social worker in the patient care team should be considered especially as poor nutrition may be a sign of cognitive impairment, functional decline, and frailty.
SPECIAL CONSIDERATIONS
Special considerations when managing cardiovascular risk in the older adult include polypharmacy, multimorbidity, quality of life, and the patient’s personal preferences.
Polypharmacy, defined as taking more than 5 medications, is associated with an increased risk of adverse drug events, falls, fractures, decreased adherence, and the "prescribing cascade": prescribing more drugs to treat the side effects of an earlier drug (eg, adding antihypertensive medications to treat hypertension induced by nonsteroidal anti-inflammatory drugs).60 This is particularly important to consider before adding another medication. If a statin will be the 20th pill, it may be less beneficial and more likely to lead to additional adverse effects than if it is the fifth medication.
Patient preferences are critically important, particularly when adding or removing medications. Interventions should include a detailed medication review for appropriate prescribing and deprescribing, referral to a pharmacist, and engaging the patient’s support system.
Multimorbidity. Many older individuals have multiple chronic illnesses. The interaction of multiple conditions must be considered in creating a comprehensive plan, including prognosis, patient preference, available evidence, treatment interactions, and risks and benefits.
Quality of life. Outlook on life and choices made regarding prolongation vs quality of life may be different for the older patient than the younger patient.
Personal preferences. Although interventions such as high-intensity statins may be appropriate for a robust 85-year-old, the individual can choose to forgo any treatment. It is important to explore the patient’s goals of care and advance directives as part of shared decision-making when building a patient-centered prevention plan.61
ONE SIZE DOES NOT FIT ALL
The heterogeneity of aging rules out a one-size-fits-all recommendation for cardiovascular disease prevention and management of cardiovascular risk factors in older adults.
There is significant overlap between cardiovascular risk status and frailty.
Incorporating frailty into the creation of a cardiovascular risk prescription can aid in the development of an individualized care plan for the prevention of cardiovascular disease in the aging population.
When assessing and attempting to modify the risk of cardiovascular disease in older patients, physicians should consider incorporating the concept of frailty. The balance of risk and benefit may differ considerably for 2 patients of the same age if one is fit and the other is frail. Because the aging population is a diverse group, a one-size-fits-all approach to cardiovascular disease prevention and risk-factor management is not appropriate.
A GROWING, DIVERSE GROUP
The number of older adults with multiple cardiovascular risk factors is increasing as life expectancy improves. US residents who are age 65 today can expect to live to an average age of 84 (men) or 87 (women).1
However, the range of life expectancy for people reaching these advanced ages is wide, and chronologic age is no longer sufficient to determine a patient’s risk profile. Furthermore, the prevalence of cardiovascular disease rises with age, and age itself is the strongest predictor of cardiovascular risk.2
Current risk calculators have not been validated in people over age 80,2 making them inadequate for use in older patients. Age alone cannot identify who will benefit from preventive strategies, except in situations when a dominant disease such as metastatic cancer, end-stage renal disease, end-stage dementia, or end-stage heart failure is expected to lead to mortality within a year. Guidelines for treating common risk factors such as elevated cholesterol3 in the general population have generally not focused on adults over 75 or recognized their diversity in health status.4 In order to generate an individualized prescription for cardiovascular disease prevention for older adults, issues such as frailty, cognitive and functional status, disability, and comorbidity must be considered.
WHAT IS FRAILTY?
Clinicians have recognized frailty for decades, but to date there remains a debate on how to define it.
Clegg et al5 described frailty as “a state of increased vulnerability to poor resolution of homeostasis after a stressor event,”5 a definition generally agreed upon, as frailty predicts both poor health outcomes and death.
Indeed, in a prospective study of 5,317 men and women ranging in age from 65 to 101, those identified as frail at baseline were 6 times more likely to have died 3 years later (mortality rates 18% vs 3%), and the difference persisted at 7 years.6 After adjusting for comorbidities, those identified as frail were also more likely to fall, develop limitations in mobility or activities of daily living, or be hospitalized.
The two current leading theories of frailty were defined by Fried et al6 and by Rockwood and Mitnitski.7
Fried et al6 have operationalized frailty as a “physical phenotype,” defined as 3 or more of the following:
- Unintentional weight loss of 10 pounds in the past year
- Self-reported exhaustion
- Weakness as measured by grip strength
- Slow walking speed
- Decreased physical activity.6
Rockwood and Mitnitski7 define frailty as an accumulation of health-related deficits over time. They recommend that 30 to 40 possible deficits that cover a variety of health systems be included such as cognition, mood, function, and comorbidity. These are added and divided by the total possible number of variables to generate a score between 0 and 1.8
The difficulty in defining frailty has led to varying estimates of its prevalence, ranging from 25% to 50% in adults over 65 who have cardiovascular disease.9
CAUSE AND CONSEQUENCE OF CARDIOVASCULAR DISEASE
Studies have highlighted the bidirectional connection between frailty and cardiovascular disease.10 Frailty may predict cardiovascular disease, while cardiovascular disease is associated with an increased risk of incident frailty.9,11
Frail adults with cardiovascular disease have a higher risk of poor outcomes, even after correcting for age, comorbidities, disability, and disease severity. For example, frailty is associated with a twofold higher mortality rate in individuals with cardiovascular disease.9
A prospective cohort study12 of 3,895 middle-aged men and women demonstrated that those with an elevated cardiovascular risk score were at increased risk of frailty over 10 years (odds ratio [OR] 1.35, 95% confidence interval [CI] 1.21–1.51) and incident cardiovascular events (OR 1.36, 95% CI 1.15–1.61). This suggests that modification of cardiovascular risk factors earlier in life may reduce the risk of subsequently becoming frail.
Biologic mechanisms that may explain the connection between frailty and cardiovascular disease include derangements in inflammatory, hematologic, and endocrine pathways. People who are found to be clinically frail are more likely to have insulin resistance and elevated biomarkers such as C-reactive protein, D-dimer, and factor VIII.13 The inflammatory cytokine interleukin 6 is suggested as a common link between inflammation and thrombosis, perhaps contributing to the connection between cardiovascular disease and frailty. Many of these biomarkers have been linked to the pathophysiologic changes of aging, so-called “inflamm-aging” or immunosenescence, including sarcopenia, osteoporosis, and cardiovascular disease.14
ASSESSING FRAILTY IN THE CLINIC
For adults over age 70, frailty assessment is an important first step in managing cardiovascular disease risk.15 Frailty status will better identify those at risk of adverse outcomes in the short term and those who are most likely to benefit from long-term cardiovascular preventive strategies. Additionally, incorporating frailty assessment into traditional risk factor evaluation may permit appropriate intervention and prevention of a potentially modifiable risk factor.
Gait speed is a quick, easy, inexpensive, and sensitive way to assess frailty status, with excellent inter-rater and test-retest reliability, even in those with cognitive impairment.16 Slow gait speed predicts limitations in mobility, limitations in activities of daily living, and death.8,17
In a prospective study18 of 1,567 men and women, mean age 74, slow gait speed was the strongest predictor of subsequent cardiovascular events.18
Gait speed is usually measured over a distance of 4 meters (13.1 feet),17 and the patient is asked to walk comfortably in an unobstructed, marked area. An assistive walking device can be used if needed. If possible, this is repeated once after a brief recovery period, and the average is recorded.
The FRAIL scale19,20 is a simple, validated questionnaire that combines the Fried and Rockwood concepts of frailty and can be given over the phone or to patients in a waiting room. One point is given for each of the following, and people who have 3 or more are considered frail:
- Fatigue
- Resistance (inability to climb 1 flight of stairs)
- Ambulation (inability to walk 1 block)
- Illnesses (having more than 5)
- Loss of more than 5% of body weight.
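The scoring above can be sketched as a simple sum, one point per domain, with a score of 3 or more indicating frailty. The argument names are illustrative, not from a published instrument specification.

```python
# FRAIL scale sketch: one point per domain; score >= 3 => frail.
# Argument names are illustrative, not from the published instrument.

def frail_score(fatigue, cannot_climb_stairs, cannot_walk_block,
                illness_count, weight_loss_pct):
    score = sum([
        fatigue,                # F: fatigue
        cannot_climb_stairs,    # R: resistance (1 flight of stairs)
        cannot_walk_block,      # A: ambulation (1 block)
        illness_count > 5,      # I: illnesses (more than 5)
        weight_loss_pct > 5.0,  # L: loss of >5% of body weight
    ])
    return score, "frail" if score >= 3 else "not frail"

print(frail_score(True, True, False, illness_count=6, weight_loss_pct=2.0))
# (3, 'frail')
```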
Other measures of physical function such as grip strength (using a dynamometer), the Timed Up and Go test (assessing the ability to get up from a chair and walk a short distance), and Short Physical Performance Battery (assessing balance, chair stands, and walking speed) can be used to screen for frailty, but are more time-intensive than gait speed alone, and so are not always practical to use in a busy clinic.21
MANAGEMENT OF RISK FACTORS
Management of cardiovascular risk factors is best individualized as outlined below.
LOWERING HIGH BLOOD PRESSURE
The incidence of ischemic heart disease and stroke increases with age across all levels of elevated systolic and diastolic blood pressure.22 Hypertension is also associated with increased risk of cognitive decline. However, a J-shaped relationship has been observed in older adults, with increased cardiovascular events for both low and elevated blood pressure, although the clinical relevance remains controversial.23
Odden et al24 performed an observational study and found that high blood pressure was associated with an increased mortality rate in older adults with normal gait speed, while in those with slow gait speed, high blood pressure neither harmed nor helped. Those who could not walk 6 meters appeared to benefit from higher blood pressure.
HYVET (the Hypertension in the Very Elderly Trial),25 a randomized controlled trial in 3,845 community-dwelling people age 80 or older with sustained systolic blood pressure higher than 160 mm Hg, found a significant reduction in rates of stroke and all-cause mortality (relative risk [RR] 0.76, P = .007) in the treatment arm using indapamide with perindopril if necessary to reach a target blood pressure of 150/80 mm Hg.
Frailty was not assessed during the trial; however, in a reanalysis, the results did not change in those identified as frail using a Rockwood frailty index (a count of health-related deficits accumulated over the lifespan).26
SPRINT (the Systolic Blood Pressure Intervention Trial)27 randomized participants age 50 and older with systolic blood pressure of 130 to 180 mm Hg and at increased risk of cardiovascular disease to intensive treatment (goal systolic blood pressure ≤ 120 mm Hg) or standard treatment (goal systolic blood pressure ≤ 140 mm Hg). In a prespecified subgroup of 2,636 participants over age 75 (mean age 80), hazard ratios and 95% confidence intervals for adverse outcomes with intensive treatment were:
- Major cardiovascular events: HR 0.66, 95% CI 0.51–0.85
- Death: HR 0.67, 95% CI 0.49–0.91.
Over 3 years of treatment this translated into a number needed to treat of 27 to prevent 1 cardiovascular event and 41 to prevent 1 death.
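The number needed to treat is the reciprocal of the absolute risk reduction over the follow-up period. The event rates in the sketch below are hypothetical, chosen only to illustrate the arithmetic behind a figure like an NNT of 27; they are not taken from SPRINT.

```python
# Number needed to treat = 1 / absolute risk reduction.
# Event rates here are hypothetical, chosen only to show the arithmetic.

def nnt(control_event_rate, treated_event_rate):
    arr = control_event_rate - treated_event_rate
    if arr <= 0:
        raise ValueError("no absolute risk reduction; NNT undefined")
    return 1.0 / arr

# e.g., 10.9% vs 7.2% event rates over the follow-up period
print(round(nnt(0.109, 0.072)))  # 27
```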
Within this subgroup, the benefit was similar regardless of level of frailty (measured both by a Rockwood frailty index and by gait speed).
However, the incidence of serious adverse treatment effects such as hypotension, orthostasis, electrolyte abnormalities, and acute kidney injury was higher with intensive treatment in the frail group. Although the difference was not statistically significant, it is cause for caution. Further, the exclusion criteria (history of diabetes, heart failure, dementia, stroke, weight loss of > 10%, nursing home residence) make it difficult to generalize the SPRINT findings to the general aging population.27
Tinetti et al28 performed an observational study using a nationally representative sample of older adults. They found that receiving any antihypertensive therapy was associated with an increased risk of falls with serious adverse outcomes. The risks of adverse events related to antihypertensive therapy increased with age.
Recommendations on hypertension
Managing hypertension in frail patients at risk of cardiovascular disease requires balancing the benefits vs the risks of treatment, such as polypharmacy, falls, and orthostatic hypotension.
The Eighth Joint National Committee suggests a blood pressure goal of less than 150/90 mm Hg for all adults over age 60, and less than 140/90 mm Hg for those with chronic kidney disease or diabetes.29
The recently released American College of Cardiology/American Heart Association (ACC/AHA) guidelines on hypertension define a new classification: blood pressure below 120/80 mm Hg is normal, 120–129/<80 mm Hg is elevated, 130–139/80–89 mm Hg is stage 1 hypertension, and 140/90 mm Hg or higher is stage 2 hypertension.30 An important caveat to these guidelines is the recommendation to measure blood pressure with proper technique, which is often not possible in busy clinics. These guidelines are intended to apply to older adults as well, with a note that those with multiple morbidities and limited life expectancy will benefit from a shared decision that incorporates patient preferences and clinical judgment. Little guidance is given on how to incorporate frailty, although it is noted that older adults who reside in assisted living facilities and nursing homes have not been represented in randomized controlled trials.30
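The thresholds listed above can be expressed as a small classification function. When systolic and diastolic pressures fall into different categories, the higher category applies, which is the guideline's stated convention; the function itself is an illustrative sketch.

```python
# 2017 ACC/AHA blood pressure categories (sketch). When systolic and
# diastolic readings fall in different categories, the higher one wins.

def bp_category(sbp, dbp):
    if sbp >= 140 or dbp >= 90:
        return "stage 2 hypertension"
    if sbp >= 130 or dbp >= 80:
        return "stage 1 hypertension"
    if sbp >= 120:
        return "elevated"
    return "normal"

print(bp_category(118, 76))  # normal
print(bp_category(124, 78))  # elevated
print(bp_category(132, 78))  # stage 1 hypertension
print(bp_category(150, 85))  # stage 2 hypertension
```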
American Diabetes Association guidelines on hypertension in patients with diabetes recommend considering functional status, frailty, and life expectancy to decide on a blood pressure goal of either less than 140/90 mm Hg (if fit) or less than 150/90 mm Hg (if frail). They do not specify how to diagnose frailty.31
Canadian guidelines say that in those with advanced frailty (ie, entirely dependent for personal care and activities of daily living) and short life expectancy (months), it is reasonable to liberalize the systolic blood pressure goal to 160 to 190 mm Hg.32
Our recommendations. In both frail and nonfrail individuals without a limited life expectancy, it is reasonable to aim for a blood pressure below 140/90 mm Hg. For those at increased risk of cardiovascular disease who can tolerate treatment, careful lowering to 130/80 mm Hg may be considered, with close attention to side effects.
Treatment should start with the lowest possible dose, be titrated slowly, and may need to be tailored to standing blood pressure to avoid orthostatic hypotension.
Home blood pressure measurements may be beneficial in monitoring treatment.
MANAGING LIPIDS
For those over age 75, data on efficacy of statins are mixed due to the small number of older adults enrolled in randomized controlled trials of these drugs. To our knowledge, no statin trial has examined the role of frailty.
The PROSPER trial (Prospective Study of Pravastatin in the Elderly at Risk)33 randomized 5,804 patients ages 70 to 82 to receive either pravastatin or placebo. Overall, the incidence of a composite end point of major cardiovascular events was 15% lower with active treatment (P = .014). However, the mean age was 75, which does little to address the paucity of evidence for those over age 75; follow-up time was only 3 years, and subgroup analysis did not show benefit in those who did not have a history of cardiovascular disease or in women.
The JUPITER trial (Justification for the Use of Statins in Prevention: an Intervention Trial Evaluating Rosuvastatin)34 randomized 5,695 people over age 70 without cardiovascular disease to receive either rosuvastatin or placebo. Exploratory analysis showed a significant 39% reduction in all-cause mortality and major cardiovascular events with active treatment (HR 0.61, 95% CI 0.46–0.82). Over 5 years of treatment, this translates to a number needed to treat of 19 to prevent 1 major cardiovascular event and 29 to prevent 1 cardiovascular death.
The benefit of statins for primary prevention in these trials began to be apparent 2 years after treatment was initiated.
The Women’s Health Initiative,35 an observational study, found no difference in incident frailty in women older than 65 who took statins for 3 years compared with those who did not take statins.
Odden et al36 found that although statin use is generally well tolerated, the risks of statin-associated functional and cognitive decline may outweigh the benefits in those older than 75. The ongoing Statin in Reducing Events in the Elderly (STAREE) trial may shed light on this issue.
Recommendations on lipid management
The ACC/AHA,3 in their 2013 guidelines, do not recommend routine statin treatment for primary prevention in those over age 75, given a lack of evidence from randomized controlled trials. For secondary prevention, ie, for those who have a history of atherosclerotic cardiovascular disease, they recommend moderate-intensity statin therapy in this age group.
Our recommendations. For patients over age 75 without cardiovascular disease or frailty and with a life expectancy of at least 2 years, consider offering a statin for primary prevention of cardiovascular disease as part of shared decision-making.
In those with known cardiovascular disease, it is reasonable to continue statin therapy except in situations where the life expectancy is less than 6 months.37
Although moderate- or high-intensity statin therapy is recommended in current guidelines, for many older adults it is prudent to consider the lowest tolerable dose to improve adherence, with close monitoring for side effects such as myalgia and weakness.
TYPE 2 DIABETES
Evidence suggests that tight glycemic control in type 2 diabetes is harmful for adults ages 55 to 79 without providing clear benefit for cardiovascular risk reduction, and that controlling hemoglobin A1c to less than 6.0% is associated with increased mortality in older adults.38
The American Diabetes Association31 and the American Geriatrics Society39 recommend hemoglobin A1c goals of:
- 7.5% or less for older adults with 3 or more coexisting chronic illnesses requiring medical intervention (eg, arthritis, hypertension, and heart failure) and with intact cognition and function
- 8.0% or less for those identified as frail, or with multiple chronic illnesses or moderate cognitive or functional impairment
- 8.5% or 9.0% or less for those with very complex comorbidities, in long-term care, or with end-stage chronic illnesses (eg, end-stage heart failure), or with moderate to severe cognitive or functional limitation.
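The guideline tiers above amount to a simple mapping from clinical complexity to a hemoglobin A1c goal. The three-level "tier" input below is an illustrative simplification of the clinical assessment, not a validated instrument.

```python
# Sketch of the guideline tiers mapped to hemoglobin A1c goals (%).
# Tier names are illustrative simplifications of the clinical assessment.

A1C_GOALS = {
    "comorbid_intact": 7.5,      # >=3 chronic illnesses, intact cognition/function
    "frail_or_complex": 8.0,     # frail, multiple illnesses, or moderate impairment
    "very_complex": 8.5,         # long-term care or end-stage illness (up to 9.0)
}

def a1c_goal(tier):
    try:
        return A1C_GOALS[tier]
    except KeyError:
        raise ValueError(f"unknown tier: {tier!r}")

print(a1c_goal("frail_or_complex"))  # 8.0
```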
These guidelines do not endorse a specific frailty assessment, although the references allude to the Fried phenotype criteria, which include gait speed. An update from the American Diabetes Association provides a patient-centered approach to tailoring treatment regimens, taking into consideration the risk of hypoglycemia for each class of drugs, side effects, and cost.40
Our recommendations. Hyperglycemia remains a risk factor for cardiovascular disease in older adults and increases the risk of many geriatric conditions including delirium, dementia, frailty, and functional decline. The goal in individualizing hemoglobin A1c goals should be to avoid both hyper- and hypoglycemia.
Sulfonylureas and insulins should be used with caution, as they have the highest associated incidence of hypoglycemia of the diabetes medications.
ASPIRIN
For secondary prevention in older adults with a history of cardiovascular disease, pooled trials have consistently demonstrated a long-term benefit for aspirin use that exceeds bleeding risks, although age and frailty status were not considered.41
Aspirin for primary prevention?
The evidence for aspirin for primary prevention in older adults is mixed. Meta-analysis suggests a modest decrease in risk of nonfatal myocardial infarction but no appreciable effects on nonfatal stroke and cardiovascular death.42
The Japanese Primary Prevention Project,43 a randomized trial of low-dose aspirin for primary prevention of cardiovascular disease in adults ages 60 to 85, showed no reduction in major cardiovascular events. However, the event rate was lower than expected, the crossover rates were high, the incidence of hemorrhagic strokes was higher than in Western studies, and the trial may have been underpowered to detect the benefits of aspirin.
The US Preventive Services Task Force44 in 2016 noted that among individuals with a 10-year cardiovascular disease risk of 10% or higher based on the ACC/AHA pooled cohort equation,3 the greatest benefit of aspirin was in those ages 50 to 59. In this age group, 225 nonfatal myocardial infarctions and 84 nonfatal strokes were prevented per 10,000 men treated, with a net gain of 333 life-years. Similar findings were noted in women.
However, in those ages 60 to 69, the risks of harm begin to rise and the benefit of starting daily aspirin necessitates individualized clinical decision-making, with particular attention to bleeding risk and life expectancy.44
In those age 70 and older, data on benefit and harm are mixed. The bleeding risk of aspirin increases with age, predominantly due to gastrointestinal bleeding.44
The ongoing Aspirin in Reducing Events in the Elderly (ASPREE) trial will add to the evidence.
Aspirin recommendations for primary prevention
The American Geriatrics Society Beers Criteria do not routinely recommend aspirin use for primary prevention in those over age 80, even in those with diabetes.45
Our recommendations. In adults over age 75 who are not frail but are identified as being at moderate to high risk of cardiovascular disease using the ACC/AHA calculator or another risk estimator, and who do not have a limited life expectancy, we believe it is reasonable to consider low-dose aspirin (75–100 mg daily) for primary prevention. However, careful consideration is required, particularly for those at risk of major bleeding. One approach to consider would be adding a proton pump inhibitor along with aspirin, though this requires further study.46
For those who have been on aspirin for primary prevention and are now older than age 80 without an adverse bleeding event, it is reasonable to stop aspirin, although risks and benefits of discontinuing aspirin should be discussed with the patient as part of shared decision-making.
In frail individuals the risks of aspirin therapy likely outweigh any benefit for primary prevention, and aspirin cannot be routinely recommended.
EXERCISE AND WEIGHT MANAGEMENT
A low body mass index is often associated with frailty, and weight loss may be a marker of underlying illness, which increases the risk of poor outcomes. However, those with an elevated body mass index and increased adiposity are in fact more likely to be frail (using the Fried physical phenotype definition) than those with a low body mass index,47 due in part to unrecognized sarcopenic obesity, ie, replacement of lean muscle with fat.
Physical activity is currently the only intervention known to improve frailty.5
Physical activity and a balanced diet are just as important in older adults, including those with reduced functional ability and multiple comorbid conditions, as in younger individuals.
A trial in frail long-term care residents (mean age 87) found that high-intensity resistance training improved muscle strength and mobility.48 The addition of a nutritional supplement with or without exercise did not affect frailty status. In community-dwelling older adults, physical activity has also been shown to improve sarcopenia and reduce falls and hip fractures.49
Progressive resistance training has been shown to improve strength and gait speed even in those with dementia.50
Tai chi has shown promising results in reducing falls and improving balance and function in both community-dwelling older adults and those in assisted living.51,52
Exercise recommendations
The US Department of Health and Human Services53 issued physical activity guidelines in 2008 with specific recommendations for older adults that include flexibility and balance training, which have been shown to reduce falls, in addition to aerobic activities and strength training.
Our recommendations. For all older adults, particularly those who are frail, we recommend a regimen of general daily activity, balance training such as tai chi, moderate-intensity aerobics such as cycling, resistance training such as using light weights, and stretching. Sessions lasting as little as 10 minutes are beneficial.
Gait speed can be monitored in the clinic to assess improvement in function over time.
SMOKING CESSATION
Although rates of smoking are decreasing, smoking remains one of the most important cardiovascular risk factors. Smoking has been associated with increased risk of frailty and significantly increased risk of death compared with never smoking.54 Smoking cessation is beneficial even for those who quit later in life.
The US Department of Health and Human Services in 2008 released an update on tobacco use and dependence,55 with specific attention to the benefit of smoking cessation for older adults.
All counseling interventions have been shown to be effective in older adults, as has nicotine replacement. Newer medications such as varenicline should be used with caution, as the risk of side effects is higher in older patients.
NUTRITION
Samieri et al,56 in an observational study of 10,670 nurses, found that those adhering to Mediterranean-style diets during midlife had 46% increased odds of healthy aging.
The PREDIMED study (Primary Prevention of Cardiovascular Disease With a Mediterranean Diet)57 in adults ages 55 to 80 showed the Mediterranean diet supplemented with olive oil and nuts reduced the incidence of major cardiovascular disease.
Leon-Munoz et al58 performed a prospective study of 1,815 community-dwelling older adults in Spain followed for 3.5 years, which demonstrated that adherence to a Mediterranean diet was associated with a lower incidence of frailty (P = .002) and a lower risk of slow gait speed (OR 0.53, 95% CI 0.35–0.79). Interestingly, this study also found fish and fruit consumption to be protective against frailty.
Our recommendations. A well-balanced, diverse diet rich in whole grains, fruits, vegetables, nuts, fish, and healthy fats (polyunsaturated fatty acids), with a moderate amount of lean meats, is recommended to prevent heart disease. However, poor dental health may limit the ability of older individuals to adhere to such diets, and modifications may be needed. Additionally, age-related changes in taste and smell may contribute to poor nutrition and unintended weight loss.59 Involving a nutritionist and social worker in the patient care team should be considered especially as poor nutrition may be a sign of cognitive impairment, functional decline, and frailty.
SPECIAL CONSIDERATIONS
Special considerations when managing cardiovascular risk in the older adult include polypharmacy, multimorbidity, quality of life, and the patient’s personal preferences.
Polypharmacy, defined as taking more than 5 medications, is associated with an increased risk of adverse drug events, falls, fractures, decreased adherence, and the “prescribing cascade,” in which more drugs are prescribed to treat the side effects of earlier drugs (eg, adding antihypertensive medications to treat hypertension induced by nonsteroidal anti-inflammatory drugs).60 This is particularly important when considering additional medications. If a statin will be the 20th pill, it may be less beneficial and more likely to lead to adverse effects than if it is the fifth.
Patient preferences are critically important, particularly when adding or removing medications. Interventions should include a detailed medication review for appropriate prescribing and deprescribing, referral to a pharmacist, and engaging the patient’s support system.
Multimorbidity. Many older individuals have multiple chronic illnesses. The interaction of multiple conditions must be considered in creating a comprehensive plan, including prognosis, patient preference, available evidence, treatment interactions, and risks and benefits.
Quality of life. Outlook on life and choices made regarding prolongation vs quality of life may be different for the older patient than the younger patient.
Personal preferences. Although interventions such as high-intensity statins for a robust 85-year-old may be appropriate, the individual can choose to forgo any treatment. It is important to explore the patient’s goals of care and advance directives as part of shared decision-making when building a patient-centered prevention plan.61
ONE SIZE DOES NOT FIT ALL
The heterogeneity of aging rules out a one-size-fits-all recommendation for cardiovascular disease prevention and management of cardiovascular risk factors in older adults.
There is significant overlap between cardiovascular risk status and frailty.
Incorporating frailty into the creation of a cardiovascular risk prescription can aid in the development of an individualized care plan for the prevention of cardiovascular disease in the aging population.
- Social Security Administration (SSA). Calculators: life expectancy. www.ssa.gov/planners/lifeexpectancy.html. Accessed December 8, 2017.
- Benjamin EJ, Blaha MJ, Chiuve SE, et al. Heart disease and stroke statistics—2017 update: a report from the American Heart Association. Circulation 2017; 135:e146–e603.
- Stone NJ, Robinson JG, Lichtenstein AH, et al; American College of Cardiology/American Heart Association Task Force on Practice Guidelines. 2013 ACC/AHA guideline on the treatment of blood cholesterol to reduce atherosclerotic cardiovascular risk in adults: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines. J Am Coll Cardiol 2014; 63:2889–2934.
- Rich MW, Chyun DA, Skolnick AH, et al; American Heart Association Older Populations Committee of the Council on Clinical Cardiology, Council on Cardiovascular and Stroke Nursing, Council on Cardiovascular Surgery and Anesthesia, and Stroke Council; American College of Cardiology; and American Geriatrics Society. Knowledge gaps in cardiovascular care of the older adult population: a scientific statement from the American Heart Association, American College of Cardiology, and American Geriatrics Society. Circulation 2016; 133:2103–2122.
- Clegg A, Young J, Iliffe S, Rikkert MO, Rockwood K. Frailty in elderly people. Lancet 2013; 381:752–762.
- Fried LP, Tangen CM, Walston J, et al; Cardiovascular Health Study Collaborative Research Group. Frailty in older adults: evidence for a phenotype. J Gerontol A Biol Sci Med Sci 2001; 56:M146–M156.
- Rockwood K, Mitnitski A. Frailty in relation to the accumulation of deficits. J Gerontol A Biol Sci Med Sci 2007; 62:722–727.
- Studenski S, Perera S, Patel K, et al. Gait speed and survival in older adults. JAMA 2011; 305:50–58.
- Afilalo J, Alexander KP, Mack MJ, et al. Frailty assessment in the cardiovascular care of older adults. J Am Coll Cardiol 2014; 63:747–762.
- Afilalo J, Karunananthan S, Eisenberg MJ, Alexander KP, Bergman H. Role of frailty in patients with cardiovascular disease. Am J Cardiol 2009; 103:1616–1621.
- Woods NF, LaCroix AZ, Gray SL, et al; Women’s Health Initiative. Frailty: emergence and consequences in women aged 65 and older in the Women's Health Initiative Observational Study. J Am Geriatr Soc 2005; 53:1321–1330.
- Bouillon K, Batty GD, Hamer M, et al. Cardiovascular disease risk scores in identifying future frailty: the Whitehall II prospective cohort study. Heart 2013; 99:737–742.
- Walston J, McBurnie MA, Newman A, et al; Cardiovascular Health Study. Frailty and activation of the inflammation and coagulation systems with and without clinical comorbidities: results from the Cardiovascular Health Study. Arch Intern Med 2002; 162:2333–2341.
- De Martinis M, Franceschi C, Monti D, Ginaldi L. Inflammation markers predicting frailty and mortality in the elderly. Exp Mol Pathol 2006; 80:219–227.
- Morley JE. Frailty fantasia. J Am Med Dir Assoc 2017; 18:813–815.
- Munoz-Mendoza CL, Cabanero-Martinez MJ, Millan-Calenti JC, Cabrero-Garcia J, Lopez-Sanchez R, Maseda-Rodriguez A. Reliability of 4-m and 6-m walking speed tests in elderly people with cognitive impairment. Arch Gerontol Geriatr 2011; 52:e67–e70.
- Abellan van Kan G, Rolland Y, Andrieu S, et al. Gait speed at usual pace as a predictor of adverse outcomes in community-dwelling older people: an International Academy on Nutrition and Aging (IANA) Task Force. J Nutr Health Aging 2009; 13:881–889.
- Sergi G, Veronese N, Fontana L, et al. Pre-frailty and risk of cardiovascular disease in elderly men and women: the Pro.V.A. study. J Am Coll Cardiol 2015; 65:976–983.
- Abellan van Kan G, Rolland Y, Bergman H, Morley JE, Kritchevsky SB, Vellas B. The I.A.N.A Task Force on frailty assessment of older people in clinical practice. J Nutr Health Aging 2008; 12:29–37.
- Morley JE, Malmstrom TK, Miller DK. A simple frailty questionnaire (FRAIL) predicts outcomes in middle-aged African Americans. J Nutr Health Aging 2012;16:601–608.
- Forman DE, Arena R, Boxer R, et al; American Heart Association Council on Clinical Cardiology; Council on Cardiovascular and Stroke Nursing; Council on Quality of Care and Outcomes Research; and Stroke Council. Prioritizing functional capacity as a principal end point for therapies oriented to older adults with cardiovascular disease: a scientific statement for healthcare professionals from the American Heart Association. Circulation 2017; 135:e894–e918.
- Lewington S, Clarke R, Qizilbash N, Peto R, Collins R; Prospective Studies Collaboration. Age-specific relevance of usual blood pressure to vascular mortality: a meta-analysis of individual data for one million adults in 61 prospective studies. Lancet 2002; 360:1903–1913.
- Mancia G, Grassi G. Aggressive blood pressure lowering is dangerous: the J-curve: pro side of the argument. Hypertension 2014; 63:29–36.
- Odden MC, Peralta CA, Haan MN, Covinsky KE. Rethinking the association of high blood pressure with mortality in elderly adults: the impact of frailty. Arch Intern Med 2012; 172:1162–1168.
- Beckett NS, Peters R, Fletcher AE, et al; HYVET Study Group. Treatment of hypertension in patients 80 years of age or older. N Engl J Med 2008; 358:1887–1898.
- Warwick J, Falaschetti E, Rockwood K, et al. No evidence that frailty modifies the positive impact of antihypertensive treatment in very elderly people: an investigation of the impact of frailty upon treatment effect in the HYpertension in the Very Elderly Trial (HYVET) study, a double-blind, placebo-controlled study of antihypertensives in people with hypertension aged 80 and over. BMC Med 2015; 13:78.
- Williamson JD, Supiano MA, Applegate WB, et al; SPRINT Research Group. Intensive vs standard blood pressure control and cardiovascular disease outcomes in adults aged ≥ 75 years: a randomized clinical trial. JAMA 2016; 315:2673–2682.
- Tinetti ME, Han L, Lee DS, et al. Antihypertensive medications and serious fall injuries in a nationally representative sample of older adults. JAMA Intern Med 2014; 174:588–595.
- James PA, Oparil S, Carter BL, et al. 2014 evidence-based guideline for the management of high blood pressure in adults: report from the panel members appointed to the Eighth Joint National Committee (JNC 8). JAMA 2014; 311:507–520.
- Whelton PK, Carey RM, Aronow WS, et al. 2017 ACC/AHA/AAPA/ABC/ACPM/AGS/APhA/ASH/ASPC/NMA/PCNA guideline for the prevention, detection, evaluation, and management of high blood pressure in adults: a report of the American College of Cardiology/American Heart Association Task Force on Clinical Practice Guidelines. Hypertension 2017 Nov 13. Epub ahead of print.
- American Diabetes Association. 11. Older adults. Diabetes Care 2017; 40(suppl 1):S99–S104.
- Mallery LH, Allen M, Fleming I, et al. Promoting higher blood pressure targets for frail older adults: a consensus guideline from Canada. Cleve Clin J Med 2014; 81:427–437.
- Shepherd J, Blauw GJ, Murphy MB, et al; PROSPER study group. PROspective Study of Pravastatin in the Elderly at Risk. Pravastatin in elderly individuals at risk of vascular disease (PROSPER): a randomised controlled trial. Lancet 2002; 360:1623–1630.
- Glynn RJ, Koenig W, Nordestgaard BG, Shepherd J, Ridker PM. Rosuvastatin for primary prevention in older persons with elevated C-reactive protein and low to average low-density lipoprotein cholesterol levels: exploratory analysis of a randomized trial. Ann Intern Med 2010; 152:488–496, W174.
- LaCroix AZ, Gray SL, Aragaki A, et al; Women’s Health Initiative. Statin use and incident frailty in women aged 65 years or older: prospective findings from the Women’s Health Initiative Observational Study. J Gerontol A Biol Sci Med Sci 2008; 63:369–375.
- Odden MC, Pletcher MJ, Coxson PG, et al. Cost-effectiveness and population impact of statins for primary prevention in adults aged 75 years or older in the United States. Ann Intern Med 2015; 162:533–541.
- Kutner JS, Blatchford PJ, Taylor DH Jr, et al. Safety and benefit of discontinuing statin therapy in the setting of advanced, life-limiting illness: a randomized clinical trial. JAMA Intern Med 2015; 175:691–700.
- Huang ES, Liu JY, Moffet HH, John PM, Karter AJ. Glycemic control, complications, and death in older diabetic patients: the diabetes and aging study. Diabetes Care 2011; 34:1329–1336.
- Kirkman MS, Briscoe VJ, Clark N, et al; Consensus Development Conference on Diabetes and Older Adults. Diabetes in older adults: a consensus report. J Am Geriatr Soc 2012; 60:2342–2356.
- Inzucchi SE, Bergenstal RM, Buse JB, et al. Management of hyperglycemia in type 2 diabetes, 2015: a patient-centered approach: update to a position statement of the American Diabetes Association and the European Association for the Study of Diabetes. Diabetes Care 2015; 38:140–149.
- Antithrombotic Trialists’ Collaboration. Collaborative meta-analysis of randomised trials of antiplatelet therapy for prevention of death, myocardial infarction, and stroke in high risk patients. BMJ 2002; 324:71–86.
- Antithrombotic Trialists’ (ATT) Collaboration; Baigent C, Blackwell L, Collins R, et al. Aspirin in the primary and secondary prevention of vascular disease: collaborative meta-analysis of individual participant data from randomised trials. Lancet 2009; 373:1849–1860.
- Ikeda Y, Shimada K, Teramoto T, et al. Low-dose aspirin for primary prevention of cardiovascular events in Japanese patients 60 years or older with atherosclerotic risk factors: a randomized clinical trial. JAMA 2014; 312:2510–2520.
- Bibbins-Domingo K; US Preventive Services Task Force. Aspirin use for the primary prevention of cardiovascular disease and colorectal cancer: US Preventive Services Task Force Recommendation Statement. Ann Intern Med 2016; 164:836–845.
- American Geriatrics Society 2012 Beers Criteria Update Expert Panel. American Geriatrics Society updated Beers Criteria for potentially inappropriate medication use in older adults. J Am Geriatr Soc 2012; 60:616–631.
- Li L, Geraghty OC, Mehta Z, Rothwell PM. Age-specific risks, severity, time course, and outcome of bleeding on long-term antiplatelet treatment after vascular events: a population-based cohort study. Lancet 2017; 390:490–499.
- Barzilay JI, Blaum C, Moore T, et al. Insulin resistance and inflammation as precursors of frailty: the Cardiovascular Health Study. Arch Intern Med 2007; 167:635–641.
- Fiatarone MA, O’Neill EF, Ryan ND, et al. Exercise training and nutritional supplementation for physical frailty in very elderly people. N Engl J Med 1994; 330:1769–1775.
- Uusi-Rasi K, Patil R, Karinkanta S, et al. Exercise and vitamin D in fall prevention among older women: a randomized clinical trial. JAMA Intern Med 2015; 175:703–711.
- Hauer K, Schwenk M, Zieschang T, Essig M, Becker C, Oster P. Physical training improves motor performance in people with dementia: a randomized controlled trial. J Am Geriatr Soc 2012; 60:8–15.
- Li F, Harmer P, Fitzgerald K. Implementing an evidence-based fall prevention intervention in community senior centers. Am J Public Health 2016; 106:2026–2031.
- Manor B, Lough M, Gagnon MM, Cupples A, Wayne PM, Lipsitz LA. Functional benefits of tai chi training in senior housing facilities. J Am Geriatr Soc 2014; 62:1484–1489.
- Physical Activity Guidelines Advisory Committee report, 2008. To the Secretary of Health and Human Services. Part A: executive summary. Nutr Rev 2009; 67:114–120.
- Hubbard RE, Searle SD, Mitnitski A, Rockwood K. Effect of smoking on the accumulation of deficits, frailty and survival in older adults: a secondary analysis from the Canadian Study of Health and Aging. J Nutr Health Aging 2009; 13:468–472.
- Clinical Practice Guideline Treating Tobacco Use and Dependence 2008 Update Panel, Liaisons, and Staff. A clinical practice guideline for treating tobacco use and dependence: 2008 update. A US Public Health Service report. Am J Prev Med 2008; 35:158–176.
- Samieri C, Sun Q, Townsend MK, et al. The association between dietary patterns at midlife and health in aging: an observational study. Ann Intern Med 2013; 159:584–591.
- Estruch R, Ros E, Martinez-Gonzalez MA. Mediterranean diet for primary prevention of cardiovascular disease. N Engl J Med 2013; 369:676–677.
- Leon-Munoz LM, Guallar-Castillon P, Lopez-Garcia E, Rodriguez-Artalejo F. Mediterranean diet and risk of frailty in community-dwelling older adults. J Am Med Dir Assoc 2014; 15:899–903.
- Doty RL, Shaman P, Applebaum SL, Giberson R, Siksorski L, Rosenberg L. Smell identification ability: changes with age. Science 1984; 226:1441–1443.
- Merel SE, Paauw DS. Common drug side effects and drug-drug interactions in elderly adults in primary care. J Am Geriatr Soc 2017 Mar 21. Epub ahead of print.
- Epstein RM, Peters E. Beyond information: exploring patients’ preferences. JAMA 2009; 302:195–197.
- Social Security Administration (SSA). Calculators: life expectancy. www.ssa.gov/planners/lifeexpectancy.html. Accessed December 8, 2017.
- Benjamin EJ, Blaha MJ, Chiuve SE, et al. Heart disease and stroke statistics—2017 update: a report from the American Heart Association. Circulation 2017; 135:e146–e603.
- Stone NJ, Robinson JG, Lichtenstein AH, et al; American College of Cardiology/American Heart Association Task Force on Practice Guidelines. 2013 ACC/AHA guideline on the treatment of blood cholesterol to reduce atherosclerotic cardiovascular risk in adults: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines. J Am Coll Cardiol 2014; 63:2889–2934.
- Rich MW, Chyun DA, Skolnick AH, et al; American Heart Association Older Populations Committee of the Council on Clinical Cardiology, Council on Cardiovascular and Stroke Nursing, Council on Cardiovascular Surgery and Anesthesia, and Stroke Council; American College of Cardiology; and American Geriatrics Society. Knowledge gaps in cardiovascular care of the older adult population: a scientific statement from the American Heart Association, American College of Cardiology, and American Geriatrics Society. Circulation 2016; 133:2103–2122.
- Clegg A, Young J, Iliffe S, Rikkert MO, Rockwood K. Frailty in elderly people. Lancet 2013; 381:752–762.
- Fried LP, Tangen CM, Walston J, et al; Cardiovascular Health Study Collaborative Research Group. Frailty in older adults: evidence for a phenotype. J Gerontol A Biol Sci Med Sci 2001; 56:M146–M156.
- Rockwood K, Mitnitski A. Frailty in relation to the accumulation of deficits. J Gerontol A Biol Sci Med Sci 2007; 62:722–727.
- Studenski S, Perera S, Patel K, et al. Gait speed and survival in older adults. JAMA 2011; 305:50–58.
- Afilalo J, Alexander KP, Mack MJ, et al. Frailty assessment in the cardiovascular care of older adults. J Am Coll Cardiol 2014; 63:747–762.
- Afilalo J, Karunananthan S, Eisenberg MJ, Alexander KP, Bergman H. Role of frailty in patients with cardiovascular disease. Am J Cardiol 2009; 103:1616–1621.
- Woods NF, LaCroix AZ, Gray SL, et al; Women’s Health Initiative. Frailty: emergence and consequences in women aged 65 and older in the Women's Health Initiative Observational Study. J Am Geriatr Soc 2005; 53:1321–1330.
- Bouillon K, Batty GD, Hamer M, et al. Cardiovascular disease risk scores in identifying future frailty: the Whitehall II prospective cohort study. Heart 2013; 99:737–742.
- Walston J, McBurnie MA, Newman A, et al; Cardiovascular Health Study. Frailty and activation of the inflammation and coagulation systems with and without clinical comorbidities: results from the Cardiovascular Health Study. Arch Intern Med 2002; 162:2333–2341.
- De Martinis M, Franceschi C, Monti D, Ginaldi L. Inflammation markers predicting frailty and mortality in the elderly. Exp Mol Pathol 2006; 80:219–227.
- Morley JE. Frailty fantasia. J Am Med Dir Assoc 2017; 18:813–815.
- Munoz-Mendoza CL, Cabanero-Martinez MJ, Millan-Calenti JC, Cabrero-Garcia J, Lopez-Sanchez R, Maseda-Rodriguez A. Reliability of 4-m and 6-m walking speed tests in elderly people with cognitive impairment. Arch Gerontol Geriatr 2011; 52:e67–e70.
- Abellan van Kan G, Rolland Y, Andrieu S, et al. Gait speed at usual pace as a predictor of adverse outcomes in community-dwelling older people an International Academy on Nutrition and Aging (IANA) Task Force. J Nutr Health Aging 2009; 13:881–889.
- Sergi G, Veronese N, Fontana L, et al. Pre-frailty and risk of cardiovascular disease in elderly men and women: the Pro.V.A. study. J Am Coll Cardiol 2015; 65:976–983.
- Abellan van Kan G, Rolland Y, Bergman H, Morley JE, Kritchevsky SB, Vellas B. The I.A.N.A Task Force on frailty assessment of older people in clinical practice. J Nutr Health Aging 2008; 12:29–37.
- Morley JE, Malmstrom TK, Miller DK. A simple frailty questionnaire (FRAIL) predicts outcomes in middle-aged African Americans. J Nutr Health Aging 2012;16:601–608.
- Forman DE, Arena R, Boxer R, et al; American Heart Association Council on Clinical Cardiology; Council on Cardiovascular and Stroke Nursing; Council on Quality of Care and Outcomes Research; and Stroke Council. Prioritizing functional capacity as a principal end point for therapies oriented to older adults with cardiovascular disease: a scientific statement for healthcare professionals from the American Heart Association. Circulation 2017; 135:e894–e918.
- Lewington S, Clarke R, Qizilbash N, Peto R, Collins R; Prospective Studies Collaboration. Age-specific relevance of usual blood pressure to vascular mortality: a meta-analysis of individual data for one million adults in 61 prospective studies. Lancet 2002; 360:1903–1913.
- Mancia G, Grassi G. Aggressive blood pressure lowering is dangerous: the J-curve: pro side of the argument. Hypertension 2014; 63:29–36.
- Odden MC, Peralta CA, Haan MN, Covinsky KE. Rethinking the association of high blood pressure with mortality in elderly adults: the impact of frailty. Arch Intern Med 2012; 172:1162–1168.
- Beckett NS, Peters R, Fletcher AE, et al; HYVET Study Group. Treatment of hypertension in patients 80 years of age or older. N Engl J Med 2008; 358:1887–1898.
- Warwick J, Falaschetti E, Rockwood K, et al. No evidence that frailty modifies the positive impact of antihypertensive treatment in very elderly people: an investigation of the impact of frailty upon treatment effect in the HYpertension in the Very Elderly Trial (HYVET) study, a double-blind, placebo-controlled study of antihypertensives in people with hypertension aged 80 and over. BMC Med 2015 9;13:78.
- Williamson JD, Supiano MA, Applegate WB, et al; SPRINT Research Group. Intensive vs standard blood pressure control and cardiovascular disease outcomes in adults aged ≥ 75 years: a randomized clinical trial. JAMA 2016; 315:2673–2682.
- Tinetti ME, Han L, Lee DS, et al. Antihypertensive medications and serious fall injuries in a nationally representative sample of older adults. JAMA Intern Med 2014; 174:588–595.
- James PA, Oparil S, Carter BL, et al. 2014 evidence-based guideline for the management of high blood pressure in adults: report from the panel members appointed to the Eighth Joint National Committee (JNC 8). JAMA 2014; 311:507–520.
- Whelton PK, Carey RM, Aronow WS, et al. 2017 ACC/AHA/AAPA/ABC/ACPM/AGS/APhA/ASH/ASPC/NMA/PCNA Guideline for the Prevention, Detection, Evaluation, and Management of High Blood Pressure in Adults: A Report of the American College of Cardiology/American Heart Association Task Force on Clinical Practice Guidelines. Hypertension 2017. Nov 13 [Epub ahead of print].)
- American Diabetes Association. 11. Older adults. Diabetes Care 2017; 40(suppl 1):S99–S104.
- Mallery LH, Allen M, Fleming I, et al. Promoting higher blood pressure targets for frail older adults: a consensus guideline from Canada. Cleve Clin J Med 2014; 81:427–437.
- Shepherd J, Blauw GJ, Murphy MB, et al; PROSPER study group. PROspective Study of Pravastatin in the Elderly at Risk. Pravastatin in elderly individuals at risk of vascular disease (PROSPER): a randomised controlled trial. Lancet 2002; 360:1623–1630.
- Glynn RJ, Koenig W, Nordestgaard BG, Shepherd J, Ridker PM. Rosuvastatin for primary prevention in older persons with elevated C-reactive protein and low to average low-density lipoprotein cholesterol levels: exploratory analysis of a randomized trial. Ann Intern Med 2010; 152:488–496, W174.
- LaCroix AZ, Gray SL, Aragaki A, et al; Women’s Health Initiative. Statin use and incident frailty in women aged 65 years or older: prospective findings from the Women’s Health Initiative Observational Study. J Gerontol A Biol Sci Med Sci 2008; 63:369–375.
- Odden MC, Pletcher MJ, Coxson PG, et al. Cost-effectiveness and population impact of statins for primary prevention in adults aged 75 years or older in the United States. Ann Intern Med 2015; 162:533–541.
- Kutner JS, Blatchford PJ, Taylor DH Jr, et al. Safety and benefit of discontinuing statin therapy in the setting of advanced, life-limiting illness: a randomized clinical trial. JAMA Intern Med 2015; 175:691–700.
- Huang ES, Liu JY, Moffet HH, John PM, Karter AJ. Glycemic control, complications, and death in older diabetic patients: the diabetes and aging study. Diabetes Care 2011; 34:1329–1336.
- Kirkman MS, Briscoe VJ, Clark N, et al; Consensus Development Conference on Diabetes and Older Adults. Diabetes in older adults: a consensus report. J Am Geriatr Soc 2012; 60:2342–2356.
- Inzucchi SE, Bergenstal RM, Buse JB, et al. Management of hyperglycemia in type 2 diabetes, 2015: a patient-centered approach: update to a position statement of the American Diabetes Association and the European Association for the Study of Diabetes. Diabetes Care 2015; 38:140–149.
KEY POINTS
- With the aging of the population, individualized prevention strategies must incorporate geriatric syndromes such as frailty.
- However, current guidelines and available evidence for cardiovascular disease prevention strategies either have not incorporated frailty or make no recommendation at all for those over age 75.
- Four-meter gait speed, a simple measure of physical function and a proxy for frailty, can be used clinically to diagnose frailty.
Idiopathic hypercalciuria: Can we prevent stones and protect bones?
A 65-year-old woman was recently diagnosed with osteoporosis after a screening bone mineral density test. She has hypertension (treated with lisinopril), and she had an episode of passing a kidney stone 10 years ago. A 24-hour urine study reveals an elevated urinary calcium level.
What should the physician keep in mind in managing this patient?
IDIOPATHIC HYPERCALCIURIA
Many potential causes of secondary hypercalciuria must be ruled out before deciding that a patient has idiopathic hypercalciuria, which was first noted as a distinct entity by Albright et al in 1953.1 Causes of secondary hypercalciuria include primary hyperparathyroidism, hyperthyroidism, Paget disease, myeloma, malignancy, immobility, accelerated osteoporosis, sarcoidosis, renal tubular acidosis, and drug-induced urinary calcium loss such as that seen with loop diuretics.
Idiopathic hypercalciuria is identified by the following:
- Persistent hypercalciuria despite normal or restricted calcium intake2,3
- Normal levels of parathyroid hormone (PTH), phosphorus, and 1,25-dihydroxyvitamin D (the active form of vitamin D, also called calcitriol) in the presence of hypercalciuria; serum calcium levels are also normal.
An alias for idiopathic hypercalciuria is “fasting hypercalciuria,” as increased urinary calcium persists and sometimes worsens while fasting or on a low-calcium diet, with increased bone turnover, reduced bone density, and normal serum PTH levels.4,5
Mineral loss from bone predominates in idiopathic hypercalciuria, but there is also a minor component of intestinal hyperabsorption of calcium and reduced renal calcium reabsorption.6 Distinguishing among intestinal hyperabsorptive hypercalciuria, renal leak hypercalciuria, and idiopathic or fasting hypercalciuria can be difficult and subtle. It has been argued that differentiating among hypercalciuric subtypes (hyperabsorptive, renal leak, idiopathic) is not useful; in general clinical practice, it is impractical to collect multiple 24-hour urine samples in the setting of controlled high- vs low-calcium diets.
COMPLICATIONS OF IDIOPATHIC HYPERCALCIURIA
Calcium is an important component in many physiologic processes, including coagulation, cell membrane transfer, hormone release, neuromuscular activation, and myocardial contraction. A sophisticated system of hormonally mediated interactions normally maintains stable extracellular calcium levels. Calcium is vital for bone strength, but the bones are the body’s calcium “bank,” and withdrawals from this bank are made at the expense of bone strength and integrity.
Renal stones
Patients with idiopathic hypercalciuria have a high incidence of renal stones. Conversely, 40% to 50% of patients with recurrent kidney stones have evidence of idiopathic hypercalciuria, the most common metabolic abnormality in “stone-formers.”7,8 Further, 35% to 40% of first- and second-degree relatives of stone-formers who have idiopathic hypercalciuria also have the condition.9 In the general population without kidney stones and without first-degree relatives with stones, the prevalence is approximately 5% to 10%.10,11
Bone loss
People with idiopathic hypercalciuria have lower bone density and a higher incidence of fracture than their normocalciuric peers. This relationship has been observed in both sexes and all ages. Idiopathic hypercalciuria has been noted in 10% to 19% of otherwise healthy men with low bone mass, in postmenopausal women with osteoporosis,10–12 and in up to 40% of postmenopausal women with osteoporotic fractures and no history of kidney stones.13
LABORATORY DEFINITION
Urinary calcium excretion
Heaney et al14 measured 24-hour urinary calcium excretion in a group of early postmenopausal women, whom they divided into 3 groups by dietary calcium intake:
- Low intake (< 500 mg/day)
- Moderate intake (500–1,000 mg/day)
- High intake (> 1,000 mg/day).
In the women who were estrogen-deprived (ie, postmenopausal and not on estrogen replacement therapy), the 95% probability ranges for urinary calcium excretion were:
- 32–252 mg/day (0.51–4.06 mg/kg/day) with low calcium intake
- 36–286 mg/day (0.57–4.52 mg/kg/day) with moderate calcium intake
- 45–357 mg/day (0.69–5.47 mg/kg/day) with high calcium intake.
For estrogen-replete women (perimenopausal or postmenopausal on estrogen replacement), using the same categories of dietary calcium intake, calcium excretion was:
- 39–194 mg/day (0.65–3.23 mg/kg/day) with low calcium intake
- 54–269 mg/day (0.77–3.84 mg/kg/day) with moderate calcium intake
- 66–237 mg/day (0.98–4.89 mg/kg/day) with high calcium intake.
In the estrogen-deprived group, urinary calcium excretion increased by only 55 mg/day per 1,000-mg increase in dietary intake, though there was individual variability. These data suggest that hypercalciuria should be defined as:
- Greater than 250 mg/day (> 4.1 mg/kg/day) in estrogen-replete women
- Greater than 300 mg/day (> 5.0 mg/kg/day) in estrogen-deprived women.
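The suggested cutoffs above can be applied as a simple calculation. The sketch below is illustrative only: the function and constant names are hypothetical, and only the numeric thresholds come from the definitions in the text; it is not a validated clinical tool.

```python
# Thresholds (mg/day, mg/kg/day) taken from the suggested definitions above;
# the helper itself is a hypothetical illustration, not a clinical tool.
ESTROGEN_REPLETE_CUTOFF = (250, 4.1)
ESTROGEN_DEPRIVED_CUTOFF = (300, 5.0)

def is_hypercalciuric(urine_ca_mg_day: float, weight_kg: float,
                      estrogen_replete: bool) -> bool:
    """Flag hypercalciuria if either the absolute or the
    weight-based 24-hour urinary calcium cutoff is exceeded."""
    abs_cutoff, per_kg_cutoff = (ESTROGEN_REPLETE_CUTOFF if estrogen_replete
                                 else ESTROGEN_DEPRIVED_CUTOFF)
    return (urine_ca_mg_day > abs_cutoff or
            urine_ca_mg_day / weight_kg > per_kg_cutoff)

# A 60-kg estrogen-deprived woman excreting 320 mg/day exceeds both cutoffs:
print(is_hypercalciuric(320, 60, estrogen_replete=False))  # True
```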
Urinary calcium-to-creatinine ratio
Use of a spot urinary calcium-to-creatinine ratio has been advocated as an alternative to the more labor-intensive 24-hour urine collection.15 However, the spot urine calcium-creatinine ratio correlates poorly with 24-hour urine criteria for hypercalciuria whether by absolute, weight-based, or menopausal and calcium-adjusted definitions.
Importantly, spot urine measurements show poor sensitivity and specificity for hypercalciuria. Fasting spot urine samples underestimate the 24-hour urinary calcium (Bland-Altman bias –71 mg/24 hours), and postprandial sampling overestimates it (Bland-Altman bias +61 mg/24 hours).15
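The practical consequence of these biases can be shown with a short, hedged calculation. The mean biases are the figures quoted above; the function and the patient scenario are illustrative assumptions (a Bland-Altman bias is the average estimate-minus-reference difference, and individual patients scatter widely around it).

```python
# Mean biases (mg/24 h) quoted in the text; scenario is hypothetical.
FASTING_BIAS = -71       # fasting spot estimates run low on average
POSTPRANDIAL_BIAS = +61  # postprandial estimates run high on average

def expected_spot_estimate(true_24h_ca_mg: float, postprandial: bool) -> float:
    """Average spot-based estimate implied by the reported mean bias
    (individual results vary widely around this average)."""
    bias = POSTPRANDIAL_BIAS if postprandial else FASTING_BIAS
    return true_24h_ca_mg + bias

# A patient truly excreting 280 mg/24 h would, on average, appear
# normocalciuric on a fasting spot sample:
print(expected_spot_estimate(280, postprandial=False))  # 209.0
```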
WHAT IS THE MECHANISM OF IDIOPATHIC HYPERCALCIURIA?
The pathophysiology of idiopathic hypercalciuria has been difficult to establish.
Increased sensitivity to vitamin D? In the hyperabsorbing population, serum levels of activated vitamin D (calcitriol) are often elevated, but a few studies of rats with hyperabsorbing, hyperexcreting physiology have shown normal calcitriol levels, suggesting an increased sensitivity to the actions of 1,25-dihydroxyvitamin D.16
Another study found that hypercalciuric stone-forming rats have more 1,25-dihydroxyvitamin D receptors than do controls.17
These changes have not been demonstrated in patients with idiopathic hypercalciuria.
High sodium intake has been proposed as a cause of idiopathic hypercalciuria. High sodium intake leads to increased urinary sodium excretion, and the increased tubular sodium load can decrease tubular calcium reabsorption, possibly favoring a reduction in bone mineral density over time.18–20
In healthy people, urine calcium excretion increases by about 0.6 mmol/day (20–40 mg/day) for each 100-mmol (2,300 mg) increment in daily sodium ingestion.21,22 But high sodium intake is seldom the principal cause of idiopathic hypercalciuria.
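The magnitude of the sodium effect can be worked through with the figures above. The sketch below is a rough, illustrative estimate; the slope (20–40 mg of urinary calcium per 100 mmol, or 2,300 mg, of daily sodium) comes from the text, while the function and the midpoint-slope assumption are my own framing.

```python
# Rough estimate of sodium-driven calciuria using the relationship quoted
# above (~20-40 mg/day urinary Ca per 2,300 mg/day Na). The midpoint slope
# of 30 mg per 2,300 mg is an assumption for illustration.

def sodium_driven_calciuria_mg(sodium_mg_day: float,
                               ca_per_2300mg_na: float = 30.0) -> float:
    """Approximate extra urinary calcium (mg/day) attributable to
    dietary sodium, assuming a linear midpoint slope."""
    return sodium_mg_day / 2300 * ca_per_2300mg_na

# Even a high-sodium diet (4,600 mg/day) accounts for only ~60 mg/day of
# urinary calcium, well below hypercalciuric levels, consistent with the
# point that sodium is seldom the principal cause:
print(round(sodium_driven_calciuria_mg(4600)))  # 60
```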
High protein intake, often observed in patients with nephrolithiasis, increases the dietary acid load, stimulating release of calcium from bone and inhibiting renal reabsorption of calcium.23,24 Increasing dietary protein from 0.5 to 2.0 g/kg/day can double the urinary calcium output.25
In mice, induction of metabolic acidosis, thought to mimic a high-protein diet, inhibits osteoblastic alkaline phosphatase activity while stimulating prostaglandin E2 production.26 This in turn increases osteoblastic expression of receptor activator for nuclear factor kappa b (RANK) ligand, thereby potentially contributing to osteoclastogenesis and osteoclast activity.26
Decreasing dietary protein decreases the recurrence of nephrolithiasis in established stone-formers.27 Still, urine calcium levels are higher in those with idiopathic hypercalciuria than in normal controls at comparable levels of acid excretion, so while protein ingestion could potentially exacerbate the hypercalciuria, it is unlikely to be the sole cause.
Renal calcium leak? The frequent finding of low to low-normal PTH levels in patients with idiopathic hypercalciuria argues against renal calcium “leak” as the etiologic mechanism. If the hypercalciuria were due to renal leak, an oral calcium load should suppress PTH; in idiopathic hypercalciuria, however, no clinically meaningful change in PTH occurs. This blunted PTH response to an oral calcium load has been seen in both rat and human studies. Patients also excrete normal to high amounts of urinary calcium after prolonged fasting or on a low-calcium diet, and low-calcium diets do not induce hyperparathyroidism in these patients, so the excess urinary calcium must come primarily from bone. Increased levels of 1,25-dihydroxyvitamin D have also been noted in patients with idiopathic hypercalciuria.28,29
Whether the cytokine milieu also contributes to the calcitriol levels is unclear, but the high or high-normal plasma level of 1,25-dihydroxyvitamin D may be the reason that the PTH is unperturbed.
IMPACT ON BONE HEALTH
Nephrolithiasis is strongly linked to fracture risk.
The bone mineral density of trabecular bone is more affected by calcium excretion than that of cortical bone.18,20,30 However, lumbar spine bone mineral density has not been consistently found to be lower in patients with hyperabsorptive hypercalciuria. Rather, bone mineral density is correlated inversely with urine calcium excretion in men and women who form stones, but not in patients without nephrolithiasis.
In children
In children, idiopathic hypercalciuria is well known to be linked to osteopenia. This is an important group to study, as adult idiopathic hypercalciuria often begins in childhood. However, the trajectory of bone loss vs gain in children is fraught with variables such as growth, puberty, and body mass index, making this a difficult group from which to extrapolate conclusions to adults.
In men
There is more information on the relationship between hypercalciuria and osteoporosis in men than in women.
In 1998, Melton et al31 published the findings of a 25-year population-based cohort study of 624 patients, 442 (71%) of whom were men, referred for new-onset urolithiasis. The incidence of vertebral fracture was 4 times higher in this group than in patients without stone disease, but there was no difference in the rate of hip, forearm, or nonvertebral fractures. This is consistent with earlier data that report a loss of predominantly cancellous bone associated with urolithiasis.
National Health and Nutrition Examination Survey III data in 2001 focused on a potential relationship between kidney stones and bone mineral density or prevalent spine or wrist fracture.32 More than 14,000 people had hip bone mineral density measurements, of whom 793 (477 men, 316 women) had kidney stones. Men with previous nephrolithiasis had lower femoral neck bone mineral density than those without. Men with kidney stones were also more likely to report prevalent wrist and spine fractures. In women, no difference was noted between those with or without stone disease with respect to femoral neck bone mineral density or fracture incidence.
Cauley et al33 also evaluated a relationship between kidney stones and bone mineral density in the Osteoporotic Fractures in Men (MrOS) study. Of approximately 6,000 men, 13.2% reported a history of kidney stones. These men had lower spine and total hip bone mineral density than controls who had not had kidney stones, and the difference persisted after adjusting for age, race, weight, and other variables. However, further data from this cohort revealed that so few men with osteoporosis had hypercalciuria that its routine measurement was not recommended.34
In women
The relationship between idiopathic hypercalciuria and fractures has been more difficult to establish in women.
Sowers et al35 performed an observational study of 1,309 women ages 20 to 92 with a history of nephrolithiasis. No association was noted between stone disease and reduced bone mineral density in the femoral neck, lumbar spine, or radius.
These epidemiologic studies did not include the cause of the kidney stones (eg, whether or not there was associated hypercalciuria or primary hyperparathyroidism), and typically a diagnosis of idiopathic hypercalciuria was not established.
The difference in association between low bone mineral density or fracture with nephrolithiasis between men and women is not well understood, but the most consistent hypothesis is that the influence of hypoestrogenemia in women is much stronger than that of the hypercalciuria.20
Does the degree of hypercalciuria influence the amount of bone loss?
A few trials have tried to determine whether the amount of calcium in the urine influences the magnitude of bone loss.
In 2003, Asplin et al36 reported that bone mineral density Z-scores differed significantly by urinary calcium excretion, but only in stone-formers. In patients without stone disease, there was no difference in Z-scores according to the absolute value of hypercalciuria. This may be due to a self-selection bias in which stone-formers avoid calcium in the diet and those without stone disease do not.
Three studies looking solely at men with idiopathic hypercalciuria also did not detect a significant difference in bone mineral loss according to degree of hypercalciuria.20,30,37
A POLYGENIC DISORDER?
The potential contribution of genetic changes to the development of idiopathic hypercalciuria has been studied. While there is an increased risk of idiopathic hypercalciuria in first-degree relatives of patients with nephrolithiasis, most experts believe that idiopathic hypercalciuria is likely a polygenic disorder.9,38
EVALUATION AND TREATMENT
The 2014 revised version of the National Osteoporosis Foundation’s “Clinician’s guide to prevention and treatment of osteoporosis”39 noted that hypercalciuria is a risk factor that contributes to the development of osteoporosis and possibly osteoporotic fractures, and that consideration should be given to evaluating for hypercalciuria, but only in selected cases. In patients with kidney stones, the link between hypercalciuria and bone loss and fracture is recognized and should be explored in both women and men at risk of osteoporosis, as 45% to 50% of patients who form calcium stones have hypercalciuria.
Patients with kidney stones who have low bone mass and idiopathic hypercalciuria should increase their daily fluid intake, follow a low-salt and low-animal-protein diet, and take thiazide diuretics to reduce the incidence of further calcium stones. Whether this approach also improves bone mass and strength and reduces the risk of fractures within this cohort requires further study.
Dietary interventions
Don’t restrict calcium intake. Despite the connection between hypercalciuria and nephrolithiasis, restricting dietary calcium to prevent recurrent nephrolithiasis creates a risk of negative calcium balance and bone demineralization. Observational studies and prospective clinical trials have demonstrated an increased risk of stone formation with low calcium intake.27,30 Nevertheless, restricting calcium seems logical to many patients with kidney stones, and doing so may independently contribute to lower bone mineral density.
A low-sodium, low-animal-protein diet is beneficial. Though increased intake of sodium or protein is not the main cause of idiopathic hypercalciuria, pharmacologic therapy, especially with thiazide diuretics, is more likely to be successful in the setting of a low-sodium, low-protein diet.
Borghi et al27 studied 2 diets in men with nephrolithiasis and idiopathic hypercalciuria: a low-calcium diet and a low-salt, low-animal-protein, normal-calcium diet. Men on the latter diet experienced a greater reduction in urinary calcium excretion than those on the low-calcium diet.
Breslau et al40 found that urinary calcium excretion fell by 50% in 15 people when they switched from an animal-based to a plant-based protein diet.
Thiazide diuretics
Several epidemiologic and randomized studies41–45 found that thiazide therapy decreased the likelihood of hip fracture in postmenopausal women, men, and premenopausal women. Doses ranged from 12.5 to 50 mg of hydrochlorothiazide. Bone density increased in the radius, total body, total hip, and lumbar spine. One prospective trial noted that fracture risk declined with longer duration of thiazide use, with the largest reduction in those who used thiazides for 8 or more years.46
Thiazides have anticalciuric actions.47 In addition, they have positive effects on osteoblastic cell proliferation and activity, inhibiting osteocalcin expression by osteoblasts, thereby possibly improving bone formation and mineralization.48 The effects of thiazides on bone were reviewed by Sakhaee et al.49
However, fewer studies have looked at thiazides in patients with idiopathic hypercalciuria.
García-Nieto et al50 looked retrospectively at 22 children (average age 11.7) with idiopathic hypercalciuria and osteopenia who had received thiazides (19 received chlorthalidone 25 mg daily, and 3 received hydrochlorothiazide 25 mg daily) for an average of 2.4 years, and at 32 similar patients who had not received thiazides. Twelve (55%) of the patients receiving thiazides had an improvement in bone mineral density Z-scores, compared with 23 (72%) of the controls. This finding is confounded by growth that occurred during the study, and both groups demonstrated a significantly increased body mass index and bone mineral apparent density at the end of the trial.
Bushinsky and Favus51 evaluated whether chlorthalidone improved bone quality or structure in rats that were genetically prone to hypercalciuric stones. These rats are uniformly stone-formers, and while they have components of calcium hyperabsorption, they also demonstrate renal hyperexcretion (leak) and enhanced bone mineral resorption.51 When fed a high-calcium diet, they maintain a reduction in bone mineral density and bone strength. Study rats were given chlorthalidone 4 to 5 mg/kg/day. After 18 weeks of therapy, significant improvements were observed in trabecular thickness and connectivity as well as increased vertebral compressive strength.52 No difference in cortical bone was noted.
No randomized, blinded, placebo-controlled trial has yet been done to study the impact of thiazides on bone mineral density or fracture risk in patients with idiopathic hypercalciuria.
In practice, many physicians choose chlorthalidone over hydrochlorothiazide because of chlorthalidone’s longer half-life. To reduce the number of pills the patient has to take, a thiazide diuretic can also be combined with a potassium-sparing agent, such as hydrochlorothiazide plus either triamterene or spironolactone.
Potassium citrate
When prescribing thiazide diuretics, one should also consider prescribing potassium citrate, as this agent not only prevents hypokalemia but also increases urinary citrate excretion, which can help to inhibit crystallization of calcium salts.6
In a longitudinal study of 28 patients with hypercalciuria,53 combined therapy with a thiazide or indapamide and potassium citrate over a mean of 7 years increased bone density of the lumbar spine by 7.1% and of the femoral neck by 4.1%, compared with age- and sex-matched normocalcemic peers. In the same study, daily urinary calcium excretion decreased and urinary pH and citrate levels increased; urinary saturation of calcium oxalate decreased by 46%, and stone formation was decreased.
Another trial evaluated 120 patients with idiopathic calcium nephrolithiasis, half of whom were given potassium citrate. Those given potassium citrate experienced an increase in distal radius bone mineral density over 2 years.54 It is theorized that alkalinization may decrease bone turnover in these patients.
Bisphosphonates
As one of the proposed main mechanisms of bone loss in idiopathic hypercalciuria is direct bone resorption, a potential target for therapy is the osteoclast, which bisphosphonates inhibit.
Ruml et al55 studied the impact of alendronate vs placebo in 16 normal men undergoing 3 weeks of strict bedrest. Compared with the placebo group, those who received alendronate had significantly lower 24-hour urine calcium excretion and higher levels of PTH and 1,25-dihydroxyvitamin D.
Weisinger et al56 evaluated the effects of alendronate 10 mg daily in 10 patients who had stone disease with documented idiopathic hypercalciuria and also in 8 normocalciuric patients without stone disease. Alendronate resulted in a sustained reduction of calcium in the urine in the patients with idiopathic hypercalciuria but not in the normocalciuric patients.
Data are somewhat scant as to the effect of bisphosphonates on bone health in the setting of idiopathic hypercalciuria,57,58 and therapy with bisphosphonates is not recommended in patients with idiopathic hypercalciuria outside the realm of postmenopausal osteoporosis or other indications for bisphosphonates approved by the US Food and Drug Administration (FDA).
Calcimimetics
Calcium-sensing receptors are found not only in parathyroid tissue but also in the intestines and kidneys. Locally, elevated plasma calcium in the kidney causes activation of the calcium-sensing receptor, diminishing further calcium reabsorption.59 Agents that increase the sensitivity of the calcium-sensing receptors are classified as calcimimetics.
Cinacalcet is a calcimimetic approved by the FDA for treatment of secondary hyperparathyroidism in patients with chronic kidney disease on dialysis, for the treatment of hypercalcemia in patients with parathyroid carcinoma, and for patients with primary hyperparathyroidism who are unable to undergo parathyroidectomy. In an uncontrolled 5-year study of cinacalcet in patients with primary hyperparathyroidism, there was no significant change in bone density.60
Anti-inflammatory drugs
The role of cytokines in stimulating bone resorption in idiopathic hypercalciuria has led to the investigation of several anti-inflammatory drugs (eg, diclofenac, indomethacin) as potential treatments, but studies have been limited in number and scope.61,62
Omega-3 fatty acids
Omega-3 fatty acids are thought to alter prostaglandin metabolism and to potentially reduce stone formation.63
A retrospective study of 29 patients with stone disease found that, combined with dietary counseling, omega-3 fatty acids could potentially reduce urinary calcium and oxalate excretion and increase urinary citrate in hypercalciuric stone-formers.64
A review of published randomized controlled trials of omega-3 fatty acids in skeletal health discovered that 4 studies found positive effects on bone mineral density or bone turnover markers, whereas 5 studies reported no differences. All trials were small, and none evaluated fracture outcome.65
REFERENCES
1. Albright F, Henneman P, Benedict PH, Forbes AP. Idiopathic hypercalciuria: a preliminary report. Proc R Soc Med 1953; 46:1077–1081.
2. Pak CY. Pathophysiology of calcium nephrolithiasis. In: Seldin DW, Giebisch G, eds. The Kidney: Physiology and Pathophysiology. New York, NY: Raven Press; 1992:2461–2480.
3. Frick KK, Bushinsky DA. Molecular mechanisms of primary hypercalciuria. J Am Soc Nephrol 2003; 14:1082–1095.
4. Pacifici R, Rothstein M, Rifas L, et al. Increased monocyte interleukin-1 activity and decreased vertebral bone density in patients with fasting idiopathic hypercalciuria. J Clin Endocrinol Metab 1990; 71:138–145.
5. Messa P, Mioni G, Montanaro D, et al. About a primitive osseous origin of the so-called ‘renal hypercalciuria.’ Contrib Nephrol 1987; 58:106–110.
6. Zerwekh JE. Bone disease and idiopathic hypercalciuria. Semin Nephrol 2008; 28:133–142.
7. Coe FL. Treated and untreated recurrent calcium nephrolithiasis in patients with idiopathic hypercalciuria, hyperuricosuria, or no metabolic disorder. Ann Intern Med 1977; 87:404–410.
8. Lemann J Jr. Pathogenesis of idiopathic hypercalciuria and nephrolithiasis. In: Coe FL, Favus MJ, eds. Disorders of Bone and Mineral Metabolism. New York, NY: Raven Press; 1992:685–706.
9. Coe FL, Parks JH, Moore ES. Familial idiopathic hypercalciuria. N Engl J Med 1979; 300:337–340.
10. Giannini S, Nobile M, Dalle Carbonare L, et al. Hypercalciuria is a common and important finding in postmenopausal women with osteoporosis. Eur J Endocrinol 2003; 149:209–213.
11. Tannenbaum C, Clark J, Schwartzman K, et al. Yield of laboratory testing to identify secondary contributors to osteoporosis in otherwise healthy women. J Clin Endocrinol Metab 2002; 87:4431–4437.
12. Cerda Gabaroi D, Peris P, Monegal A, et al. Search for hidden secondary causes in postmenopausal women with osteoporosis. Menopause 2010; 17:135–139.
13. Rull MA, Cano-García Mdel C, Arrabal-Martín M, Arrabal-Polo MA. The importance of urinary calcium in postmenopausal women with osteoporotic fracture. Can Urol Assoc J 2015; 9:E183–E186.
14. Heaney RP, Recker RR, Ryan RA. Urinary calcium in perimenopausal women: normative values. Osteoporos Int 1999; 9:13–18.
15. Bleich HL, Moore MJ, Lemann J Jr, Adams ND, Gray RW. Urinary calcium excretion in human beings. N Engl J Med 1979; 301:535–541.
16. Li XQ, Tembe V, Horwitz GM, Bushinsky DA, Favus MJ. Increased intestinal vitamin D receptor in genetic hypercalciuric rats. A cause of intestinal calcium hyperabsorption. J Clin Invest 1993; 91:661–667.
17. Yao J, Kathpalia P, Bushinsky DA, Favus MJ. Hyperresponsiveness of vitamin D receptor gene expression to 1,25-dihydroxyvitamin D3. A new characteristic of genetic hypercalciuric stone-forming rats. J Clin Invest 1998; 101:2223–2232.
18. Pietschmann F, Breslau NA, Pak CY. Reduced vertebral bone density in hypercalciuric nephrolithiasis. J Bone Miner Res 1992; 7:1383–1388.
19. Jaeger P, Lippuner K, Casez JP, Hess B, Ackermann D, Hug C. Low bone mass in idiopathic renal stone formers: magnitude and significance. J Bone Miner Res 1994; 9:1525–1532.
20. Vezzoli G, Soldati L, Arcidiacono T, et al. Urinary calcium is a determinant of bone mineral density in elderly men participating in the InCHIANTI study. Kidney Int 2005; 67:2006–2014.
21. Lemann J Jr, Worcester EM, Gray RW. Hypercalciuria and stones. Am J Kidney Dis 1991; 17:386–391.
22. Gokce C, Gokce O, Baydinc C, et al. Use of random urine samples to estimate total urinary calcium and phosphate excretion. Arch Intern Med 1991; 151:1587–1588.
23. Curhan GC, Willett WC, Rimm EB, Stampfer MJ. A prospective study of dietary calcium and other nutrients and the risk of symptomatic kidney stones. N Engl J Med 1993; 328:833–838.
24. Siener R, Schade N, Nicolay C, von Unruh GE, Hesse A. The efficacy of dietary intervention on urinary risk factors for stone formation in recurrent calcium oxalate stone patients. J Urol 2005; 173:1601–1605.
25. Jones AN, Shafer MM, Keuler NS, Crone EM, Hansen KE. Fasting and postprandial spot urine calcium-to-creatinine ratios do not detect hypercalciuria. Osteoporos Int 2012; 23:553–562.
26. Frick KK, Bushinsky DA. Metabolic acidosis stimulates RANKL RNA expression in bone through a cyclo-oxygenase-dependent mechanism. J Bone Miner Res 2003; 18:1317–1325.
27. Borghi L, Schianchi T, Meschi T, et al. Comparison of two diets for the prevention of recurrent stones in idiopathic hypercalciuria. N Engl J Med 2002; 346:77–84.
28. Ghazali A, Fuentes V, Desaint C, et al. Low bone mineral density and peripheral blood monocyte activation profile in calcium stone formers with idiopathic hypercalciuria. J Clin Endocrinol Metab 1997; 82:32–38.
29. Broadus AE, Insogna KL, Lang R, Ellison AF, Dreyer BE. Evidence for disordered control of 1,25-dihydroxyvitamin D production in absorptive hypercalciuria. N Engl J Med 1984; 311:73–80.
30. Tasca A, Cacciola A, Ferrarese P, et al. Bone alterations in patients with idiopathic hypercalciuria and calcium nephrolithiasis. Urology 2002; 59:865–869.
31. Melton LJ 3rd, Crowson CS, Khosla S, Wilson DM, O’Fallon WM. Fracture risk among patients with urolithiasis: a population-based cohort study. Kidney Int 1998; 53:459–464.
32. Lauderdale DS, Thisted RA, Wen M, Favus MJ. Bone mineral density and fracture among prevalent kidney stone cases in the Third National Health and Nutrition Examination Survey. J Bone Miner Res 2001; 16:1893–1898.
33. Cauley JA, Fullman RL, Stone KL, et al; MrOS Research Group. Factors associated with the lumbar spine and proximal femur bone mineral density in older men. Osteoporos Int 2005; 16:1525–1537.
34. Fink HA, Litwack-Harrison S, Taylor BC, et al; Osteoporotic Fractures in Men (MrOS) Study Group. Clinical utility of routine laboratory testing to identify possible secondary causes in older men with osteoporosis: the Osteoporotic Fractures in Men (MrOS) Study. Osteoporos Int 2016; 27:331–338.
35. Sowers MR, Jannausch M, Wood C, Pope SK, Lachance LL, Peterson B. Prevalence of renal stones in a population-based study with dietary calcium, oxalate and medication exposures. Am J Epidemiol 1998; 147:914–920.
36. Asplin JR, Bauer KA, Kinder J, et al. Bone mineral density and urine calcium excretion among subjects with and without nephrolithiasis. Kidney Int 2003; 63:662–669.
37. Letavernier E, Traxer O, Daudon M, et al. Determinants of osteopenia in male renal-stone-disease patients with idiopathic hypercalciuria. Clin J Am Soc Nephrol 2011; 6:1149–1154.
38. Vezzoli G, Soldati L, Gambaro G. Update on primary hypercalciuria from a genetic perspective. J Urol 2008; 179:1676–1682.
39. Cosman F, de Beur SJ, LeBoff MS, et al; National Osteoporosis Foundation. Clinician’s guide to prevention and treatment of osteoporosis. Osteoporos Int 2014; 25:2359–2381.
40. Breslau NA, Brinkley L, Hill KD, Pak CY. Relationship of animal protein-rich diet to kidney stone formation and calcium metabolism. J Clin Endocrinol Metab 1988; 66:140–146.
41. Reid IR, Ames RW, Orr-Walker BJ, et al. Hydrochlorothiazide reduces loss of cortical bone in normal postmenopausal women: a randomized controlled trial. Am J Med 2000; 109:362–370.
42. Bolland MJ, Ames RW, Horne AM, Orr-Walker BJ, Gamble GD, Reid IR. The effect of treatment with a thiazide diuretic for 4 years on bone density in normal postmenopausal women. Osteoporos Int 2007; 18:479–486.
43. LaCroix AZ, Ott SM, Ichikawa L, Scholes D, Barlow WE. Low-dose hydrochlorothiazide and preservation of bone mineral density in older adults. Ann Intern Med 2000; 133:516–526.
44. Wasnich RD, Davis JW, He YF, Petrovich H, Ross PD. A randomized, double-masked, placebo-controlled trial of chlorthalidone and bone loss in elderly women. Osteoporos Int 1995; 5:247–251.
45. Adams JS, Song CF, Kantorovich V. Rapid recovery of bone mass in hypercalciuric, osteoporotic men treated with hydrochlorothiazide. Ann Intern Med 1999; 130:658–660.
46. Feskanich D, Willett WC, Stampfer MJ, Colditz GA. A prospective study of thiazide use and fractures in women. Osteoporos Int 1997; 7:79–84.
47. Lamberg BA, Kuhlback B. Effect of chlorothiazide and hydrochlorothiazide on the excretion of calcium in the urine. Scand J Clin Lab Invest 1959; 11:351–357.
48. Lajeunesse D, Delalandre A, Guggino SE. Thiazide diuretics affect osteocalcin production in human osteoblasts at the transcription level without affecting vitamin D3 receptors. J Bone Miner Res 2000; 15:894–901.
49. Sakhaee K, Maalouf NM, Kumar R, Pasch A, Moe OW. Nephrolithiasis-associated bone disease: pathogenesis and treatment options. Kidney Int 2011; 79:393–403.
50. García-Nieto V, Monge-Zamorano M, González-García M, Luis-Yanes MI. Effect of thiazides on bone mineral density in children with idiopathic hypercalciuria. Pediatr Nephrol 2012; 27:261–268.
51. Bushinsky DA, Favus MJ. Mechanism of hypercalciuria in genetic hypercalciuric rats. Inherited defect in intestinal calcium transport. J Clin Invest 1988; 82:1585–1591.
52. Bushinsky DA, Willett T, Asplin JR, Culbertson C, Che SP, Grynpas M. Chlorthalidone improves vertebral bone quality in genetic hypercalciuric stone-forming rats. J Bone Miner Res 2011; 26:1904–1912.
53. Pak CY, Heller HJ, Pearle MS, Odvina CV, Poindexter JR, Peterson RD. Prevention of stone formation and bone loss in absorptive hypercalciuria by combined dietary and pharmacological interventions. J Urol 2003; 169:465–469.
54. Vescini F, Buffa A, LaManna G, et al. Long-term potassium citrate therapy and bone mineral density in idiopathic calcium stone formers. J Endocrinol Invest 2005; 28:218–222.
55. Ruml LA, Dubois SK, Roberts ML, Pak CY. Prevention of hypercalciuria and stone-forming propensity during prolonged bedrest by alendronate. J Bone Miner Res 1995; 10:655–662.
56. Weisinger JR, Alonzo E, Machado C, et al. Role of bones in the physiopathology of idiopathic hypercalciuria: effect of amino-bisphosphonate alendronate. Medicina (B Aires) 1997; 57(suppl 1):45–48. Spanish.
57. Heilberg IP, Martini LA, Teixeira SH, et al. Effect of etidronate treatment on bone mass of male nephrolithiasis patients with idiopathic hypercalciuria and osteopenia. Nephron 1998; 79:430–437.
58. Bushinsky DA, Neumann KJ, Asplin J, Krieger NS. Alendronate decreases urine calcium and supersaturation in genetic hypercalciuric rats. Kidney Int 1999; 55:234–243.
59. Riccardi D, Park J, Lee WS, Gamba G, Brown EM, Hebert SC. Cloning and functional expression of a rat kidney extracellular calcium/polyvalent cation-sensing receptor. Proc Natl Acad Sci USA 1995; 92:131–135.
60. Peacock M, Bolognese MA, Borofsky M, et al. Cinacalcet treatment of primary hyperparathyroidism: biochemical and bone densitometric outcomes in a five-year study. J Clin Endocrinol Metab 2009; 94:4860–4867.
61. Filipponi P, Mannarelli C, Pacifici R, et al. Evidence for a prostaglandin-mediated bone resorptive mechanism in subjects with fasting hypercalciuria. Calcif Tissue Int 1988; 43:61–66.
62. Gomaa AA, Hassan HA, Ghaneimah SA. Effect of aspirin and indomethacin on the serum and urinary calcium, magnesium and phosphate. Pharmacol Res 1990; 22:59–70.
63. Buck AC, Davies RL, Harrison T. The protective role of eicosapentaenoic acid (EPA) in the pathogenesis of nephrolithiasis. J Urol 1991; 146:188–194.
64. Ortiz-Alvarado O, Miyaoka R, Kriedberg C, et al. Omega-3 fatty acids eicosapentaenoic acid and docosahexaenoic acid in the management of hypercalciuric stone formers. Urology 2012; 79:282–286.
65. Orchard TS, Pan X, Cheek F, Ing SW, Jackson RD. A systematic review of omega-3 fatty acids and osteoporosis. Br J Nutr 2012; 107(suppl 2):S253–S260.
A 65-year-old woman was recently diagnosed with osteoporosis after a screening bone mineral density test. She has hypertension (treated with lisinopril), and she had an episode of passing a kidney stone 10 years ago. A 24-hour urine study reveals an elevated urinary calcium level.
What should the physician keep in mind in managing this patient?
IDIOPATHIC HYPERCALCIURIA
Many potential causes of secondary hypercalciuria must be ruled out before deciding that a patient has idiopathic hypercalciuria, which was first noted as a distinct entity by Albright et al in 1953.1 Causes of secondary hypercalciuria include primary hyperparathyroidism, hyperthyroidism, Paget disease, myeloma, malignancy, immobility, accelerated osteoporosis, sarcoidosis, renal tubular acidosis, and drug-induced urinary calcium loss such as that seen with loop diuretics.
Idiopathic hypercalciuria is identified by the following:
- Persistent hypercalciuria despite normal or restricted calcium intake2,3
- Normal levels of parathyroid hormone (PTH), phosphorus, and 1,25-dihydroxyvitamin D (the active form of vitamin D, also called calcitriol) in the presence of hypercalciuria; serum calcium levels are also normal.
An alias for idiopathic hypercalciuria is “fasting hypercalciuria,” as increased urinary calcium persists and sometimes worsens while fasting or on a low-calcium diet, with increased bone turnover, reduced bone density, and normal serum PTH levels.4,5
Mineral loss from bone predominates in idiopathic hypercalciuria, but there is also a minor component of intestinal hyperabsorption of calcium and reduced renal calcium reabsorption.6 Distinguishing among intestinal hyperabsorptive hypercalciuria, renal leak hypercalciuria, and idiopathic or fasting hypercalciuria can be difficult and subtle. It has been argued that differentiating among hypercalciuric subtypes (hyperabsorptive, renal leak, idiopathic) is not useful; in general clinical practice, it is impractical to collect multiple 24-hour urine samples in the setting of controlled high- vs low-calcium diets.
COMPLICATIONS OF IDIOPATHIC HYPERCALCIURIA
Calcium is an important component in many physiologic processes, including coagulation, cell membrane transfer, hormone release, neuromuscular activation, and myocardial contraction. A sophisticated system of hormonally mediated interactions normally maintains stable extracellular calcium levels. Calcium is vital for bone strength, but the bones are the body’s calcium “bank,” and withdrawals from this bank are made at the expense of bone strength and integrity.
Renal stones
Patients with idiopathic hypercalciuria have a high incidence of renal stones. Conversely, 40% to 50% of patients with recurrent kidney stones have evidence of idiopathic hypercalciuria, the most common metabolic abnormality in “stone-formers.”7,8 Further, 35% to 40% of first- and second-degree relatives of stone-formers who have idiopathic hypercalciuria also have the condition.9 In the general population without kidney stones and without first-degree relatives with stones, the prevalence is approximately 5% to 10%.10,11
Bone loss
People with idiopathic hypercalciuria have lower bone density and a higher incidence of fracture than their normocalciuric peers. This relationship has been observed in both sexes and all ages. Idiopathic hypercalciuria has been noted in 10% to 19% of otherwise healthy men with low bone mass, in postmenopausal women with osteoporosis,10–12 and in up to 40% of postmenopausal women with osteoporotic fractures and no history of kidney stones.13
LABORATORY DEFINITION
Urinary calcium excretion
Heaney et al14 measured 24-hour urinary calcium excretion in a group of early postmenopausal women, whom they divided into 3 groups by dietary calcium intake:
- Low intake (< 500 mg/day)
- Moderate intake (500–1,000 mg/day)
- High intake (> 1,000 mg/day).
In the women who were estrogen-deprived (ie, postmenopausal and not on estrogen replacement therapy), the 95% probability ranges for urinary calcium excretion were:
- 32–252 mg/day (0.51–4.06 mg/kg/day) with low calcium intake
- 36–286 mg/day (0.57–4.52 mg/kg/day) with moderate calcium intake
- 45–357 mg/day (0.69–5.47 mg/kg/day) with high calcium intake.
For estrogen-replete women (perimenopausal or postmenopausal on estrogen replacement), using the same categories of dietary calcium intake, calcium excretion was:
- 39–194 mg/day (0.65–3.23 mg/kg/day) with low calcium intake
- 54–269 mg/day (0.77–3.84 mg/kg/day) with moderate calcium intake
- 66–237 mg/day (0.98–4.89 mg/kg/day) with high calcium intake.
In the estrogen-deprived group, urinary calcium excretion increased by only 55 mg/day per 1,000-mg increase in dietary intake, though there was individual variability. These data suggest that hypercalciuria should be defined as:
- Greater than 250 mg/day (> 4.1 mg/kg/day) in estrogen-replete women
- Greater than 300 mg/day (> 5.0 mg/kg/day) in estrogen-deprived women.
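Purely as an illustration of how these cutoffs combine (not a validated clinical tool; the function name, the choice to flag either the absolute or the weight-based threshold, and the example numbers are mine), the suggested definitions can be sketched in a few lines of Python:

```python
def is_hypercalciuric(urine_ca_mg_day: float, weight_kg: float,
                      estrogen_replete: bool) -> bool:
    """Apply the suggested definitions of hypercalciuria.

    Estrogen-replete women:  > 250 mg/day or > 4.1 mg/kg/day.
    Estrogen-deprived women: > 300 mg/day or > 5.0 mg/kg/day.
    Exceeding either form of the threshold is flagged here.
    """
    if estrogen_replete:
        abs_limit_mg, per_kg_limit = 250.0, 4.1
    else:
        abs_limit_mg, per_kg_limit = 300.0, 5.0
    return (urine_ca_mg_day > abs_limit_mg
            or urine_ca_mg_day / weight_kg > per_kg_limit)

# A hypothetical estrogen-deprived patient excreting 320 mg/day at 70 kg
# exceeds the 300-mg/day cutoff:
print(is_hypercalciuric(320, 70, estrogen_replete=False))  # True
```

In practice, of course, the 24-hour urine result is interpreted alongside dietary calcium intake, as the ranges above make clear.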
Urinary calcium-to-creatinine ratio
Use of a spot urinary calcium-to-creatinine ratio has been advocated as an alternative to the more labor-intensive 24-hour urine collection.15 However, the spot urine calcium-creatinine ratio correlates poorly with 24-hour urine criteria for hypercalciuria whether by absolute, weight-based, or menopausal and calcium-adjusted definitions.
Importantly, spot urine measurements show poor sensitivity and specificity for hypercalciuria. Spot urine samples underestimate the 24-hour urinary calcium (Bland-Altman bias –71 mg/24 hours), and postprandial sampling overestimates it (Bland-Altman bias +61 mg/24 hours).15
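For context, the spot-ratio approach estimates 24-hour calcium excretion by scaling the calcium-to-creatinine ratio by an assumed daily creatinine output. The sketch below is a generic illustration of that idea, not the protocol of the cited studies; the 18 mg/kg/day creatinine figure is an assumed ballpark (true creatinine excretion varies with sex, age, and muscle mass), which is itself one source of the method's bias:

```python
def estimate_24h_urine_ca_mg(spot_ca_mg_dl: float, spot_cr_mg_dl: float,
                             weight_kg: float,
                             cr_excretion_mg_per_kg_day: float = 18.0) -> float:
    """Estimate 24-hour urinary calcium (mg/day) from a spot sample.

    The calcium-to-creatinine ratio is unitless (both concentrations in
    mg/dL), so multiplying it by an assumed daily creatinine excretion
    yields an estimated daily calcium excretion.
    """
    ca_to_cr_ratio = spot_ca_mg_dl / spot_cr_mg_dl
    return ca_to_cr_ratio * cr_excretion_mg_per_kg_day * weight_kg

# Spot calcium 12 mg/dL and creatinine 90 mg/dL in a 70-kg patient:
print(round(estimate_24h_urine_ca_mg(12, 90, 70)))  # 168 (mg/day)
```

Given the Bland-Altman biases quoted in the text, such an estimate can differ from a true 24-hour collection by 60 to 70 mg/day in either direction, which is why the 24-hour collection remains the standard.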
WHAT IS THE MECHANISM OF IDIOPATHIC HYPERCALCIURIA?
The pathophysiology of idiopathic hypercalciuria has been difficult to establish.
Increased sensitivity to vitamin D? In the hyperabsorbing population, activated vitamin D levels are often elevated, but a few studies of rats with hyperabsorbing, hyperexcreting physiology have shown normal calcitriol levels, suggesting an increased sensitivity to the actions of 1,25-dihydroxyvitamin D.16
Another study found that hypercalciuric stone-forming rats have more 1,25-dihydroxyvitamin D receptors than do controls.17
These changes have not been demonstrated in patients with idiopathic hypercalciuria.
High sodium intake has been proposed as the cause of idiopathic hypercalciuria. High sodium intake leads to increased urinary sodium excretion, and the increased tubular sodium load can decrease tubular calcium reabsorption, possibly favoring a reduction in bone mineral density over time.18–20
In healthy people, urine calcium excretion increases by about 0.6 mmol/day (20–40 mg/day) for each 100-mmol (2,300 mg) increment in daily sodium ingestion.21,22 But high sodium intake is seldom the principal cause of idiopathic hypercalciuria.
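The unit conversions behind these figures can be checked with molar masses (calcium ≈ 40.1 mg/mmol, sodium ≈ 23.0 mg/mmol). A small sketch, with the function name my own:

```python
CA_MG_PER_MMOL = 40.08  # molar mass of calcium, g/mol == mg/mmol
NA_MG_PER_MMOL = 22.99  # molar mass of sodium

def extra_urine_ca_mg_per_day(extra_na_mg_per_day: float) -> float:
    """Estimate the rise in urinary calcium (mg/day) for a given rise in
    sodium intake (mg/day), using the ~0.6 mmol calcium per 100 mmol
    sodium relationship quoted in the text for healthy people."""
    extra_na_mmol = extra_na_mg_per_day / NA_MG_PER_MMOL
    extra_ca_mmol = 0.6 * extra_na_mmol / 100.0
    return extra_ca_mmol * CA_MG_PER_MMOL

# A 2,300-mg/day sodium increment is about 100 mmol:
print(round(2300 / NA_MG_PER_MMOL))            # 100
print(round(extra_urine_ca_mg_per_day(2300)))  # 24, within the quoted 20-40 mg/day
```

So cutting a patient's sodium intake by roughly a teaspoon of salt per day would be expected to lower urinary calcium by only a few tens of milligrams, consistent with sodium being a contributor but not the principal cause.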
High protein intake, often observed in patients with nephrolithiasis, increases dietary acid load, stimulating release of calcium from bone and inhibiting renal reabsorption of calcium.23,24 Increasing dietary protein from 0.5 to 2.0 g/kg/day can double the urinary calcium output.25
In mice, induction of metabolic acidosis, thought to mimic a high-protein diet, inhibits osteoblastic alkaline phosphatase activity while stimulating prostaglandin E2 production.26 This in turn increases osteoblastic expression of receptor activator of nuclear factor kappa B ligand (RANKL), thereby potentially contributing to osteoclastogenesis and osteoclast activity.26
Decreasing dietary protein decreases the recurrence of nephrolithiasis in established stone-formers.27 Still, urine calcium levels are higher in those with idiopathic hypercalciuria than in normal controls at comparable levels of acid excretion, so while protein ingestion could potentially exacerbate the hypercalciuria, it is unlikely to be the sole cause.
Renal calcium leak? The frequent finding of low to low-normal PTH levels in patients with idiopathic hypercalciuria argues against renal calcium “leak” as the underlying mechanism. If the kidney were leaking calcium, an oral calcium load should suppress PTH; in idiopathic hypercalciuria, however, the PTH response to an oral calcium load is abnormal, with no clinically meaningful change, a finding seen in both rat and human studies. Patients also excrete normal to high amounts of urine calcium after prolonged fasting or a low-calcium diet, and low-calcium diets do not induce hyperparathyroidism in these patients, so the elevated urinary calcium must come primarily from bone. Increased levels of 1,25-dihydroxyvitamin D in patients with idiopathic hypercalciuria have also been noted.28,29
Whether the cytokine milieu also contributes to the calcitriol levels is unclear, but the high or high-normal plasma level of 1,25-dihydroxyvitamin D may be the reason that the PTH is unperturbed.
IMPACT ON BONE HEALTH
Nephrolithiasis is strongly linked to fracture risk.
The bone mineral density of trabecular bone is more affected by calcium excretion than that of cortical bone.18,20,30 However, lumbar spine bone mineral density has not been consistently found to be lower in patients with hyperabsorptive hypercalciuria. Rather, bone mineral density is correlated inversely with urine calcium excretion in men and women who form stones, but not in patients without nephrolithiasis.
In children
In children, idiopathic hypercalciuria is well known to be linked to osteopenia. This is an important group to study, as adult idiopathic hypercalciuria often begins in childhood. However, the trajectory of bone loss vs gain in children is fraught with variables such as growth, puberty, and body mass index, making this a difficult group from which to extrapolate conclusions to adults.
In men
There is more information on the relationship between hypercalciuria and osteoporosis in men than in women.
In 1998, Melton et al31 published the findings of a 25-year population-based cohort study of 624 patients, 442 (71%) of whom were men, referred for new-onset urolithiasis. The incidence of vertebral fracture was 4 times higher in this group than in patients without stone disease, but there was no difference in the rate of hip, forearm, or nonvertebral fractures. This is consistent with earlier data that report a loss of predominantly cancellous bone associated with urolithiasis.
National Health and Nutrition Examination Survey III data in 2001 focused on a potential relationship between kidney stones and bone mineral density or prevalent spine or wrist fracture.32 More than 14,000 people had hip bone mineral density measurements, of whom 793 (477 men, 316 women) had kidney stones. Men with previous nephrolithiasis had lower femoral neck bone mineral density than those without. Men with kidney stones were also more likely to report prevalent wrist and spine fractures. In women, no difference was noted between those with or without stone disease with respect to femoral neck bone mineral density or fracture incidence.
Cauley et al33 also evaluated a relationship between kidney stones and bone mineral density in the Osteoporotic Fractures in Men (MrOS) study. Of approximately 6,000 men, 13.2% reported a history of kidney stones. These men had lower spine and total hip bone mineral density than controls who had not had kidney stones, and the difference persisted after adjusting for age, race, weight, and other variables. However, further data from this cohort revealed that so few men with osteoporosis had hypercalciuria that its routine measurement was not recommended.34
In women
The relationship between idiopathic hypercalciuria and fractures has been more difficult to establish in women.
Sowers et al35 performed an observational study of 1,309 women ages 20 to 92 with a history of nephrolithiasis. No association was noted between stone disease and reduced bone mineral density in the femoral neck, lumbar spine, or radius.
These epidemiologic studies did not include the cause of the kidney stones (eg, whether or not there was associated hypercalciuria or primary hyperparathyroidism), and typically a diagnosis of idiopathic hypercalciuria was not established.
The difference in association between low bone mineral density or fracture with nephrolithiasis between men and women is not well understood, but the most consistent hypothesis is that the influence of hypoestrogenemia in women is much stronger than that of the hypercalciuria.20
Does the degree of hypercalciuria influence the amount of bone loss?
A 65-year-old woman was recently diagnosed with osteoporosis after a screening bone mineral density test. She has hypertension (treated with lisinopril), and she had an episode of passing a kidney stone 10 years ago. A 24-hour urine study reveals an elevated urinary calcium level.
What should the physician keep in mind in managing this patient?
IDIOPATHIC HYPERCALCIURIA
Many potential causes of secondary hypercalciuria must be ruled out before deciding that a patient has idiopathic hypercalciuria, which was first noted as a distinct entity by Albright et al in 1953.1 Causes of secondary hypercalciuria include primary hyperparathyroidism, hyperthyroidism, Paget disease, myeloma, malignancy, immobility, accelerated osteoporosis, sarcoidosis, renal tubular acidosis, and drug-induced urinary calcium loss such as that seen with loop diuretics.
Idiopathic hypercalciuria is identified by the following:
- Persistent hypercalciuria despite normal or restricted calcium intake2,3
- Normal levels of parathyroid hormone (PTH), phosphorus, and 1,25-dihydroxyvitamin D (the active form of vitamin D, also called calcitriol) in the presence of hypercalciuria; serum calcium levels are also normal.
An alias for idiopathic hypercalciuria is “fasting hypercalciuria,” as increased urinary calcium persists and sometimes worsens while fasting or on a low-calcium diet, with increased bone turnover, reduced bone density, and normal serum PTH levels.4,5
Mineral loss from bone predominates in idiopathic hypercalciuria, but there is also a minor component of intestinal hyperabsorption of calcium and reduced renal calcium reabsorption.6 Distinguishing among intestinal hyperabsorptive hypercalciuria, renal leak hypercalciuria, and idiopathic or fasting hypercalciuria can be difficult and subtle. It has been argued that differentiating among hypercalciuric subtypes (hyperabsorptive, renal leak, idiopathic) is not useful; in general clinical practice, it is impractical to collect multiple 24-hour urine samples in the setting of controlled high- vs low-calcium diets.
COMPLICATIONS OF IDIOPATHIC HYPERCALCIURIA
Calcium is an important component in many physiologic processes, including coagulation, cell membrane transfer, hormone release, neuromuscular activation, and myocardial contraction. A sophisticated system of hormonally mediated interactions normally maintains stable extracellular calcium levels. Calcium is vital for bone strength, but the bones are the body’s calcium “bank,” and withdrawals from this bank are made at the expense of bone strength and integrity.
Renal stones
Patients with idiopathic hypercalciuria have a high incidence of renal stones. Conversely, 40% to 50% of patients with recurrent kidney stones have evidence of idiopathic hypercalciuria, the most common metabolic abnormality in “stone-formers.”7,8 Further, 35% to 40% of first- and second-degree relatives of stone-formers who have idiopathic hypercalciuria also have the condition.9 In the general population without kidney stones and without first-degree relatives with stones, the prevalence is approximately 5% to 10%.10,11
Bone loss
People with idiopathic hypercalciuria have lower bone density and a higher incidence of fracture than their normocalciuric peers. This relationship has been observed in both sexes and all ages. Idiopathic hypercalciuria has been noted in 10% to 19% of otherwise healthy men with low bone mass, in postmenopausal women with osteoporosis,10–12 and in up to 40% of postmenopausal women with osteoporotic fractures and no history of kidney stones.13
LABORATORY DEFINITION
Urinary calcium excretion
Heaney et al14 measured 24-hour urinary calcium excretion in a group of early postmenopausal women, divided into 3 groups by dietary calcium intake:
- Low intake (< 500 mg/day)
- Moderate intake (500–1,000 mg/day)
- High intake (> 1,000 mg/day).
In the women who were estrogen-deprived (ie, postmenopausal and not on estrogen replacement therapy), the 95% probability ranges for urinary calcium excretion were:
- 32–252 mg/day (0.51–4.06 mg/kg/day) with low calcium intake
- 36–286 mg/day (0.57–4.52 mg/kg/day) with moderate calcium intake
- 45–357 mg/day (0.69–5.47 mg/kg/day) with high calcium intake.
For estrogen-replete women (perimenopausal or postmenopausal on estrogen replacement), using the same categories of dietary calcium intake, calcium excretion was:
- 39–194 mg/day (0.65–3.23 mg/kg/day) with low calcium intake
- 54–269 mg/day (0.77–3.84 mg/kg/day) with moderate calcium intake
- 66–237 mg/day (0.98–4.89 mg/kg/day) with high calcium intake.
In the estrogen-deprived group, urinary calcium excretion increased by only 55 mg/day per 1,000-mg increase in dietary intake, though there was individual variability. These data suggest that hypercalciuria should be defined as:
- Greater than 250 mg/day (> 4.1 mg/kg/day) in estrogen-replete women
- Greater than 300 mg/day (> 5.0 mg/kg/day) in estrogen-deprived women.
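These cutoffs amount to a simple threshold rule. The sketch below encodes only the absolute mg/day thresholds quoted above; the function name and example values are illustrative, not from Heaney et al:

```python
def is_hypercalciuric(urine_ca_mg_per_day: float, estrogen_replete: bool) -> bool:
    """Classify 24-hour urinary calcium excretion against the suggested cutoffs:
    >250 mg/day for estrogen-replete women, >300 mg/day for estrogen-deprived women."""
    threshold = 250 if estrogen_replete else 300
    return urine_ca_mg_per_day > threshold

# 280 mg/day exceeds the estrogen-replete cutoff but not the estrogen-deprived one
print(is_hypercalciuric(280, estrogen_replete=True))   # True
print(is_hypercalciuric(280, estrogen_replete=False))  # False
```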
Urinary calcium-to-creatinine ratio
Use of a spot urinary calcium-to-creatinine ratio has been advocated as an alternative to the more labor-intensive 24-hour urine collection.15 However, the spot urine calcium-creatinine ratio correlates poorly with 24-hour urine criteria for hypercalciuria whether by absolute, weight-based, or menopausal and calcium-adjusted definitions.
Importantly, spot urine measurements show poor sensitivity and specificity for hypercalciuria. Spot urine samples underestimate the 24-hour urinary calcium (Bland-Altman bias –71 mg/24 hours), and postprandial sampling overestimates it (Bland-Altman bias +61 mg/24 hours).15
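The Bland-Altman bias cited above is simply the mean of the paired differences between the two methods (spot-derived estimate minus measured 24-hour excretion); a negative bias means the spot method underestimates. A minimal sketch with hypothetical numbers (not data from reference 15):

```python
def bland_altman_bias(estimates, references):
    """Mean paired difference (estimate - reference); a negative value
    indicates the estimating method underestimates the reference method."""
    diffs = [e - r for e, r in zip(estimates, references)]
    return sum(diffs) / len(diffs)

# Hypothetical spot-urine-derived vs measured 24-hour calcium (mg/24 h)
spot_estimates = [180, 150, 220, 130]
measured_24h = [250, 230, 280, 200]
print(bland_altman_bias(spot_estimates, measured_24h))  # -70.0
```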
WHAT IS THE MECHANISM OF IDIOPATHIC HYPERCALCIURIA?
The pathophysiology of idiopathic hypercalciuria has been difficult to establish.
Increased sensitivity to vitamin D? In the hyperabsorbing population, levels of activated vitamin D (calcitriol) are often elevated, but a few studies of rats with hyperabsorbing, hyperexcreting physiology have shown normal calcitriol levels, suggesting an increased sensitivity to the actions of 1,25-dihydroxyvitamin D.16
Another study found that hypercalciuric stone-forming rats have more 1,25-dihydroxyvitamin D receptors than do controls.17
These changes have not been demonstrated in patients with idiopathic hypercalciuria.
High sodium intake has been proposed as the cause of idiopathic hypercalciuria. High sodium intake leads to increased urinary sodium excretion, and the increased tubular sodium load can decrease tubular calcium reabsorption, possibly favoring a reduction in bone mineral density over time.18–20
In healthy people, urine calcium excretion increases by about 0.6 mmol/day (20–40 mg/day) for each 100-mmol (2,300 mg) increment in daily sodium ingestion.21,22 But high sodium intake is seldom the principal cause of idiopathic hypercalciuria.
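The unit conversions behind these figures follow from the molar masses of the two elements (sodium roughly 23 mg/mmol, calcium roughly 40 mg/mmol); a short arithmetic check:

```python
NA_MG_PER_MMOL = 23.0  # approximate molar mass of sodium
CA_MG_PER_MMOL = 40.1  # approximate molar mass of calcium

# 100 mmol of sodium is roughly the 2,300 mg stated in the text
print(100 * NA_MG_PER_MMOL)  # 2300.0

# 0.6 mmol of calcium falls within the quoted 20-40 mg range
print(0.6 * CA_MG_PER_MMOL)
```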
High protein intake, often observed in patients with nephrolithiasis, increases dietary acid load, stimulating release of calcium from bone and inhibiting renal reabsorption of calcium.23,24 Increasing dietary protein from 0.5 to 2.0 g/kg/day can double the urinary calcium output.25
In mice, induction of metabolic acidosis, thought to mimic a high-protein diet, inhibits osteoblastic alkaline phosphatase activity while stimulating prostaglandin E2 production.26 This in turn increases osteoblastic expression of receptor activator of nuclear factor kappa-B (RANK) ligand, thereby potentially contributing to osteoclastogenesis and osteoclast activity.26
Decreasing dietary protein decreases the recurrence of nephrolithiasis in established stone-formers.27 Still, urine calcium levels are higher in those with idiopathic hypercalciuria than in normal controls at comparable levels of acid excretion, so while protein ingestion could potentially exacerbate the hypercalciuria, it is unlikely to be the sole cause.
Renal calcium leak? The frequent finding of low to low-normal PTH levels in patients with idiopathic hypercalciuria argues against renal calcium "leak" as the etiologic mechanism. If renal leak were the cause, an oral calcium load should suppress PTH; in idiopathic hypercalciuria, however, the PTH response to an oral calcium load is abnormal, with no clinically meaningful change. This lack of PTH response has been seen in both rat and human studies. Patients also excrete normal to high amounts of urinary calcium after prolonged fasting or on a low-calcium diet, and low-calcium diets do not induce hyperparathyroidism in these patients, so the excess urinary calcium must come primarily from bone. Increased levels of 1,25-dihydroxyvitamin D in patients with idiopathic hypercalciuria have been noted.28,29
Whether the cytokine milieu also contributes to the calcitriol levels is unclear, but the high or high-normal plasma level of 1,25-dihydroxyvitamin D may be the reason that the PTH is unperturbed.
IMPACT ON BONE HEALTH
Nephrolithiasis is strongly linked to fracture risk.
The bone mineral density of trabecular bone is more affected by calcium excretion than that of cortical bone.18,20,30 However, lumbar spine bone mineral density has not been consistently found to be lower in patients with hyperabsorptive hypercalciuria. Rather, bone mineral density is correlated inversely with urine calcium excretion in men and women who form stones, but not in patients without nephrolithiasis.
In children
In children, idiopathic hypercalciuria is well known to be linked to osteopenia. This is an important group to study, as adult idiopathic hypercalciuria often begins in childhood. However, the trajectory of bone loss vs gain in children is fraught with variables such as growth, puberty, and body mass index, making this a difficult group from which to extrapolate conclusions to adults.
In men
There is more information on the relationship between hypercalciuria and osteoporosis in men than in women.
In 1998, Melton et al31 published the findings of a 25-year population-based cohort study of 624 patients, 442 (71%) of whom were men, referred for new-onset urolithiasis. The incidence of vertebral fracture was 4 times higher in this group than in patients without stone disease, but there was no difference in the rate of hip, forearm, or nonvertebral fractures. This is consistent with earlier data that report a loss of predominantly cancellous bone associated with urolithiasis.
National Health and Nutrition Examination Survey III data in 2001 focused on a potential relationship between kidney stones and bone mineral density or prevalent spine or wrist fracture.32 More than 14,000 people had hip bone mineral density measurements, of whom 793 (477 men, 316 women) had kidney stones. Men with previous nephrolithiasis had lower femoral neck bone mineral density than those without. Men with kidney stones were also more likely to report prevalent wrist and spine fractures. In women, no difference was noted between those with or without stone disease with respect to femoral neck bone mineral density or fracture incidence.
Cauley et al33 also evaluated the relationship between kidney stones and bone mineral density in the Osteoporotic Fractures in Men (MrOS) study. Of approximately 6,000 men, 13.2% reported a history of kidney stones. These men had lower spine and total hip bone mineral density than controls who had not had kidney stones, and the difference persisted after adjusting for age, race, weight, and other variables. However, further data from this cohort revealed that so few men with osteoporosis had hypercalciuria that its routine measurement was not recommended.34
In women
The relationship between idiopathic hypercalciuria and fractures has been more difficult to establish in women.
Sowers et al35 performed an observational study of 1,309 women ages 20 to 92 with a history of nephrolithiasis. No association was noted between stone disease and reduced bone mineral density in the femoral neck, lumbar spine, or radius.
These epidemiologic studies did not include the cause of the kidney stones (eg, whether or not there was associated hypercalciuria or primary hyperparathyroidism), and typically a diagnosis of idiopathic hypercalciuria was not established.
The difference in association between low bone mineral density or fracture with nephrolithiasis between men and women is not well understood, but the most consistent hypothesis is that the influence of hypoestrogenemia in women is much stronger than that of the hypercalciuria.20
Does the degree of hypercalciuria influence the amount of bone loss?
A few trials have tried to determine whether the amount of calcium in the urine influences the magnitude of bone loss.
In 2003, Asplin et al36 reported that bone mineral density Z-scores differed significantly by urinary calcium excretion, but only in stone-formers. In patients without stone disease, there was no difference in Z-scores according to the absolute value of hypercalciuria. This may be due to a self-selection bias in which stone-formers avoid calcium in the diet and those without stone disease do not.
Three studies looking solely at men with idiopathic hypercalciuria also did not detect a significant difference in bone mineral loss according to degree of hypercalciuria.20,30,37
A POLYGENIC DISORDER?
The potential contribution of genetic changes to the development of idiopathic hypercalciuria has been studied. While there is an increased risk of idiopathic hypercalciuria in first-degree relatives of patients with nephrolithiasis, most experts believe that idiopathic hypercalciuria is likely a polygenic disorder.9,38
EVALUATION AND TREATMENT
The 2014 revised version of the National Osteoporosis Foundation’s “Clinician’s guide to prevention and treatment of osteoporosis”39 noted that hypercalciuria is a risk factor that contributes to the development of osteoporosis and possibly osteoporotic fractures, and that consideration should be given to evaluating for hypercalciuria, but only in selected cases. In patients with kidney stones, the link between hypercalciuria and bone loss and fracture is recognized and should be explored in both women and men at risk of osteoporosis, as 45% to 50% of patients who form calcium stones have hypercalciuria.
Patients with kidney stones who have low bone mass and idiopathic hypercalciuria should increase their daily fluid intake, follow a low-salt and low-animal-protein diet, and take thiazide diuretics to reduce the incidence of further calcium stones. Whether this approach also improves bone mass and strength and reduces the risk of fractures within this cohort requires further study.
Dietary interventions
Don’t restrict calcium intake. Despite the connection between hypercalciuria and nephrolithiasis, restriction of dietary calcium to prevent relapse of nephrolithiasis is a risk factor for negative calcium balance and bone demineralization. Observational studies and prospective clinical trials have demonstrated an increased risk of stone formation with low calcium intake.27,30 Nevertheless, calcium restriction seems logical to many patients with kidney stones, and the practice may independently contribute to lower bone mineral density.
A low-sodium, low-animal-protein diet is beneficial. Though increased intake of sodium or protein is not the main cause of idiopathic hypercalciuria, pharmacologic therapy, especially with thiazide diuretics, is more likely to be successful in the setting of a low-sodium, low-protein diet.
Borghi et al27 studied 2 diets in men with nephrolithiasis and idiopathic hypercalciuria: a low-calcium diet and a low-salt, low-animal-protein, normal-calcium diet. Men on the latter diet experienced a greater reduction in urinary calcium excretion than those on the low-calcium diet.
Breslau et al40 found that urinary calcium excretion fell by 50% in 15 people when they switched from an animal-based to a plant-based protein diet.
Thiazide diuretics
Several epidemiologic and randomized studies41–45 found that thiazide therapy decreased the likelihood of hip fracture in postmenopausal women, men, and premenopausal women. Doses ranged from 12.5 to 50 mg of hydrochlorothiazide. Bone density increased in the radius, total body, total hip, and lumbar spine. One prospective trial noted that fracture risk declined with longer duration of thiazide use, with the largest reduction in those who used thiazides for 8 or more years.46
Thiazides have anticalciuric actions.47 In addition, they have positive effects on osteoblastic cell proliferation and activity, inhibiting osteocalcin expression by osteoblasts, thereby possibly improving bone formation and mineralization.48 The effects of thiazides on bone were reviewed by Sakhaee et al.49
However, fewer studies have looked at thiazides in patients with idiopathic hypercalciuria.
García-Nieto et al50 looked retrospectively at 22 children (average age 11.7 years) with idiopathic hypercalciuria and osteopenia who had received thiazides (19 received chlorthalidone 25 mg daily, and 3 received hydrochlorothiazide 25 mg daily) for an average of 2.4 years, and at 32 similar patients who had not received thiazides. Twelve (55%) of the patients receiving thiazides had an improvement in bone mineral density Z-scores, compared with 23 (72%) of the controls. This finding is confounded by growth that occurred during the study, and both groups demonstrated a significantly increased body mass index and bone mineral apparent density at the end of the trial.
Bushinsky and Favus51 evaluated whether chlorthalidone improved bone quality or structure in rats that were genetically prone to hypercalciuric stones. These rats are uniformly stone-formers, and while they have components of calcium hyperabsorption, they also demonstrate renal hyperexcretion (leak) and enhanced bone mineral resorption.51 When fed a high-calcium diet, they maintain a reduction in bone mineral density and bone strength. Study rats were given chlorthalidone 4 to 5 mg/kg/day. After 18 weeks of therapy, significant improvements were observed in trabecular thickness and connectivity as well as increased vertebral compressive strength.52 No difference in cortical bone was noted.
No randomized, blinded, placebo-controlled trial has yet been done to study the impact of thiazides on bone mineral density or fracture risk in patients with idiopathic hypercalciuria.
In practice, many physicians choose chlorthalidone over hydrochlorothiazide because of chlorthalidone’s longer half-life. Combinations of a thiazide diuretic and a potassium-sparing medication, such as hydrochlorothiazide plus either triamterene or spironolactone, are also employed to reduce the number of pills the patient has to take.
Potassium citrate
When prescribing thiazide diuretics, one should also consider prescribing potassium citrate, as this agent not only prevents hypokalemia but also increases urinary citrate excretion, which can help to inhibit crystallization of calcium salts.6
In a longitudinal study of 28 patients with hypercalciuria,53 combined therapy with a thiazide or indapamide and potassium citrate over a mean of 7 years increased bone density of the lumbar spine by 7.1% and of the femoral neck by 4.1%, compared with treatment in age- and sex-matched normocalcemic peers. In the same study, daily urinary calcium excretion decreased and urinary pH and citrate levels increased; urinary saturation of calcium oxalate decreased by 46%, and stone formation was decreased.
Another trial evaluated 120 patients with idiopathic calcium nephrolithiasis, half of whom were given potassium citrate. Those given potassium citrate experienced an increase in distal radius bone mineral density over 2 years.54 It is theorized that alkalinization may decrease bone turnover in these patients.
Bisphosphonates
As one of the proposed main mechanisms of bone loss in idiopathic hypercalciuria is direct bone resorption, a potential target for therapy is the osteoclast, which bisphosphonates inhibit.
Ruml et al55 studied the impact of alendronate vs placebo in 16 normal men undergoing 3 weeks of strict bedrest. Compared with the placebo group, those who received alendronate had significantly lower 24-hour urine calcium excretion and higher levels of PTH and 1,25-dihydroxyvitamin D.
Weisinger et al56 evaluated the effects of alendronate 10 mg daily in 10 patients who had stone disease with documented idiopathic hypercalciuria and also in 8 normocalciuric patients without stone disease. Alendronate resulted in a sustained reduction of calcium in the urine in the patients with idiopathic hypercalciuria but not in the normocalciuric patients.
Data are somewhat scant as to the effect of bisphosphonates on bone health in the setting of idiopathic hypercalciuria,57,58 and therapy with bisphosphonates is not recommended in patients with idiopathic hypercalciuria outside the realm of postmenopausal osteoporosis or other indications for bisphosphonates approved by the US Food and Drug Administration (FDA).
Calcimimetics
Calcium-sensing receptors are found not only in parathyroid tissue but also in the intestines and kidneys. Locally, elevated plasma calcium in the kidney causes activation of the calcium-sensing receptor, diminishing further calcium reabsorption.59 Agents that increase the sensitivity of the calcium-sensing receptors are classified as calcimimetics.
Cinacalcet is a calcimimetic approved by the FDA for treatment of secondary hyperparathyroidism in patients with chronic kidney disease on dialysis, for the treatment of hypercalcemia in patients with parathyroid carcinoma, and for patients with primary hyperparathyroidism who are unable to undergo parathyroidectomy. In an uncontrolled 5-year study of cinacalcet in patients with primary hyperparathyroidism, there was no significant change in bone density.60
Anti-inflammatory drugs
The role of cytokines in stimulating bone resorption in idiopathic hypercalciuria has led to the investigation of several anti-inflammatory drugs (eg, diclofenac, indomethacin) as potential treatments, but studies have been limited in number and scope.61,62
Omega-3 fatty acids
Omega-3 fatty acids are thought to alter prostaglandin metabolism and to potentially reduce stone formation.63
A retrospective study of 29 patients with stone disease found that, combined with dietary counseling, omega-3 fatty acids could potentially reduce urinary calcium and oxalate excretion and increase urinary citrate in hypercalciuric stone-formers.64
A review of published randomized controlled trials of omega-3 fatty acids in skeletal health found that 4 studies reported positive effects on bone mineral density or bone turnover markers, whereas 5 reported no differences. All trials were small, and none evaluated fracture outcomes.65
- Albright F, Henneman P, Benedict PH, Forbes AP. Idiopathic hypercalciuria: a preliminary report. Proc R Soc Med 1953; 46:1077–1081.
- Pak CY. Pathophysiology of calcium nephrolithiasis. In: Seldin DW, Giebisch G, eds. The Kidney: Physiology and Pathophysiology. New York, NY: Raven Press; 1992:2461–2480.
- Frick KK, Bushinsky DA. Molecular mechanisms of primary hypercalciuria. J Am Soc Nephrol 2003; 14:1082–1095.
- Pacifici R, Rothstein M, Rifas L, et al. Increased monocyte interleukin-1 activity and decreased vertebral bone density in patients with fasting idiopathic hypercalciuria. J Clin Endocrinol Metab 1990; 71:138–145.
- Messa P, Mioni G, Montanaro D, et al. About a primitive osseous origin of the so-called ‘renal hypercalciuria.’ Contrib Nephrol 1987; 58:106–110.
- Zerwekh JE. Bone disease and idiopathic hypercalciuria. Semin Nephrol 2008; 28:133–142.
- Coe FL. Treated and untreated recurrent calcium nephrolithiasis in patients with idiopathic hypercalciuria, hyperuricosuria, or no metabolic disorder. Ann Intern Med 1977; 87:404–410.
- Lemann J Jr. Pathogenesis of idiopathic hypercalciuria and nephrolithiasis. In: Coe FL, Favus MJ, eds. Disorders of Bone and Mineral Metabolism. New York, NY: Raven Press; 1992:685–706.
- Coe FL, Parks JH, Moore ES. Familial idiopathic hypercalciuria. N Engl J Med 1979; 300:337–340.
- Giannini S, Nobile M, Dalle Carbonare L, et al. Hypercalciuria is a common and important finding in postmenopausal women with osteoporosis. Eur J Endocrinol 2003; 149:209–213.
- Tannenbaum C, Clark J, Schwartzman K, et al. Yield of laboratory testing to identify secondary contributors to osteoporosis in otherwise healthy women. J Clin Endocrinol Metab 2002; 87:4431–4437.
- Cerda Gabaroi D, Peris P, Monegal A, et al. Search for hidden secondary causes in postmenopausal women with osteoporosis. Menopause 2010; 17:135–139.
- Rull MA, Cano-García Mdel C, Arrabal-Martín M, Arrabal-Polo MA. The importance of urinary calcium in postmenopausal women with osteoporotic fracture. Can Urol Assoc J 2015; 9:E183–E186.
- Heaney RP, Recker RR, Ryan RA. Urinary calcium in perimenopausal women: normative values. Osteoporos Int 1999; 9:13–18.
- Bleich HL, Moore MJ, Lemann J Jr, Adams ND, Gray RW. Urinary calcium excretion in human beings. N Engl J Med 1979; 301:535–541.
- Li XQ, Tembe V, Horwitz GM, Bushinsky DA, Favus MJ. Increased intestinal vitamin D receptor in genetic hypercalciuric rats. A cause of intestinal calcium hyperabsorption. J Clin Invest 1993; 91:661–667.
- Yao J, Kathpalia P, Bushinsky DA, Favus MJ. Hyperresponsiveness of vitamin D receptor gene expression to 1,25-dihydroxyvitamin D3. A new characteristic of genetic hypercalciuric stone-forming rats. J Clin Invest 1998; 101:2223–2232.
- Pietschmann F, Breslau NA, Pak CY. Reduced vertebral bone density in hypercalciuric nephrolithiasis. J Bone Miner Res 1992; 7:1383–1388.
- Jaeger P, Lippuner K, Casez JP, Hess B, Ackermann D, Hug C. Low bone mass in idiopathic renal stone formers: magnitude and significance. J Bone Miner Res 1994; 9:1525–1532.
- Vezzoli G, Soldati L, Arcidiacono T, et al. Urinary calcium is a determinant of bone mineral density in elderly men participating in the InCHIANTI study. Kidney Int 2005; 67:2006–2014.
- Lemann J Jr, Worcester EM, Gray RW. Hypercalciuria and stones. Am J Kidney Dis 1991; 17:386–391.
- Gokce C, Gokce O, Baydinc C, et al. Use of random urine samples to estimate total urinary calcium and phosphate excretion. Arch Intern Med 1991; 151:1587–1588.
- Curhan GC, Willett WC, Rimm EB, Stampfer MJ. A prospective study of dietary calcium and other nutrients and the risk of symptomatic kidney stones. N Engl J Med 1993; 328:833–838.
- Siener R, Schade N, Nicolay C, von Unruh GE, Hesse A. The efficacy of dietary intervention on urinary risk factors for stone formation in recurrent calcium oxalate stone patients. J Urol 2005; 173:1601–1605.
- Jones AN, Shafer MM, Keuler NS, Crone EM, Hansen KE. Fasting and postprandial spot urine calcium-to-creatinine ratios do not detect hypercalciuria. Osteoporos Int 2012; 23:553–562.
- Frick KK, Bushinsky DA. Metabolic acidosis stimulates RANKL RNA expression in bone through a cyclo-oxygenase-dependent mechanism. J Bone Miner Res 2003; 18:1317–1325.
- Borghi L, Schianchi T, Meschi T, et al. Comparison of two diets for the prevention of recurrent stones in idiopathic hypercalciuria. N Engl J Med 2002; 346:77–84.
- Ghazali A, Fuentes V, Desaint C, et al. Low bone mineral density and peripheral blood monocyte activation profile in calcium stone formers with idiopathic hypercalciuria. J Clin Endocrinol Metab 1997; 82:32–38.
- Broadus AE, Insogna KL, Lang R, Ellison AF, Dreyer BE. Evidence for disordered control of 1,25-dihydroxyvitamin D production in absorptive hypercalciuria. N Engl J Med 1984; 311:73–80.
- Tasca A, Cacciola A, Ferrarese P, et al. Bone alterations in patients with idiopathic hypercalciuria and calcium nephrolithiasis. Urology 2002; 59:865–869.
- Melton LJ 3rd, Crowson CS, Khosla S, Wilson DM, O’Fallon WM. Fracture risk among patients with urolithiasis: a population-based cohort study. Kidney Int 1998; 53:459–464.
- Lauderdale DS, Thisted RA, Wen M, Favus MJ. Bone mineral density and fracture among prevalent kidney stone cases in the Third National Health and Nutrition Examination Survey. J Bone Miner Res 2001; 16:1893–1898.
- Cauley JA, Fullman RL, Stone KL, et al; MrOS Research Group. Factors associated with the lumbar spine and proximal femur bone mineral density in older men. Osteoporos Int 2005; 16:1525–1537.
- Fink HA, Litwack-Harrison S, Taylor BC, et al; Osteoporotic Fractures in Men (MrOS) Study Group. Clinical utility of routine laboratory testing to identify possible secondary causes in older men with osteoporosis: the Osteoporotic Fractures in Men (MrOS) Study. Osteoporos Int 2016; 27:331–338.
- Sowers MR, Jannausch M, Wood C, Pope SK, Lachance LL, Peterson B. Prevalence of renal stones in a population-based study with dietary calcium, oxalate and medication exposures. Am J Epidemiol 1998; 147:914–920.
- Asplin JR, Bauer KA, Kinder J, et al. Bone mineral density and urine calcium excretion among subjects with and without nephrolithiasis. Kidney Int 2003; 63:662–669.
- Letavernier E, Traxer O, Daudon M, et al. Determinants of osteopenia in male renal-stone-disease patients with idiopathic hypercalciuria. Clin J Am Soc Nephrol 2011; 6:1149–1154.
- Vezzoli G, Soldati L, Gambaro G. Update on primary hypercalciuria from a genetic perspective. J Urol 2008; 179:1676–1682.
- Cosman F, de Beur SJ, LeBoff MS, et al; National Osteoporosis Foundation. Clinician’s guide to prevention and treatment of osteoporosis. Osteoporos Int 2014; 25:2359–2381.
- Breslau NA, Brinkley L, Hill KD, Pak CY. Relationship of animal protein-rich diet to kidney stone formation and calcium metabolism. J Clin Endocrinol Metab 1988; 66:140–146.
- Reid IR, Ames RW, Orr-Walker BJ, et al. Hydrochlorothiazide reduces loss of cortical bone in normal postmenopausal women: a randomized controlled trial. Am J Med 2000; 109:362–370.
- Bolland MJ, Ames RW, Horne AM, Orr-Walker BJ, Gamble GD, Reid IR. The effect of treatment with a thiazide diuretic for 4 years on bone density in normal postmenopausal women. Osteoporos Int 2007; 18:479–486.
- LaCroix AZ, Ott SM, Ichikawa L, Scholes D, Barlow WE. Low-dose hydrochlorothiazide and preservation of bone mineral density in older adults. Ann Intern Med 2000; 133:516–526.
- Wasnich RD, Davis JW, He YF, Petrovich H, Ross PD. A randomized, double-masked, placebo-controlled trial of chlorthalidone and bone loss in elderly women. Osteoporos Int 1995; 5:247–251.
- Adams JS, Song CF, Kantorovich V. Rapid recovery of bone mass in hypercalciuric, osteoporotic men treated with hydrochlorothiazide. Ann Intern Med 1999; 130:658–660.
- Feskanich D, Willett WC, Stampfer MJ, Colditz GA. A prospective study of thiazide use and fractures in women. Osteoporos Int 1997; 7:79–84.
- Lamberg BA, Kuhlback B. Effect of chlorothiazide and hydrochlorothiazide on the excretion of calcium in the urine. Scand J Clin Lab Invest 1959; 11:351–357.
- Lajeunesse D, Delalandre A, Guggino SE. Thiazide diuretics affect osteocalcin production in human osteoblasts at the transcription level without affecting vitamin D3 receptors. J Bone Miner Res 2000; 15:894–901.
- Sakhaee K, Maalouf NM, Kumar R, Pasch A, Moe OW. Nephrolithiasis-associated bone disease: pathogenesis and treatment options. Kidney Int 2011; 79:393–403.
- García-Nieto V, Monge-Zamorano M, González-García M, Luis-Yanes MI. Effect of thiazides on bone mineral density in children with idiopathic hypercalciuria. Pediatr Nephrol 2012; 27:261–268.
- Bushinsky DA, Favus MJ. Mechanism of hypercalciuria in genetic hypercalciuric rats. Inherited defect in intestinal calcium transport. J Clin Invest 1988; 82:1585–1591.
- Bushinsky DA, Willett T, Asplin JR, Culbertson C, Che SP, Grynpas M. Chlorthalidone improves vertebral bone quality in genetic hypercalciuric stone-forming rats. J Bone Miner Res 2011; 26:1904–1912.
- Pak CY, Heller HJ, Pearle MS, Odvina CV, Poindexter JR, Peterson RD. Prevention of stone formation and bone loss in absorptive hypercalciuria by combined dietary and pharmacological interventions. J Urol 2003; 169:465–469.
- Vescini F, Buffa A, LaManna G, et al. Long-term potassium citrate therapy and bone mineral density in idiopathic calcium stone formers. J Endocrinol Invest 2005; 28:218–222.
- Ruml LA, Dubois SK, Roberts ML, Pak CY. Prevention of hypercalciuria and stone-forming propensity during prolonged bedrest by alendronate. J Bone Miner Res 1995; 10:655–662.
- Weisinger JR, Alonzo E, Machado C, et al. Role of bones in the physiopathology of idiopathic hypercalciuria: effect of amino-bisphosphonate alendronate. Medicina (B Aires) 1997; 57(suppl 1):45–48. Spanish.
- Heilberg IP, Martini LA, Teixeira SH, et al. Effect of etidronate treatment on bone mass of male nephrolithiasis patients with idiopathic hypercalciuria and osteopenia. Nephron 1998; 79:430–437.
- Bushinsky DA, Neumann KJ, Asplin J, Krieger NS. Alendronate decreases urine calcium and supersaturation in genetic hypercalciuric rats. Kidney Int 1999; 55:234–243.
- Riccardi D, Park J, Lee WS, Gamba G, Brown EM, Hebert SC. Cloning and functional expression of a rat kidney extracellular calcium/polyvalent cation-sensing receptor. Proc Natl Acad Sci USA 1995; 92:131–135.
- Peacock M, Bolognese MA, Borofsky M, et al. Cinacalcet treatment of primary hyperparathyroidism: biochemical and bone densitometric outcomes in a five-year study. J Clin Endocrinol Metab 2009; 94:4860–4867.
- Filipponi P, Mannarelli C, Pacifici R, et al. Evidence for a prostaglandin-mediated bone resorptive mechanism in subjects with fasting hypercalciuria. Calcif Tissue Int 1988; 43:61–66.
- Gomaa AA, Hassan HA, Ghaneimah SA. Effect of aspirin and indomethacin on the serum and urinary calcium, magnesium and phosphate. Pharmacol Res 1990; 22:59–70.
- Buck AC, Davies RL, Harrison T. The protective role of eicosapentaenoic acid (EPA) in the pathogenesis of nephrolithiasis. J Urol 1991; 146:188–194.
- Ortiz-Alvarado O, Miyaoka R, Kriedberg C, et al. Omega-3 fatty acids eicosapentaenoic acid and docosahexaenoic acid in the management of hypercalciuric stone formers. Urology 2012; 79:282–286.
- Orchard TS, Pan X, Cheek F, Ing SW, Jackson RD. A systematic review of omega-3 fatty acids and osteoporosis. Br J Nutr 2012; 107(suppl 2):S253–S260.
KEY POINTS
- Idiopathic hypercalciuria is common in patients with kidney stones and is also present in up to 20% of postmenopausal women with osteoporosis but no history of kidney stones.
- Idiopathic hypercalciuria has been directly implicated as a cause of loss of trabecular bone, especially in men. But reversing the hypercalciuria in this condition has not been definitively shown to diminish fracture incidence.
- Patients with kidney stones who have low bone mass and idiopathic hypercalciuria should increase their daily fluid intake, follow a diet low in salt and animal protein, and take thiazide diuretics to reduce the risk of further calcium stone formation. Whether this approach also improves bone mass and strength and reduces fracture risk in this patient group requires further study.
Detecting and managing device leads inadvertently placed in the left ventricle
Although rare, inadvertent placement of a pacemaker or defibrillator lead in the left ventricle can have serious consequences, including arterial thromboembolism and aortic or mitral valve damage or infection.1–4
This article discusses situations in which lead malpositioning is likely to occur, how to prevent it, how to detect and correct it immediately, and how to manage cases discovered long after implantation.
RARE, BUT LIKELY UNDERREPORTED
In 2011, Rodriguez et al1 reviewed 56 reported cases in which an endocardial lead had been mistakenly placed in the left ventricle. Additional cases have been reported since then, but because many cases go unreported, the true frequency is unknown.
A large single-center retrospective study2 reported a 3.4% incidence of inadvertent lead placement in the left side of the heart, including the cardiac veins.
HOW LEADS CAN END UP IN THE WRONG PLACE
Risk factors for lead malpositioning include abnormal thoracic anatomy, underlying congenital heart disease, and operator inexperience.2
Normally, in single- and double-lead systems, leads are inserted into a cephalic, subclavian, or axillary vein and advanced into the right atrium, right ventricle, or both. However, pacing, sensing, and defibrillation leads have inadvertently been placed in the left ventricular endocardium and even on the epicardial surface.
Leads can end up inside the left ventricle by passing through an unrecognized atrial septal defect, patent foramen ovale, or ventricular septal defect, or by perforating the interventricular septum. Another route into the left ventricle is by gaining vascular access through the axillary or subclavian artery and advancing the lead retrograde across the aortic valve.
Epicardial lead placement may result from perforating the right ventricle5 or inadvertent positioning within the main coronary sinus or in a cardiac vein.
PREVENTION IS THE BEST MANAGEMENT
The best way to manage lead malpositioning is to prevent it in the first place.
Make sure you are in a vein, not an artery! If you are working from the patient’s left side, you should see the guidewire cross the midline on fluoroscopy. Working from either the left or the right side, you can ensure that the guidewire is in the venous system by advancing it into the inferior vena cava and then all the way below the diaphragm (best seen on anteroposterior views). These observations help avoid lead placement in the left ventricle by an inadvertent retrograde aortic approach.
Suspect that you are taking the wrong route to the heart (ie, through the arterial system) if, in the anteroposterior view, the guidewire bends as it approaches the left spinal border. This sign suggests that you are passing retrograde through the ascending aorta and bumping up against the aortic cusps. Occasionally, the wire may pass through the aortic valve without resistance or bending. Further advancement toward the left chest wall will contact the left ventricular endocardium and may provoke ventricular ectopy. Placement in the left ventricle is best seen in the left anterior oblique projection: the lead will cross the spine, or its distal end will point toward the spine, in progressively steeper left anterior oblique views.
Make sure you are in the right ventricle. Even if you have gone through the venous system, you are not home free. Advancing the lead into the right ventricular outflow tract (best seen in the right anterior oblique projection) is a key step in avoiding lead misplacement. In the right ventricular outflow tract, the lead tip should move freely; if it does not, it may be in the coronary sinus or middle cardiac vein.
If a lead passes through a patent foramen ovale or septal defect to the left atrium, a left anterior oblique view should also demonstrate movement toward or beyond the spine. If the lead passes beyond the left heart border, a position in a pulmonary vein is possible; this is often associated with loss of a recordable intracardiac electrogram. A position in a right pulmonary vein is possible but highly unlikely. If a lead passes through a patent foramen ovale or septal defect to the left ventricle, it will point toward the spine in left anterior oblique projections. (See “Postoperative detection by chest radiography.”)
Ventricular paced QRS complexes should show a left bundle branch pattern on electrocardiography (ECG), not a right bundle branch pattern (more about this below). However, when inserting a pacemaker, the sterile field includes the front of the chest and therefore lead V1 is usually omitted, depriving the operator of valuable information.
Fortunately, operators may fluoroscopically view leads intended for the right ventricle in left anterior oblique projections. We recommend beginning at 40° left anterior oblique. In this view, septally positioned right ventricular leads may appear to abut the spine. A right ventricular position is confirmed in a steeper left anterior oblique projection, where the lead should be seen to be away from the spine.4
POSTOPERATIVE DETECTION BY ECG
Careful evaluation of the 12-lead electrocardiogram during ventricular pacing is important for confirming correct lead placement. If ventricular pacing is absent (eg, because the device is programmed to pace only when the intrinsic heart rate falls below a set threshold, and the intrinsic rhythm happens to predominate during evaluation), programming the device to pace the right ventricle 10 beats per minute faster than the intrinsic rate usually suffices. Temporarily disabling atrial pacing and cardiac venous pacing in biventricular devices facilitates interpretation of the paced QRS complex.
Bundle branch block patterns
The typical morphology for paced events originating from the right ventricle has a left bundle branch block pattern, ie, a dominant S wave in leads V1 and V2. Nevertheless, many patients with a safely placed lead in the right ventricle can also demonstrate right bundle branch morphology during pacing,6 ie, a dominant R wave in leads V1 and V2.
Klein et al7 reported on 8 patients who had features of right bundle branch block in leads V1 and V2 and noted that placing these leads 1 interspace lower eliminated the right bundle branch block appearance. The utility of this maneuver is demonstrated in Figure 1.
Almehairi et al8 demonstrated transition to a left bundle branch block-like pattern in V1 in 14 of 26 patients after leads V1 and V2 were moved to the fifth intercostal space. Moving these leads to the sixth intercostal space produced a left bundle branch block-like pattern in all the remaining patients. Additional study is needed to validate this precordial mapping technique.9
Although the Coman and Trohman algorithm suggests that a frontal plane axis of −90° to –180° is specific for left ventricular pacing,6 other reports have identified this axis in the presence of true right ventricular pacing.6,9–12 Therefore, Barold and Giudici9 argue that a frontal plane axis in the right superior quadrant has limited diagnostic value.
POSTOPERATIVE DETECTION BY CHEST RADIOGRAPHY
A lead in the left ventricle may be a subtle finding on an anteroposterior or posteroanterior chest radiograph. The definitive view is the lateral projection, which is also true during intraoperative fluoroscopy.13–15 The tip of a malpositioned left-ventricular lead is characteristically seen farther posterior (toward the spine) in the cardiac silhouette on the lateral view (Figure 3).2 If the lead is properly positioned, the general direction of the middle to distal portion should be away from the spine.
ECHOCARDIOGRAPHY TO CONFIRM
Two-dimensional echocardiography can help to confirm left ventricular placement via an atrial septal defect, patent foramen ovale, or perforation of the interventricular septum.16,17
Three-dimensional echocardiography can facilitate cardiac venous lead placement and assess the impact of right ventricular lead placement on tricuspid valve function.18,19 In one case report, 3-dimensional echocardiography provided a definitive diagnosis of interventricular septal perforation when findings on computed tomography (CT) were indeterminate.20
CT AND MRI: LIMITED ROLES
When echocardiographic findings are equivocal, CT can help diagnose lead perforation. Electrocardiogram-triggered cardiac CT can help visualize lead positions and potential lead perforation. Unfortunately, the precise location of the lead tip (and the diagnosis) can be missed because of streaking (“star”) and beam-hardening artifacts from the metallic lead.21–26 Because of these limitations, as well as radiation exposure and high cost, CT should be used sparingly, if at all, for diagnosing lead malposition.
Technological advances and the increasing use of magnetic resonance imaging (MRI) in clinical practice have led to the development of “MRI-conditional” cardiac implantable electronic devices (ie, safe for undergoing MRI), as well as more lenient regulation of MRI in patients with nonconditional devices.27,28 Although the widely held opinion that patients with a pacemaker or implantable cardioverter defibrillator are not eligible to undergo MRI has largely been abandoned, it seems unlikely that cardiac MRI will become a pivotal tool in assessing lead malposition.
MANAGING MALPOSITIONED LEADS
Inadvertent left ventricular lead placement provides a nidus for thrombus formation. When malposition is identified acutely, the lead should be repositioned immediately by an experienced electrophysiologist.
Treatment of left ventricular lead misplacement discovered late after implantation includes lead removal or chronic anticoagulation with warfarin to prevent thromboemboli.
Long-term anticoagulation
No thromboembolic events have been reported2 in patients with lead malposition who take warfarin and maintain an international normalized ratio of 2.5 to 3.5.
Antiplatelet agents are not enough by themselves.16
The use of direct oral anticoagulants has not been explored in this setting. Use of dabigatran in patients with mechanical heart valves was associated with increased rates of thromboembolic and bleeding complications compared with warfarin.29 Based on these results and an overall lack of evidence, we do not recommend substituting a direct oral anticoagulant for warfarin in the setting of malpositioned left ventricular leads.
Late percutaneous removal
Late lead removal is most appropriate if cardiac surgery is planned for other reasons. Although percutaneous extraction of a malpositioned left ventricular lead was first described over 25 years ago,13 the safety of this procedure remains uncertain.
Kosmidou et al17 reported 2 cases of percutaneous removal of inadvertent transarterial leads using standard interventional cardiology methods for cerebral embolic protection. Distal embolic filter wires were deployed in the left and right internal carotid arteries. A covered stent was deployed at the arterial entry site simultaneously with lead removal, providing immediate and effective hemostasis. Similar protection should be considered during transvenous extraction of a lead that crossed an atrial septal defect or patent foramen ovale.
Nevertheless, not even transesophageal echocardiography can reliably exclude adhered thrombi, and the risk of embolization of fibrous adhesions or thrombi has been cited as a pivotal contraindication to percutaneous lead extraction regardless of modality.16
- Rodriguez Y, Baltodano P, Tower A, Martinez C, Carrillo R. Management of symptomatic inadvertently placed endocardial leads in the left ventricle. Pacing Clin Electrophysiol 2011; 34:1192–1200.
- Ohlow MA, Roos M, Lauer B, Von Korn H, Geller JC. Incidence, predictors, and outcome of inadvertent malposition of transvenous pacing or defibrillation lead in the left heart. Europace 2016; 18:1049–1054.
- Madias C, Trohman RG. Cardiac resynchronization therapy: the state of the art. Expert Rev Cardiovasc Ther 2014; 12:573–587.
- Trohman RG. To the editor—comment on six uneventful years with a pacing lead in the left ventricle. Heart Rhythm 2013; 10:e81.
- Cossú SF. Unusual placement of a coronary sinus lead for resynchronization therapy resulting in late lead fracture. J Innovations Cardiac Rhythm Manage 2013; 4:1148–1153.
- Coman JA, Trohman RG. Incidence and electrocardiographic localization of safe right bundle branch block configurations during permanent ventricular pacing. Am J Cardiol 1995; 76:781–784.
- Klein HO, Beker B, Sareli P, DiSegni E, Dean H, Kaplinsky E. Unusual QRS morphology associated with transvenous pacemakers. The pseudo RBBB pattern. Chest 1985; 87:517–521.
- Almehairi M, Enriquez A, Redfearn D, et al. Right bundle branch block-like pattern during ventricular pacing: a surface electrocardiographic mapping technique to locate the ventricular lead. Can J Cardiol 2015; 31:1019–1024.
- Barold SS, Giudici MC. Renewed interest in the significance of the tall R wave in ECG lead V1 during right ventricular pacing. Expert Rev Med Devices 2016; 13:611–613.
- Almehairi M, Ali FS, Enriquez A, et al. Electrocardiographic algorithms to predict true right ventricular pacing in the presence of right bundle branch block-like pattern. Int J Cardiol 2014; 172:e403–e405.
- Tzeis S, Andrikopoulos G, Weigand S, et al. Right bundle branch block-like pattern during uncomplicated right ventricular pacing and the effect of pacing site. Am J Cardiol 2016; 117:935–939.
- Hemminger EJ, Criley JM. Right ventricular enlargement mimicking electrocardiographic left ventricular pacing. J Electrocardiol 2006; 39:180–182.
- Furman S. Chest PA and lateral. Pacing Clin Electrophysiol 1993; 16:953.
- Trohman RG, Wilkoff BL, Byrne T, Cook S. Successful percutaneous extraction of a chronic left ventricular pacing lead. Pacing Clin Electrophysiol 1991; 14:1448–1451.
- Trohman RG, Kim MH, Pinski SL. Cardiac pacing: the state of the art. Lancet 2004; 364:1701–1719.
- Van Gelder BM, Bracke FA, Oto A, et al. Diagnosis and management of inadvertently placed pacing and ICD leads in the left ventricle: a multicenter experience and review of the literature. Pacing Clin Electrophysiol 2000; 23:877–883.
- Kosmidou I, Karmpaliotis D, Kandzari DE, Dan D. Inadvertent transarterial lead placement in the left ventricle and aortic cusp: percutaneous lead removal with carotid embolic protection and stent graft placement. Indian Pacing Electrophysiol J 2012; 12:269–273.
- Villanueva FS, Heinsimer JA, Burkman MH, Fananapazir L, Halvorsen RA Jr, Chen JT. Echocardiographic detection of perforation of the cardiac ventricular septum by a permanent pacemaker lead. Am J Cardiol 1987; 59:370–371.
- Döring M, Braunschweig F, Eitel C, et al. Individually tailored left ventricular lead placement: lessons from multimodality integration between three-dimensional echocardiography and coronary sinus angiogram. Europace 2013; 15:718–727.
- Mediratta A, Addetia K, Yamat M, et al. 3D echocardiographic location of implantable device leads and mechanism of associated tricuspid regurgitation. JACC Cardiovasc Imaging 2014; 7:337–347.
- Daher IN, Saeed M, Schwarz ER, Agoston I, Rahman MA, Ahmad M. Live three-dimensional echocardiography in diagnosis of interventricular septal perforation by pacemaker lead. Echocardiography 2006; 23:428–429.
- Mak GS, Truong QA. Cardiac CT: imaging of and through cardiac devices. Curr Cardiovasc Imaging Rep 2012; 5:328–336.
- Henrikson CA, Leng CT, Yuh DD, Brinker JA. Computed tomography to assess possible cardiac lead perforation. Pacing Clin Electrophysiol 2006; 29:509–511.
- Hirschl DA, Jain VR, Spindola-Franco H, Gross JN, Haramati LB. Prevalence and characterization of asymptomatic pacemaker and ICD lead perforation on CT. Pacing Clin Electrophysiol 2007; 30:28–32.
- Pang BJ, Lui EH, Joshi SB, et al. Pacing and implantable cardioverter defibrillator lead perforation as assessed by multiplanar reformatted ECG-gated cardiac computed tomography and clinical correlates. Pacing Clin Electrophysiol 2014; 37:537–545.
- Lanzman RS, Winter J, Blondin D, et al. Where does it lead? Imaging features of cardiovascular implantable electronic devices on chest radiograph and CT. Korean J Radiol 2011; 12:611–619.
- van der Graaf AW, Bhagirath P, Götte MJ. MRI and cardiac implantable electronic devices; current status and required safety conditions. Neth Heart J 2014; 22:269–276.
- European Society of Cardiology (ESC), European Heart Rhythm Association (EHRA); Brignole M, Auricchio A, Baron-Esquivias G, et al. 2013 ESC guidelines on cardiac pacing and cardiac resynchronization therapy: the Task Force on cardiac pacing and resynchronization therapy of the European Society of Cardiology (ESC). Developed in collaboration with the European Heart Rhythm Association (EHRA). Europace 2013; 15:1070–1118.
- Eikelboom JW, Connolly SJ, Brueckmann M, et al; RE-ALIGN Investigators. Dabigatran versus warfarin in patients with mechanical heart valves. N Engl J Med 2013; 369:1206–1214.
- Rodriguez Y, Baltodano P, Tower A, Martinez C, Carrillo R. Management of symptomatic inadvertently placed endocardial leads in the left ventricle. Pacing Clin Electrophysiol 2011; 34:1192–1200.
- Ohlow MA, Roos M, Lauer B, Von Korn H, Geller JC. Incidence, predictors, and outcome of inadvertent malposition of transvenous pacing or defibrillation lead in the left heart. Europace 2016; 18:1049–1054.
- Madias C, Trohman RG. Cardiac resynchronization therapy: the state of the art. Expert Rev Cardiovasc Ther 2014; 12:573–587.
- Trohman RG. To the editor—comment on six uneventful years with a pacing lead in the left ventricle. Heart Rhythm 2013; 10:e81.
- Cossú SF. Unusual placement of a coronary sinus lead for resynchronization therapy resulting in late lead fracture. J Innovations Cardiac Rhythm Manage 2013; 4:1148–1153.
- Coman JA, Trohman RG. Incidence and electrocardiographic localization of safe right bundle branch block configurations during permanent ventricular pacing. Am J Cardiol 1995; 76:781–784.
- Klein HO, Beker B, Sareli P, DiSegni E, Dean H, Kaplinsky E. Unusual QRS morphology associated with transvenous pacemakers. The pseudo RBBB pattern. Chest 1985; 87:517–521.
- Almehairi M, Enriquez A, Redfearn D, et al. Right bundle branch block-like pattern during ventricular pacing: a surface electrocardiographic mapping technique to locate the ventricular lead. Can J Cardiol 2015; 31:1019–1024.
- Barold SS, Giudici MC. Renewed interest in the significance of the tall R wave in ECG lead V1 during right ventricular pacing. Expert Rev Med Devices 2016; 13:611–613.
- Almehairi M, Ali FS, Enriquez A, et al. Electrocardiographic algorithms to predict true right ventricular pacing in the presence of right bundle branch block-like pattern. Int J Cardiol 2014; 172:e403–e405.
- Tzeis S, Andrikopoulos G, Weigand S, et al. Right bundle branch block-like pattern during uncomplicated right ventricular pacing and the effect of pacing site. Am J Cardiol 2016; 117:935–939.
- Hemminger EJ, Criley JM. Right ventricular enlargement mimicking electrocardiographic left ventricular pacing. J Electrocardiol 2006; 39:180–182.
- Furman S. Chest PA and lateral. Pacing Clin Electrophysiol 1993; 16:953.
- Trohman RG, Wilkoff BL, Byrne T, Cook S. Successful percutaneous extraction of a chronic left ventricular pacing lead. Pacing Clin Electrophysiol 1991; 14:1448–1451.
- Trohman RG, Kim MH, Pinski SL. Cardiac pacing: the state of the art. Lancet 2004; 364:1701–1719.
- Van Gelder BM, Bracke FA, Oto A, et al. Diagnosis and management of inadvertently placed pacing and ICD leads in the left ventricle: a multicenter experience and review of the literature. Pacing Clin Electrophysiol 2000; 23:877–883.
- Kosmidou I, Karmpaliotis D, Kandzari DE, Dan D. Inadvertent transarterial lead placement in the left ventricle and aortic cusp: percutaneous lead removal with carotid embolic protection and stent graft placement. Indian Pacing Electrophysiol J 2012; 12:269–273.
- Villanueva FS, Heinsimer JA, Burkman MH, Fananapazir L, Halvorsen RA Jr, Chen JT. Echocardiographic detection of perforation of the cardiac ventricular septum by a permanent pacemaker lead. Am J Cardiol 1987; 59:370–371.
- Döring M, Braunschweig F, Eitel C, et al. Individually tailored left ventricular lead placement: lessons from multimodality integration between three-dimensional echocardiography and coronary sinus angiogram. Europace 2013; 15:718–727.
- Mediratta A, Addetia K, Yamat M, et al. 3D echocardiographic location of implantable device leads and mechanism of associated tricuspid regurgitation. JACC Cardiovasc Imaging 2014; 7:337–347.
- Daher IN, Saeed M, Schwarz ER, Agoston I, Rahman MA, Ahmad M. Live three-dimensional echocardiography in diagnosis of interventricular septal perforation by pacemaker lead. Echocardiography 2006; 23:428–429.
- Mak GS, Truong QA. Cardiac CT: imaging of and through cardiac devices. Curr Cardiovasc Imaging Rep 2012; 5:328–336.
- Henrikson CA, Leng CT, Yuh DD, Brinker JA. Computed tomography to assess possible cardiac lead perforation. Pacing Clin Electrophysiol 2006; 29:509–511.
- Hirschl DA, Jain VR, Spindola-Franco H, Gross JN, Haramati LB. Prevalence and characterization of asymptomatic pacemaker and ICD lead perforation on CT. Pacing Clin Electrophysiol 2007; 30:28–32.
- Pang BJ, Lui EH, Joshi SB, et al. Pacing and implantable cardioverter defibrillator lead perforation as assessed by multiplanar reformatted ECG-gated cardiac computed tomography and clinical correlates. Pacing Clin Electrophysiol 2014; 37:537–545.
- Lanzman RS, Winter J, Blondin D, et al. Where does it lead? Imaging features of cardiovascular implantable electronic devices on chest radiograph and CT. Korean J Radiol 2011; 12:611–619.
- van der Graaf AW, Bhagirath P, Götte MJ. MRI and cardiac implantable electronic devices; current status and required safety conditions. Neth Heart J 2014; 22:269–276.
- European Society of Cardiology (ESC), European Heart Rhythm Association (EHRA); Brignole M, Auricchio A, Baron-Esquivias G, et al. 2013 ESC guidelines on cardiac pacing and cardiac resynchronization therapy: the Task Force on cardiac pacing and resynchronization therapy of the European Society of Cardiology (ESC). Developed in collaboration with the European Heart Rhythm Association (EHRA). Europace 2013; 15:1070–1118.
- Eikelboom JW, Connolly SJ, Brueckmann M, et al; RE-ALIGN Investigators. Dabigatran versus warfarin in patients with mechanical heart valves. N Engl J Med 2013; 369:1206–1214.
KEY POINTS
- During device implantation, fluoroscopy in progressively lateral left anterior oblique views should be used to ensure correct lead position.
- After implantation, malposition can almost always be detected promptly by examining a 12-lead electrocardiogram for the paced QRS morphology and by lateral chest radiography.
- Echocardiography and computed tomography may enhance diagnostic accuracy and clarify equivocal findings.
- Late surgical correction of a malpositioned lead is best done when a patient is undergoing cardiac surgery for other reasons.
- Long-term warfarin therapy is recommended to prevent thromboembolism if malpositioning cannot be corrected.
High users of healthcare: Strategies to improve care, reduce costs
Emergency departments are not primary care clinics, but some patients use them that way. This relatively small group of patients consumes a disproportionate share of healthcare at great cost, earning them the label of “high users.” Mostly poor and often burdened with mental illness and addiction, they are not necessarily sicker than other patients, and they do not enjoy better outcomes from the extra money spent on them. (Another subset of high users, those with end-stage chronic disease, is outside the scope of this review.)
Herein lies an opportunity. If—and this is a big if—we could manage their care in a systematic way instead of haphazardly, proactively instead of reactively, with continuity of care instead of episodically, and in a way that is convenient for the patient, we might be able to improve quality and save money.
A DISPROPORTIONATE SHARE OF COSTS
In the United States in 2012, the 5% of the population who were the highest users were responsible for 50% of healthcare costs.1 The mean cost per person in this group was more than $43,000 annually. The top 1% of users accounted for nearly 23% of all expenditures, averaging nearly $98,000 per patient per year—10 times more than the average yearly cost per patient.
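The cost-concentration figures cited above can be reproduced from any table of per-patient annual expenditures by ranking patients by cost and summing the top slice. The sketch below is purely illustrative: the expenditure values are synthetic, not MEPS data, and the function name is our own.

```python
# Illustrative sketch of how cost concentration among "high users"
# is computed. The cost figures below are synthetic, not MEPS data.

def top_share(costs, fraction):
    """Share of total spending attributable to the top `fraction` of patients."""
    ranked = sorted(costs, reverse=True)
    k = max(1, int(len(ranked) * fraction))  # number of patients in the top slice
    return sum(ranked[:k]) / sum(ranked)

# A skewed synthetic population of 1,000 patients: a handful of very
# expensive patients and many inexpensive ones.
costs = [90_000] * 5 + [40_000] * 5 + [3_000] * 990

print(f"Top 1% share of spending: {top_share(costs, 0.01):.0%}")
print(f"Top 5% share of spending: {top_share(costs, 0.05):.0%}")
```

Even this mildly skewed toy distribution concentrates roughly a fifth of all spending in the top 1% of patients; real claims data are more skewed still.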
CARE IS OFTEN INAPPROPRIATE AND UNNECESSARY
In addition to being disproportionately expensive, the care that these patients receive is often inappropriate and unnecessary for the severity of their disease.
A 2007–2009 study2 of 1,969 patients who had visited the emergency department 10 or more times in a year found they received more than twice as many computed tomography (CT) scans as a control group of infrequent users (< 3 visits/year). This occurred even though they were not as sick as infrequent users, based on significantly lower hospital admission rates (11.1% vs 17.9%; P < .001) and mortality rates (0.7% vs 1.5%; P < .002).2
This inverse relationship between emergency department use and illness severity was even more exaggerated at the upper extreme of the use curve. The highest users (> 29 visits to the emergency department in a year) had the lowest triage acuity and hospital admission rates but the highest number of CT scans. Charges per visit were lower among frequent users, but total charges rose steadily with increasing emergency department use, accounting for significantly more costs per year.2
We believe that one reason these patients receive more medical care than necessary is that their medical records are too large and complex for the average physician to distill effectively in a 20-minute physician-patient encounter. Physicians therefore simply order more tests, procedures, and admissions, many of which are medically unnecessary and redundant.
WHAT DRIVES HIGH COST?
Mental illness and chemical dependence
Drug addiction, mental illness, and poverty frequently accompany (and influence) high-use behavior, particularly in patients without end-stage diseases.
Szekendi et al,3 in a study of 28,291 patients who had been admitted at least 5 times in a year in a Chicago health system, found that these high users were 2 to 3 times more likely to suffer from comorbid depression (40% vs 13%), psychosis (18% vs 5%), recreational drug dependence (20% vs 7%), and alcohol abuse (16% vs 7%) than non-high-use hospitalized patients.3
Mercer et al4 conducted a study at Duke University Medical Center, Durham, NC, aimed at reducing emergency department visits and hospital admissions among 24 of its highest users. They found that 23 (96%) were either addicted to drugs or mentally ill, and 20 (83%) suffered from chronic pain.4
Drug abuse among high users is becoming even more relevant as the opioid epidemic worsens. Given that most high users suffer from chronic pain, and that many develop an opioid addiction in the course of its treatment, physicians have a moral imperative to reduce the prevalence of drug abuse in this population.
Low socioeconomic status
Low socioeconomic status is an important factor among high users, as it is highly associated with greater disease severity, which usually increases cost without any guarantee of an associated increase in quality. Data suggest that patients of low socioeconomic status are twice as likely to require urgent emergency department visits, 4 times as likely to require admission to the hospital, and, importantly, about half as likely to use ambulatory care compared with patients of higher socioeconomic status.5 While this pattern of low-quality, high-cost spending in acute care settings reflects spending in the healthcare system at large, the pattern is greatly exaggerated among high users.
Lost to follow-up
Low socioeconomic status also complicates communication and follow-up. In a 2013 study, physician researchers in St. Paul, MN, documented attempts to interview 64 recently discharged high users. They could not reach 47 (73%) of them, for reasons largely attributable to low socioeconomic status, such as disconnected phone lines and changes in address.6
Clearly, the usual contact methods for follow-up care after discharge, such as phone calls and mailings, are unlikely to be effective in coordinating the outpatient care of these individuals.
Additionally, we must make primary care more convenient, gain our patients’ trust, and find ways to engage patients in follow-up without relying on traditional means of communication.
Do high users have medical insurance?
Surprisingly, most high users of the emergency department have health insurance. The Chicago health system study3 found that most (72.4%) of their high users had either Medicare or private health insurance, while 27.6% had either Medicaid or no insurance (compared with 21.6% in the general population). Other studies also found that most of the frequent emergency department users are insured,7 although the overall percentage who rely on publicly paid insurance is greater than in the population at large.
Many prefer acute care over primary care
Although one might think that high users go to the emergency department because they have nowhere else to go for care, a report published in 2013 by Kangovi et al5 suggests another reason: they prefer the emergency department. The researchers interviewed 40 urban patients of low socioeconomic status, who consistently cited the 24-hour, no-appointment-necessary structure of the emergency department as an advantage over primary care. The appeal of this flexibility makes sense if one reflects on how difficult it is for even high-functioning individuals to schedule and keep medical appointments.
Specific reasons for preferring the emergency department included the following:
Affordability. Even if their insurance fully paid for visits to their primary care physician, that physician was likely to refer them to specialists, whose visits required a copay and another day off work. The emergency department is cheaper for the patient, and it is a “one-stop shop.” Patients also appreciated that the emergency department guarantees seeing a physician regardless of proof of insurance, a guarantee that primary care and specialist offices do not offer.
Accessibility. For those without a car, public transportation and even patient transportation services are inconvenient and unreliable, whereas emergency medical services will take you to the emergency department.
Accommodations. Although medical centers may tout their same-day appointments, often same-day appointments are all that they have—and you have no choice about the time. You have to call first thing in the morning and stay on hold for a long time, and then when you finally get through, all the same-day appointments are gone.
Availability. Patients said they often had a hard time getting timely medical advice from their primary care physicians. When they could get through to their primary care physicians on the phone, they would be told to go to the emergency department.
Acceptability. Men, especially, feel they need to be very sick indeed to seek medical care, so going to the emergency department is more acceptable.
Trust in the provider. For reasons that were not entirely clear, patients felt that acute care providers were more trustworthy, competent, and compassionate than primary care physicians.5
None of these reasons for using the emergency department has anything to do with disease severity, which supports the findings that high users of the emergency department were not as sick as their normal-use peers.2
QUALITY IMPROVEMENT AND COST-REDUCTION STRATEGIES
Efforts are being made to reduce the cost of healthcare for high users while improving the quality of their care. Promising strategies focus on coordinating care management, creating individualized patient care plans, and improving the components and instructions of discharge summaries.
Care management organizations
A care management organization (CMO) model has emerged as a strategy for quality improvement and cost reduction in the high-use population. In this model, social workers, health coaches, nurses, mid-level providers, and physicians collaborate on designing individualized care plans to meet the specific needs of patients.
Teams typically work in stepwise fashion, first identifying and engaging patients at high risk of poor outcomes and unnecessary care, often using sophisticated quantitative, risk-prediction tools. Then, they perform health assessments and identify potential interventions aimed at preventing expensive acute-care medical interventions. Third, they work with patients to rapidly identify and effectively respond to changes in their conditions and direct them to the most appropriate medical setting, typically primary or urgent care.
Effective models
In 1998, the Camden (NJ) Coalition of Healthcare Providers established a model for CMO care plans. Starting with the first 36 patients enrolled in the program, hospital admissions and emergency department visits were cut by 47% (from 62 to 37 per month), and collective hospital costs were cut by 56% (from $1.2 million to about $500,000 per month).8 It should be noted that this was a small, nonrandomized study and these preliminary numbers did not take into account the cost of outpatient physician visits or new medications. Thus, how much money this program actually saves is not clear.
Similar programs have had similar results. A nurse-led care coordination program in Doylestown, PA, showed an impressive 25% reduction in annual mortality and a 36% reduction in overall costs during a 10-year period.9
A program in Atlantic City, NJ, combined the typical CMO model with a primary care clinic to provide high users with unlimited access, while paying its providers in a capitation model (as opposed to fee for service). It achieved a 40% reduction in yearly emergency department visits and hospital admissions.8
Patient care plans
Individualized patient care plans for high users are among the most promising tools for reducing costs and improving quality in this group. They are low-cost and relatively easy to implement. The goal of these care plans is to provide practitioners with a concise care summary to help them make rational and consistent medical decisions.
Typically, a care plan is written by an interdisciplinary committee composed of physicians, nurses, and social workers. It is based on the patient’s pertinent medical and psychiatric history, which may include recent imaging results or other relevant diagnostic tests. It provides suggestions for managing complex chronic issues, such as drug abuse, that lead to high use of healthcare resources.
These care plans provide a rational and prespecified approach to workup and management, typically including a narcotic prescription protocol, regardless of the setting or the number of providers who see the patient. Practitioners guided by effective care plans are much more likely to effectively navigate a complex patient encounter as opposed to looking through extensive medical notes and hoping to find relevant information.
Effective models
Data show these plans can be effective. For example, Regions Hospital in St. Paul, MN, implemented patient care plans in 2010. During the first 4 months, hospital admissions in the first 94 patients were reduced by 67%.10
A study of high users at Duke University Medical Center reported similar results. One year after starting care plans, inpatient admissions had decreased by 50.5%, readmissions had decreased by 51.5%, and variable direct costs per admission were reduced by 35.8%. Paradoxically, emergency department visits went up, but this anomaly was driven by 134 visits incurred by a single dialysis patient. After removing this patient from the data, emergency department visits were relatively stable.4
Better discharge summaries
Although improving discharge summaries is not a novel concept, changing the summary from a historical document to a proactive discharge plan has the potential to prevent readmissions and promote a durable de-escalation in care acuity.
For example, when moving a patient to a subacute care facility, providing a concise summary of which treatments worked and which did not, a list of comorbidities, and a list of medications and strategies to consider, can help the next providers to better target their plan of care. Studies have shown that nearly half of discharge statements lack important information on treatments and tests.11
Improvement can be as simple as encouraging practitioners to construct their summaries in an “if-then” format. Instead of noting, for instance, that “Mr. Smith was treated for pneumonia with antibiotics and discharged to a rehab facility,” the following would be more useful: “Family would like to see if Mr. Smith can get back to his functional baseline after his acute pneumonia. If he clinically does not do well over the next 1 to 2 weeks and has a poor quality of life, then family would like to pursue hospice.”
In addition to shifting the philosophy, we believe that providing timely discharge summaries is a fundamental, high-yield aspect of ensuring their effectiveness. As an example, patients being discharged to a skilled nursing facility should have a discharge summary completed and in hand before leaving the hospital.
Evidence suggests that timely writing of discharge summaries improves their quality. In a retrospective cohort study published in 2012, discharge summaries created more than 24 hours after discharge were less likely to include important plan-of-care components.12
FUTURE NEEDS
Randomized trials
Although initial results have been promising for the strategies outlined above, much of the apparent cost reduction may be related to study design rather than to the interventions themselves.
For example, Hong et al13 examined 18 of the more promising CMOs that had reported initial cost savings. Of these, only 4 had conducted randomized controlled trials. Further analysis showed that the initial cost reduction reported in most of these randomized controlled trials was generated primarily by small subgroups.14
These results, however, do not necessarily reflect an inherent failure in the system. We contend that they merely demonstrate that CMOs and care plan administrators need to be more selective about whom they enroll, either by targeting patients at the extremes of the usage curve or by identifying patient characteristics and usage parameters amenable to cost reduction and quality improvement strategies.
Better social infrastructure
Although patient care plans and CMOs have been effective in managing high users, we believe that the most promising quality improvement and cost-reduction strategy involves redirecting much of the expensive healthcare spending to the social determinants of health (eg, homelessness, mental illness, low socioeconomic status).
Among developed countries, the United States has the highest healthcare spending and the lowest social service spending as a percentage of its gross domestic product (Figure 1).15 Although seemingly discouraging, these data can actually be interpreted as hopeful, as they support the notion that the inefficiencies of our current system are not part of an inescapable reality, but rather reflect a system that has evolved uniquely in this country.
Using the available social programs
Exemplifying this medical and social services balance is a high user who visited her local emergency department 450 times in 1 year for reasons primarily related to homelessness.16 Each time, the medical system (as it is currently designed to do) applied a short-term medical solution to this patient’s problems and discharged her home, ie, back to the street.
But this patient’s high use was really a manifestation of a deeper social issue: homelessness. When the medical staff eventually noted how much this lack of stable shelter was contributing to her pattern of use, she was referred to appropriate social resources and provided with the housing she needed. Her hospital visits decreased from 450 to 12 in the subsequent year, amounting to a huge cost reduction and a clear improvement in her quality of life.
Similar encouraging results have been seen when available social programs are applied to the high-use population at large, which is particularly reassuring given this population’s preponderance of low socioeconomic status, mental illness, and homelessness. (The prevalence of homelessness in this group is roughly 20%, depending on how a high user is defined.)
New York Medicaid, for example, has a housing program that provides stable shelter outside of acute care medical settings for patients at a rate as low as $50 per day, compared with area hospital costs that often exceed $2,200 daily.17 A similar program in Westchester County, NY, reported a 45.9% reduction in inpatient costs and a 15.4% reduction in emergency department visits among 61 of its highest users after 2 years of enrollment.17
Need to reform privacy laws
Although legally daunting, reform of the Health Insurance Portability and Accountability Act (HIPAA) and other privacy laws in favor of a more open model of information sharing, particularly for high-risk patients, holds great opportunity for quality improvement. For patients who obtain their care from several healthcare facilities, the documentation is often inscrutable. If some of the HIPAA regulations and other patient privacy laws were exchanged for rules more akin to the current model of narcotic prescription tracking, for example, physicians would be better equipped to provide safe, organized, and efficient medical care for high-use patients.
Need to reform the system
A fundamental flaw in our healthcare system, which is largely based on a fee-for-service model, is that it was not designed for patients who use the system at the highest frequency and greatest cost. Also, it does not account for the psychosocial factors that beset many high-use patients. As such, it is imperative for the safety of our patients as well as the viability of the healthcare system that we change our historical way of thinking and reform this system that provides high users with care that is high-cost, low-quality, and not patient-centered.
IMPROVING QUALITY, REDUCING COST
High users of emergency services are a medically and socially complex group, predominantly characterized by low socioeconomic status and high rates of mental illness and drug dependency. They are not sicker than other patients, yet despite their increased healthcare use they do not have better outcomes. Improving those outcomes requires both medical and social efforts.
Among the effective medical efforts are strategies aimed at creating individualized patient care plans, using coordinated care teams, and improving discharge summaries. Addressing patients’ social factors, such as homelessness, is more difficult, but healthcare systems can help patients navigate the available social programs. These strategies are part of a comprehensive care plan that can help reduce the cost and improve the quality of healthcare for high users.
- Cohen SB; Agency for Healthcare Research and Quality. Statistical Brief #359. The concentration of health care expenditures and related expenses for costly medical conditions, 2009. http://meps.ahrq.gov/mepsweb/data_files/publications/st359/stat359.pdf. Accessed December 18, 2017.
- Oostema J, Troost J, Schurr K, Waller R. High and low frequency emergency department users: a comparative analysis of morbidity, diagnostic testing, and health care costs. Ann Emerg Med 2011; 58:S225. Abstract 142.
- Szekendi MK, Williams MV, Carrier D, Hensley L, Thomas S, Cerese J. The characteristics of patients frequently admitted to academic medical centers in the United States. J Hosp Med 2015; 10:563–568.
- Mercer T, Bae J, Kipnes J, Velazquez M, Thomas S, Setji N. The highest utilizers of care: individualized care plans to coordinate care, improve healthcare service utilization, and reduce costs at an academic tertiary care center. J Hosp Med 2015; 10:419–424.
- Kangovi S, Barg FK, Carter T, Long JA, Shannon R, Grande D. Understanding why patients of low socioeconomic status prefer hospitals over ambulatory care. Health Aff (Millwood) 2013; 32:1196–1203.
- Melander I, Winkelman T, Hilger R. Analysis of high utilizers’ experience with specialized care plans. J Hosp Med 2014; 9(suppl 2):Abstract 229.
- LaCalle EJ, Rabin EJ, Genes NG. High-frequency users of emergency department care. J Emerg Med 2013; 44:1167–1173.
- Gawande A. The Hot Spotters. The New Yorker 2011. www.newyorker.com/magazine/2011/01/24/the-hot-spotters. Accessed December 18, 2017.
- Coburn KD, Marcantonio S, Lazansky R, Keller M, Davis N. Effect of a community-based nursing intervention on mortality in chronically ill older adults: a randomized controlled trial. PLoS Med 2012; 9:e1001265.
- Hilger R, Melander I, Winkelman T. Is specialized care plan work sustainable? A follow-up on HealthPartners’ experience with patients who are high utilizers. Society of Hospital Medicine Annual Meeting, Las Vegas, NV, March 24–27, 2014. www.shmabstracts.com/abstract/is-specialized-care-plan-work-sustainable-a-followup-on-healthpartners-experience-with-patients-who-are-highutilizers. Accessed December 18, 2017.
- Kripalani S, LeFevre F, Phillips CO, Williams MV, Basaviah P, Baker DW. Deficits in communication and information transfer between hospital-based and primary care physicians: implications for patient safety and continuity of care. JAMA 2007; 297:831–841.
- Kind AJ, Thorpe CT, Sattin JA, Walz SE, Smith MA. Provider characteristics, clinical-work processes and their relationship to discharge summary quality for sub-acute care patients. J Gen Intern Med 2012; 27:78–84.
- Hong CS, Siegel AL, Ferris TG. Caring for high-need, high-cost patients: what makes for a successful care management program? Issue Brief (Commonwealth Fund) 2014; 19:1–19.
- Williams B. Limited effects of care management for high utilizers on total healthcare costs. Am J Managed Care 2015; 21:e244–e246.
- Organization for Economic Co-operation and Development. Health at a Glance 2009: OECD Indicators. Paris, France: OECD Publishing; 2009.
- Emeche U. Is a strategy focused on super-utilizers equal to the task of health care system transformation? Yes. Ann Fam Med 2015; 13:6–7.
- Burns J. Do we overspend on healthcare, underspend on social needs? Managed Care. http://ghli.yale.edu/news/do-we-overspend-health-care-underspend-social-needs. Accessed December 18, 2017.
Emergency departments are not primary care clinics, but some patients use them that way. This relatively small group of patients consumes a disproportionate share of healthcare at great cost, earning them the label of “high users.” Mostly poor and often burdened with mental illness and addiction, they are not necessarily sicker than other patients, and they do not enjoy better outcomes from the extra money spent on them. (Another subset of high users, those with end-stage chronic disease, is outside the scope of this review.)
Herein lies an opportunity. If—and this is a big if—we could manage their care in a systematic way instead of haphazardly, proactively instead of reactively, with continuity of care instead of episodically, and in a way that is convenient for the patient, we might be able to improve quality and save money.
A DISPROPORTIONATE SHARE OF COSTS
In the United States in 2012, the 5% of the population who were the highest users were responsible for 50% of healthcare costs.1 The mean cost per person in this group was more than $43,000 annually. The top 1% of users accounted for nearly 23% of all expenditures, averaging nearly $98,000 per patient per year, roughly 23 times the mean yearly cost per patient.
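As a quick sanity check of the concentration figures quoted above (a sketch using only those percentages, not the underlying survey data), note that a group’s share of spending divided by its share of the population equals its multiple of the overall mean cost:

```python
# Check the internal consistency of the cost-concentration figures above.
# share_of_spending / share_of_population = multiple of the overall mean.

top5_mean = 43_000                    # reported mean annual cost, top 5% of users
top5_pop, top5_share = 0.05, 0.50     # 5% of patients, 50% of spending
top1_mean = 98_000                    # reported mean annual cost, top 1% of users
top1_pop, top1_share = 0.01, 0.23     # 1% of patients, 23% of spending

# Multiple of the overall mean represented by each group
mult_top5 = top5_share / top5_pop     # 10x the mean
mult_top1 = top1_share / top1_pop     # 23x the mean

# Implied overall mean annual cost per patient, derived two ways
overall_from_top5 = top5_mean / mult_top5   # $4,300
overall_from_top1 = top1_mean / mult_top1   # ~$4,260 (closely agrees)

print(f"top 5% spend {mult_top5:.0f}x the mean; top 1% spend {mult_top1:.0f}x")
print(f"implied overall mean: ${overall_from_top5:,.0f} vs ${overall_from_top1:,.0f}")
```

The two independently derived estimates of the overall mean agree to within about 1%, which suggests the reported figures are internally consistent.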
CARE IS OFTEN INAPPROPRIATE AND UNNECESSARY
In addition to being disproportionately expensive, the care that these patients receive is often inappropriate and unnecessary for the severity of their disease.
A 2007–2009 study2 of 1,969 patients who had visited the emergency department 10 or more times in a year found they received more than twice as many computed tomography (CT) scans as a control group of infrequent users (< 3 visits/year). This occurred even though they were not as sick as infrequent users, based on significantly lower hospital admission rates (11.1% vs 17.9%; P < .001) and mortality rates (0.7% vs 1.5%; P < .002).2
This inverse relationship between emergency department use and illness severity was even more exaggerated at the upper extreme of the use curve. The highest users (> 29 visits to the emergency department in a year) had the lowest triage acuity and hospital admission rates but the highest number of CT scans. Charges per visit were lower among frequent users, but total charges rose steadily with increasing emergency department use, accounting for significantly more costs per year.2
We believe that one reason these patients receive more medical care than necessary is that their medical records are too large and complex for the average physician to distill effectively in a 20-minute physician-patient encounter. Physicians therefore simply order more tests, procedures, and admissions, many of which are medically unnecessary and redundant.
WHAT DRIVES HIGH COST?
Mental illness and chemical dependence
Drug addiction, mental illness, and poverty frequently accompany (and influence) high-use behavior, particularly in patients without end-stage diseases.
Szekendi et al,3 in a study of 28,291 patients who had been admitted at least 5 times in a year in a Chicago health system, found that these high users were 2 to 3 times more likely to suffer from comorbid depression (40% vs 13%), psychosis (18% vs 5%), recreational drug dependence (20% vs 7%), and alcohol abuse (16% vs 7%) than non-high-use hospitalized patients.3
Mercer et al4 conducted a study at Duke University Medical Center, Durham, NC, aimed at reducing emergency department visits and hospital admissions among 24 of its highest users. They found that 23 (96%) were either addicted to drugs or mentally ill, and 20 (83%) suffered from chronic pain.4
Drug abuse among high users is becoming even more relevant as the opioid epidemic worsens. Given that most patients requiring high levels of care suffer from chronic pain and many of them develop an opioid addiction while treating their pain, physicians have a moral imperative to reduce the prevalence of drug abuse in this population.
Low socioeconomic status
Low socioeconomic status is an important factor among high users: it is strongly associated with greater disease severity, which increases cost without any guarantee of a corresponding increase in quality. Data suggest that, compared with patients of higher socioeconomic status, patients of low socioeconomic status are twice as likely to require urgent emergency department visits, 4 times as likely to require admission to the hospital, and, importantly, about half as likely to use ambulatory care.5 While this pattern of low-quality, high-cost spending in acute care settings reflects spending in the healthcare system at large, the pattern is greatly exaggerated among high users.
Lost to follow-up
Low socioeconomic status also complicates communication and follow-up. In a 2013 study, physician researchers in St. Paul, MN, documented attempts to interview 64 recently discharged high users. They could not reach 47 (73%) of them, for reasons largely attributable to low socioeconomic status, such as disconnected phone lines and changes in address.6
Clearly, the usual contact methods for follow-up care after discharge, such as phone calls and mailings, are unlikely to be effective in coordinating the outpatient care of these individuals.
Additionally, we must make primary care more convenient, gain our patients’ trust, and find ways to engage patients in follow-up that do not rely on traditional means of communication.
Do high users have medical insurance?
Surprisingly, most high users of the emergency department have health insurance. The Chicago health system study3 found that most (72.4%) of their high users had either Medicare or private health insurance, while 27.6% had either Medicaid or no insurance (compared with 21.6% in the general population). Other studies also found that most of the frequent emergency department users are insured,7 although the overall percentage who rely on publicly paid insurance is greater than in the population at large.
Many prefer acute care over primary care
Although one might think that high users go to the emergency department because they have nowhere else to go for care, a report published in 2013 by Kangovi et al5 suggests another reason: they prefer the emergency department. The authors interviewed 40 urban patients of low socioeconomic status, who consistently cited the 24-hour, no-appointment-necessary structure of the emergency department as an advantage over primary care. The appeal of such flexible access makes sense if one reflects on how difficult it is for even high-functioning individuals to schedule and keep medical appointments.
Specific reasons for preferring the emergency department included the following:
Affordability. Even if their insurance fully paid for visits to their primary care physicians, the primary care physician was likely to refer them to specialists, whose visits required a copay and another day off work. The emergency department is cheaper for the patient, and it is a “one-stop shop.” Patients appreciated the emergency department’s guarantee of seeing a physician regardless of proof of insurance, a policy not guaranteed in primary care and specialist offices.
Accessibility. For those without a car, public transportation and even patient transportation services are inconvenient and unreliable, whereas emergency medical services will take you to the emergency department.
Accommodations. Although medical centers may tout their same-day appointments, often same-day appointments are all that they have—and you have no choice about the time. You have to call first thing in the morning and stay on hold for a long time, and then when you finally get through, all the same-day appointments are gone.
Availability. Patients said they often had a hard time getting timely medical advice from their primary care physicians. When they could get through to their primary care physicians on the phone, they would be told to go to the emergency department.
Acceptability. Men, especially, feel they need to be very sick indeed to seek medical care, so going to the emergency department is more acceptable.
Trust in the provider. For reasons that were not entirely clear, patients felt that acute care providers were more trustworthy, competent, and compassionate than primary care physicians.5
None of these reasons for using the emergency department has anything to do with disease severity, which supports the findings that high users of the emergency department were not as sick as their normal-use peers.2
QUALITY IMPROVEMENT AND COST-REDUCTION STRATEGIES
Efforts are being made to reduce the cost of healthcare for high users while improving the quality of their care. Promising strategies focus on coordinating care management, creating individualized patient care plans, and improving the components and instructions of discharge summaries.
Care management organizations
A care management organization (CMO) model has emerged as a strategy for quality improvement and cost reduction in the high-use population. In this model, social workers, health coaches, nurses, mid-level providers, and physicians collaborate on designing individualized care plans to meet the specific needs of patients.
Teams typically work in stepwise fashion, first identifying and engaging patients at high risk of poor outcomes and unnecessary care, often using sophisticated quantitative, risk-prediction tools. Then, they perform health assessments and identify potential interventions aimed at preventing expensive acute-care medical interventions. Third, they work with patients to rapidly identify and effectively respond to changes in their conditions and direct them to the most appropriate medical setting, typically primary or urgent care.
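The first step above, identifying patients at high risk of poor outcomes and unnecessary care, can be illustrated with a minimal sketch. This is a hypothetical illustration only: real CMOs use validated statistical risk-prediction models, and the fields, weights, and threshold below are assumptions invented for the example.

```python
# Hypothetical sketch of risk stratification for care-plan outreach.
# All weights and thresholds are illustrative, not validated.
from dataclasses import dataclass

@dataclass
class PatientYear:
    ed_visits: int            # emergency department visits, past 12 months
    admissions: int           # inpatient admissions, past 12 months
    has_mental_illness: bool
    is_homeless: bool

def risk_score(p: PatientYear) -> int:
    """Crude additive score; higher means higher priority for outreach."""
    score = 0
    score += 2 * min(p.ed_visits // 5, 4)   # cap the ED-visit contribution
    score += 3 * min(p.admissions, 5)       # admissions weigh heavily
    score += 2 if p.has_mental_illness else 0
    score += 3 if p.is_homeless else 0      # social determinants also count
    return score

def flag_for_care_plan(p: PatientYear, threshold: int = 8) -> bool:
    """Flag the patient for health assessment and an individualized care plan."""
    return risk_score(p) >= threshold

# 12 ED visits and 3 admissions in a year, with comorbid mental illness
print(flag_for_care_plan(PatientYear(12, 3, True, False)))  # True
```

The design point is simply that the flag combines utilization history with social determinants, mirroring the factors (mental illness, homelessness, low socioeconomic status) that the studies above associate with high use.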
Effective models
In 1998, the Camden (NJ) Coalition of Healthcare Providers established a model for CMO care plans. Starting with the first 36 patients enrolled in the program, hospital admissions and emergency department visits were cut by 47% (from 62 to 37 per month), and collective hospital costs were cut by 56% (from $1.2 million to about $500,000 per month).8 It should be noted that this was a small, nonrandomized study and these preliminary numbers did not take into account the cost of outpatient physician visits or new medications. Thus, how much money this program actually saves is not clear.
Similar programs have had similar results. A nurse-led care coordination program in Doylestown, PA, showed an impressive 25% reduction in annual mortality and a 36% reduction in overall costs during a 10-year period.9
A program in Atlantic City, NJ, combined the typical CMO model with a primary care clinic to provide high users with unlimited access, while paying its providers in a capitation model (as opposed to fee for service). It achieved a 40% reduction in yearly emergency department visits and hospital admissions.8
Patient care plans
Individualized patient care plans for high users are among the most promising tools for reducing costs and improving quality in this group. They are low-cost and relatively easy to implement. The goal of these care plans is to provide practitioners with a concise care summary to help them make rational and consistent medical decisions.
Typically, a care plan is written by an interdisciplinary committee composed of physicians, nurses, and social workers. It is based on the patient’s pertinent medical and psychiatric history, which may include recent imaging results or other relevant diagnostic tests. It provides suggestions for managing complex chronic issues, such as drug abuse, that lead to high use of healthcare resources.
These care plans provide a rational and prespecified approach to workup and management, typically including a narcotic prescription protocol, regardless of the setting or the number of providers who see the patient. Practitioners guided by effective care plans are far more likely to navigate a complex patient encounter effectively than those left to comb through extensive medical notes hoping to find relevant information.
Effective models
Data show these plans can be effective. For example, Regions Hospital in St. Paul, MN, implemented patient care plans in 2010. During the first 4 months, hospital admissions in the first 94 patients were reduced by 67%.10
A study of high users at Duke University Medical Center reported similar results. One year after starting care plans, inpatient admissions had decreased by 50.5%, readmissions had decreased by 51.5%, and variable direct costs per admission were reduced by 35.8%. Paradoxically, emergency department visits went up, but this anomaly was driven by 134 visits incurred by a single dialysis patient. After removing this patient from the data, emergency department visits were relatively stable.4
Better discharge summaries
Although improving discharge summaries is not a novel concept, changing the summary from a historical document to a proactive discharge plan has the potential to prevent readmissions and promote a durable de-escalation in care acuity.
For example, when moving a patient to a subacute care facility, providing a concise summary of which treatments worked and which did not, a list of comorbidities, and a list of medications and strategies to consider, can help the next providers to better target their plan of care. Studies have shown that nearly half of discharge statements lack important information on treatments and tests.11
Improvement can be as simple as encouraging practitioners to construct their summaries in an “if-then” format. Instead of noting for instance that “Mr. Smith was treated for pneumonia with antibiotics and discharged to a rehab facility,” the following would be more useful: “Family would like to see if Mr. Smith can get back to his functional baseline after his acute pneumonia. If he clinically does not do well over the next 1 to 2 weeks and has a poor quality of life, then family would like to pursue hospice.”
In addition to shifting the philosophy, we believe that providing timely discharge summaries is a fundamental, high-yield aspect of ensuring their effectiveness. As an example, patients being discharged to a skilled nursing facility should have a discharge summary completed and in hand before leaving the hospital.
Evidence suggests that timely writing of discharge summaries improves their quality. In a retrospective cohort study published in 2012, discharge summaries created more than 24 hours after discharge were less likely to include important plan-of-care components.12
FUTURE NEEDS
Randomized trials
Although initial results have been promising for the strategies outlined above, much of the apparent cost reduction of these interventions may be at least partially related to the study design as opposed to the interventions themselves.
For example, Hong et al13 examined 18 of the more promising CMOs that had reported initial cost savings. Of these, only 4 had conducted randomized controlled trials. When broken down further, the initial cost reduction reported by most of these randomized controlled trials was generated primarily by small subgroups.14
These results, however, do not necessarily reflect an inherent failure in the system. We contend that they merely demonstrate that CMOs and care plan administrators need to be more selective about whom they enroll, either by targeting patients at the extremes of the usage curve or by identifying patient characteristics and usage parameters amenable to cost reduction and quality improvement strategies.
Better social infrastructure
Although patient care plans and CMOs have been effective in managing high users, we believe that the most promising quality improvement and cost-reduction strategy involves redirecting much of the expensive healthcare spending to the social determinants of health (eg, homelessness, mental illness, low socioeconomic status).
Among developed countries, the United States has the highest healthcare spending and the lowest social service spending as a percentage of its gross domestic product (Figure 1).15 Although seemingly discouraging, these data can actually be interpreted as hopeful, as they support the notion that the inefficiencies of our current system are not part of an inescapable reality, but rather reflect a system that has evolved uniquely in this country.
Using the available social programs
Exemplifying this medical and social services balance is a high user who visited her local emergency department 450 times in 1 year for reasons primarily related to homelessness.16 Each time, the medical system (as it is currently designed to do) applied a short-term medical solution to this patient’s problems and discharged her home, ie, back to the street.
But this patient’s high use was really a manifestation of a deeper social issue: homelessness. When the medical staff eventually noted how much this lack of stable shelter was contributing to her pattern of use, she was referred to appropriate social resources and provided with the housing she needed. Her hospital visits decreased from 450 to 12 in the subsequent year, amounting to a huge cost reduction and a clear improvement in her quality of life.
Similarly encouraging results have been reported when available social programs are applied to the high-use population at large, which is particularly reassuring given this population’s preponderance of low socioeconomic status, mental illness, and homelessness (the prevalence of homelessness among high users is roughly 20%, depending on how a high user is defined).
New York Medicaid, for example, has a housing program that provides stable shelter outside of acute care medical settings for patients at a rate as low as $50 per day, compared with area hospital costs that often exceed $2,200 daily.17 A similar program in Westchester County, NY, reported a 45.9% reduction in inpatient costs and a 15.4% reduction in emergency department visits among 61 of its highest users after 2 years of enrollment.17
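The asymmetry between the two daily rates quoted above makes the arithmetic stark; a back-of-the-envelope sketch using only those two figures:

```python
# Back-of-the-envelope comparison of the daily rates cited above.
housing_per_day = 50       # supportive housing, cost per day
hospital_per_day = 2_200   # area hospital, cost per day

# Days of housing funded by avoiding a single hospital day
days_per_avoided_day = hospital_per_day / housing_per_day   # 44.0

# A full year of housing costs less than about 8.3 hospital days
year_of_housing = 365 * housing_per_day                     # $18,250
breakeven_hospital_days = year_of_housing / hospital_per_day  # ~8.3

print(days_per_avoided_day, round(breakeven_hospital_days, 1))
```

In other words, at these rates the housing program pays for itself if it prevents even about 8 inpatient days per patient per year, a low bar for someone previously visiting the hospital hundreds of times annually.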
Need to reform privacy laws
Although legally daunting, reform of the Health Insurance Portability and Accountability Act (HIPAA) and other privacy laws in favor of a more open model of information sharing, particularly for high-risk patients, holds great opportunity for quality improvement. For patients who obtain their care from several healthcare facilities, the documentation is often inscrutable. If some of the HIPAA regulations and other patient privacy laws were exchanged for rules more akin to the current model of narcotic prescription tracking, for example, physicians would be better equipped to provide safe, organized, and efficient medical care for high-use patients.
Need to reform the system
A fundamental flaw in our healthcare system, which is largely based on a fee-for-service model, is that it was not designed for patients who use the system at the highest frequency and greatest cost, and it does not account for the psychosocial factors that beset many high-use patients. For the safety of our patients and the viability of the healthcare system, we must change our historical way of thinking and reform a system that provides high users with care that is high-cost, low-quality, and not patient-centered.
IMPROVING QUALITY, REDUCING COST
High users of emergency services are a medically and socially complex group, predominantly characterized by low socioeconomic status and high rates of mental illness and drug dependency. Despite their increased healthcare use, they do not have better outcomes even though they are not sicker. Improving those outcomes requires both medical and social efforts.
Among the effective medical efforts are strategies aimed at creating individualized patient care plans, using coordinated care teams, and improving discharge summaries. Addressing patients’ social factors, such as homelessness, is more difficult, but healthcare systems can help patients navigate the available social programs. These strategies are part of a comprehensive care plan that can help reduce the cost and improve the quality of healthcare for high users.
Better discharge summaries
Although improving discharge summaries is not a novel concept, changing the summary from a historical document to a proactive discharge plan has the potential to prevent readmissions and promote a durable de-escalation in care acuity.
For example, when moving a patient to a subacute care facility, providing a concise summary of which treatments worked and which did not, a list of comorbidities, and a list of medications and strategies to consider can help the next providers better target their plan of care. Studies have shown that nearly half of discharge summaries lack important information on treatments and tests.11
Improvement can be as simple as encouraging practitioners to construct their summaries in an “if-then” format. Instead of noting, for instance, that “Mr. Smith was treated for pneumonia with antibiotics and discharged to a rehab facility,” the following would be more useful: “Family would like to see if Mr. Smith can get back to his functional baseline after his acute pneumonia. If he clinically does not do well over the next 1 to 2 weeks and has a poor quality of life, then family would like to pursue hospice.”
In addition to shifting the philosophy, we believe that providing timely discharge summaries is a fundamental, high-yield aspect of ensuring their effectiveness. As an example, patients being discharged to a skilled nursing facility should have a discharge summary completed and in hand before leaving the hospital.
Evidence suggests that timely writing of discharge summaries improves their quality. In a retrospective cohort study published in 2012, discharge summaries created more than 24 hours after discharge were less likely to include important plan-of-care components.12
FUTURE NEEDS
Randomized trials
Although initial results have been promising for the strategies outlined above, much of the apparent cost reduction of these interventions may be at least partially related to the study design as opposed to the interventions themselves.
For example, Hong et al13 examined 18 of the more promising CMOs that had reported initial cost savings. Of these, only 4 had conducted randomized controlled trials. When broken down further, the initial cost reduction reported by most of these randomized controlled trials was generated primarily by small subgroups.14
These results, however, do not necessarily reflect an inherent failure in the system. We contend that they merely demonstrate that CMOs and care plan administrators need to be more selective about whom they enroll, either by targeting patients at the extremes of the usage curve or by identifying patient characteristics and usage parameters amenable to cost reduction and quality improvement strategies.
Better social infrastructure
Although patient care plans and CMOs have been effective in managing high users, we believe that the most promising quality improvement and cost-reduction strategy involves redirecting much of the expensive healthcare spending to the social determinants of health (eg, homelessness, mental illness, low socioeconomic status).
Among developed countries, the United States has the highest healthcare spending and the lowest social service spending as a percentage of its gross domestic product (Figure 1).15 Although seemingly discouraging, these data can actually be interpreted as hopeful, as they support the notion that the inefficiencies of our current system are not part of an inescapable reality, but rather reflect a system that has evolved uniquely in this country.
Using the available social programs
Exemplifying this medical and social services balance is a high user who visited her local emergency department 450 times in 1 year for reasons primarily related to homelessness.16 Each time, the medical system (as it is currently designed to do) applied a short-term medical solution to this patient’s problems and discharged her home, ie, back to the street.
But this patient’s high use was really a manifestation of a deeper social issue: homelessness. When the medical staff eventually noted how much this lack of stable shelter was contributing to her pattern of use, she was referred to appropriate social resources and provided with the housing she needed. Her hospital visits decreased from 450 to 12 in the subsequent year, amounting to a huge cost reduction and a clear improvement in her quality of life.
Similar encouraging results have been seen when available social programs are applied to the high-use population at large, which is particularly reassuring given this population’s preponderance of low socioeconomic status, mental illness, and homelessness (the prevalence of homelessness in this population is roughly 20%, depending on how a high user is defined).
New York Medicaid, for example, has a housing program that provides stable shelter outside of acute care medical settings for patients at a rate as low as $50 per day, compared with area hospital costs that often exceed $2,200 daily.17 A similar program in Westchester County, NY, reported a 45.9% reduction in inpatient costs and a 15.4% reduction in emergency department visits among 61 of its highest users after 2 years of enrollment.17
Need to reform privacy laws
Although legally daunting, reform of the Health Insurance Portability and Accountability Act (HIPAA) and other privacy laws in favor of a more open model of information sharing, particularly for high-risk patients, holds great opportunity for quality improvement. For patients who obtain their care from several healthcare facilities, the documentation is often inscrutable. If some of the HIPAA regulations and other patient privacy laws were exchanged for rules more akin to the current model of narcotic prescription tracking, for example, physicians would be better equipped to provide safe, organized, and efficient medical care for high-use patients.
Need to reform the system
A fundamental flaw in our healthcare system, which is largely based on a fee-for-service model, is that it was not designed for patients who use the system at the highest frequency and greatest cost. Also, it does not account for the psychosocial factors that beset many high-use patients. As such, it is imperative for the safety of our patients as well as the viability of the healthcare system that we change our historical way of thinking and reform this system that provides high users with care that is high-cost, low-quality, and not patient-centered.
IMPROVING QUALITY, REDUCING COST
High users of emergency services are a medically and socially complex group, predominantly characterized by low socioeconomic status and high rates of mental illness and drug dependency. They are not sicker than other patients, yet despite their increased healthcare use, their outcomes are no better. Improving those outcomes requires both medical and social efforts.
Among the effective medical efforts are strategies aimed at creating individualized patient care plans, using coordinated care teams, and improving discharge summaries. Addressing patients’ social factors, such as homelessness, is more difficult, but healthcare systems can help patients navigate the available social programs. These strategies are part of a comprehensive care plan that can help reduce the cost and improve the quality of healthcare for high users.
- Cohen SB; Agency for Healthcare Research and Quality. Statistical Brief #359. The concentration of health care expenditures and related expenses for costly medical conditions, 2009. http://meps.ahrq.gov/mepsweb/data_files/publications/st359/stat359.pdf. Accessed December 18, 2017.
- Oostema J, Troost J, Schurr K, Waller R. High and low frequency emergency department users: a comparative analysis of morbidity, diagnostic testing, and health care costs. Ann Emerg Med 2011; 58:S225. Abstract 142.
- Szekendi MK, Williams MV, Carrier D, Hensley L, Thomas S, Cerese J. The characteristics of patients frequently admitted to academic medical centers in the United States. J Hosp Med 2015; 10:563–568.
- Mercer T, Bae J, Kipnes J, Velazquez M, Thomas S, Setji N. The highest utilizers of care: individualized care plans to coordinate care, improve healthcare service utilization, and reduce costs at an academic tertiary care center. J Hosp Med 2015; 10:419–424.
- Kangovi S, Barg FK, Carter T, Long JA, Shannon R, Grande D. Understanding why patients of low socioeconomic status prefer hospitals over ambulatory care. Health Aff (Millwood) 2013; 32:1196–1203.
- Melander I, Winkelman T, Hilger R. Analysis of high utilizers’ experience with specialized care plans. J Hosp Med 2014; 9(suppl 2):Abstract 229.
- LaCalle EJ, Rabin EJ, Genes NG. High-frequency users of emergency department care. J Emerg Med 2013; 44:1167–1173.
- Gawande A. The Hot Spotters. The New Yorker. January 24, 2011. www.newyorker.com/magazine/2011/01/24/the-hot-spotters. Accessed December 18, 2017.
- Coburn KD, Marcantonio S, Lazansky R, Keller M, Davis N. Effect of a community-based nursing intervention on mortality in chronically ill older adults: a randomized controlled trial. PLoS Med 2012; 9:e1001265.
- Hilger R, Melander I, Winkelman T. Is specialized care plan work sustainable? A follow-up on HealthPartners’ experience with patients who are high utilizers. Society of Hospital Medicine Annual Meeting, Las Vegas, NV. March 24-27, 2014. www.shmabstracts.com/abstract/is-specialized-care-plan-work-sustainable-a-followup-on-healthpartners-experience-with-patients-who-are-highutilizers. Accessed December 18, 2017.
- Kripalani S, LeFevre F, Phillips CO, Williams MV, Basaviah P, Baker DW. Deficits in communication and information transfer between hospital-based and primary care physicians: implications for patient safety and continuity of care. JAMA 2007; 297:831–841.
- Kind AJ, Thorpe CT, Sattin JA, Walz SE, Smith MA. Provider characteristics, clinical-work processes and their relationship to discharge summary quality for sub-acute care patients. J Gen Intern Med 2012; 27:78–84.
- Hong CS, Siegel AL, Ferris TG. Caring for high-need, high-cost patients: what makes for a successful care management program? Issue Brief (Commonwealth Fund) 2014; 19:1–19.
- Williams B. Limited effects of care management for high utilizers on total healthcare costs. Am J Manag Care 2015; 21:e244–e246.
- Organization for Economic Co-operation and Development. Health at a Glance 2009: OECD Indicators. Paris, France: OECD Publishing; 2009.
- Emeche U. Is a strategy focused on super-utilizers equal to the task of health care system transformation? Yes. Ann Fam Med 2015; 13:6–7.
- Burns J. Do we overspend on healthcare, underspend on social needs? Managed Care. http://ghli.yale.edu/news/do-we-overspend-health-care-underspend-social-needs. Accessed December 18, 2017.
KEY POINTS
- The top 5% of the population in terms of healthcare use account for 50% of costs. The top 1% account for 23% of all expenditures and cost 10 times more per year than the average patient.
- Drug addiction, mental illness, and poverty often accompany and underlie high-use behavior, particularly in patients without end-stage medical conditions.
- Comprehensive patient care plans and care management organizations are among the most effective strategies for cost reduction and quality improvement.
Disorders of diminished motivation: What they are, and how to treat them
Disorders of diminished motivation (DDM)—including apathy, abulia, and akinetic mutism—are characterized by impairment in goal-directed behavior, thought, and emotion.1 These disorders can be observed clinically as a gross underproduction of speech, movement, and emotional response.
DDM are not classified as disorders within DSM-5, and it remains unclear if they are distinct disorders or symptoms that overlap in other conditions. Some sources support distinct diagnoses, while the traditional position is that DDM are variations along a spectrum, with apathy as the mildest form and akinetic mutism as the most severe form (Figure).1-3 DDM can result from various neurologic, medical, psychiatric, socioeconomic, and drug-induced pathologies, and may represent differing severity of the same underlying pathology.1,4 It is postulated that DDM arise from disruptions in the dopaminergic frontal-subcortical-mesolimbic networks.1,4
We present 2 cases of patients who developed distinct phenotypes within DDM. Despite differences in presentation and symptom severity, both patients showed clinical improvement on methylphenidate (one of several treatment options), as assessed by the Neuropsychiatric Inventory (NPI),5 a scale used to measure dementia-related behavioral symptoms that includes an Apathy/Indifference (A/I) subscale.
CASE 1
Apathy secondary to glioblastoma multiforme
Ms. E, age 59, presents with wound drainage 3 weeks after a repeat right craniotomy for recurrent glioblastoma multiforme (GBM) of the temporal lobe. Her medical history is not believed to have contributed to her current presentation.
On hospital day 2, Ms. E undergoes debridement and reclosure at the craniotomy site. Prior to the procedure, she is noted to have anhedonia and a flat affect. Her family reports that she seems to get little enjoyment from life and “only slept and ate.” Psychiatry is consulted on hospital day 3 for evaluation and management of a perceived depressed mood.
On initial psychiatric evaluation, Ms. E continues to have a constricted affect with delayed psychomotor processing speed. However, she denies dysphoria or anhedonia. Richmond Agitation-Sedation Scale6 score is 0 (alert and calm) and test of sustained attention (‘Vigilant A’) is intact (ie, based on the Confusion Assessment Method for the Intensive Care Unit [CAM-ICU],7 Ms. E does not have delirium). The NPI A/I frequency score is 15, with a severity score of 3, for a total score of 45, indicating moderate behavioral disturbance on the NPI A/I subsection. A diagnosis of neuropsychiatric apathy due to recurrent GBM or craniotomy is made, although substance-induced mood disorder due to concurrent dexamethasone and opiate use is considered. Methylphenidate, 2.5 mg/d, is started, and Ms. E’s blood pressure remains stable with the initial dose.
Methylphenidate is titrated to 5 mg, twice daily, over a 1-week period. Ms. E’s NPI A/I subscale score improves to 3 (mild behavioral problem), with 3 points for frequency and a multiplier of 1 for mild severity, reflecting an improvement in neuropsychiatric apathy, and she is transferred to a long-term care rehabilitation center.
CASE 2
Akinetic mutism secondary to subarachnoid hemorrhage
Ms. G, age 47, is brought to an outside hospital with syncope and a severe headache radiating to her neck. Upon arrival, she is unconscious and requires intubation. A non-contrast head CT scan shows diffuse subarachnoid hemorrhage, 6 mm right midline shift, and a small left frontal subdural hematoma. A CT angiography of her head and neck reveals a 0.7 cm anterior paraclinoid left internal carotid artery aneurysm with ophthalmic involvement. Evidence of underlying left and right carotid fibromuscular dysplasia is also seen. Ms. G is transferred to our facility for neurosurgical intervention.
Neurosurgery proceeds with aneurysm coiling, followed by left craniotomy with subdural evacuation and ventriculostomy placement. Her postoperative course is complicated by prolonged nasogastric hyperalimentation, mild hypernatremia and hyperglycemia, tracheostomy, and recurrent central fever. She also develops persistent vasospasm, which requires balloon angioplasty of the left middle cerebral artery.
The psychiatry team is consulted on postoperative day 29 to assess for delirium. The CAM-ICU is positive for delirium, with nocturnal accentuation of agitation. Ms. G demonstrates paucity of speech and minimal verbal comprehension. She starts oral ziprasidone, 5 mg/d at bedtime. In addition to her original CNS insult, scopolamine patch, 1.5 mg, to decrease respiratory secretions, and IV metronidazole, 500 mg every 8 hours, for skin-site infection, may have been contributing to her delirium.
Ms. G’s delirium quickly resolves; however, on day 32 she continues to demonstrate behavioral and cognitive slowing. The NPI A/I frequency score is 28, with a severity score of 3, for a total score of 84, indicating severe behavioral disturbance on the NPI A/I subsection. Methylphenidate, 2.5 mg/d, is started and the next day is increased to 5 mg twice a day to treat severe akinetic mutism. Ms. G also is switched from ziprasidone to olanzapine, 2.5 mg/d at night.
By day 37, the tracheostomy is decannulated, and Ms. G demonstrates a full level of alertness, awareness, and attention. Her affect is full range and appropriate; however, she demonstrates residual language deficits, including dysnomia. On day 38, Ms. G is discharged with an NPI A/I subscale score of 5, indicating a mild behavioral problem.
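In both cases, the reported NPI A/I subscale score is the product of the frequency rating and the severity multiplier. The following minimal sketch reproduces that arithmetic using only the values reported above; the multiplication rule is inferred from these cases rather than quoted from the NPI manual, and the function name is ours.

```python
def npi_ai_score(frequency: int, severity: int) -> int:
    """NPI Apathy/Indifference subscale score as applied in these cases:
    frequency rating multiplied by severity multiplier (inferred rule)."""
    return frequency * severity

# Case 1 at initial consultation: frequency 15, severity 3
print(npi_ai_score(15, 3))  # 45 (moderate behavioral disturbance)
# Case 1 after methylphenidate titration: frequency 3, severity 1
print(npi_ai_score(3, 1))   # 3 (mild behavioral problem)
# Case 2 on day 32: frequency 28, severity 3
print(npi_ai_score(28, 3))  # 84 (severe behavioral disturbance)
```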
What these cases demonstrate about DDM
These 2 cases are part of a larger, emerging conversation about the role of dopamine in DDM. Although not fully elucidated, the pathophysiology of abulia, apathy, and akinetic mutism is thought to be related to multiple neurotransmitters—especially dopamine—involved in the cortico-striatal-pallidal-thalamic network.1,8 This position has been supported by reports of clinical improvement in patients with DDM who are given dopaminergic agonists (Table 1).3,9-32
The clinical improvement seen in both of our patients after initiating methylphenidate is consistent with previous reports.10-13 Methylphenidate was selected because of its favorable adverse effect profile and potentially rapid onset of action in DDM.10-13 In cases where oral medication cannot be administered, such as in patients with akinetic mutism, short-term adjunctive IM olanzapine may be helpful, although it is not a first-line treatment.3,15
Interestingly, both of our patients showed improvement with low doses of methylphenidate. Ms. E showed rapid improvement at 2.5 mg/d, but eventually was increased to 10 mg/d. For Ms. G, who demonstrated severe akinetic mutism, rapid improvement was noted after the initial 2.5 mg/d dose; however, because of reports of efficacy of olanzapine in treating akinetic mutism, it is possible that these medications worked synergistically. The proposed mechanism of action of olanzapine in akinetic mutism is through increased dopamine transmission in the medial prefrontal cortex.3,15 Ms. G’s methylphenidate dose was increased to 5 mg/d, which was still “subtherapeutic,” because most reports have used dosages ranging from 10 to 40 mg/d.10-13 Although there were favorable acute results in both patients, their long-term requirements are unknown because of a lack of follow-up. Our findings are also limited by the fact that both patients were recovering from neurosurgical procedures, which could lead to natural improvement in symptoms over time.
Prevalence of DDM in psychiatric disorders
The successful treatment of DDM with dopaminergic drugs is meaningful because of the coexistence of DDM in various neuropsychiatric conditions. In Alzheimer’s disease (AD), disturbances in the dopaminergic system may explain the high comorbidity of apathy, which ranges from 47% in mild AD to 80% in moderate AD.33 In the dopamine-reduced states of cocaine and amphetamine withdrawal, 67% of patients report apathy and lack of motivation.8,34 Additionally, the prevalence of apathy is reported at 27% in Parkinson’s disease, 43% in mild cognitive impairment, 70% in mixed dementia, 94% in a major depressive episode, and 53% in schizophrenia.35 In schizophrenia with predominantly negative symptoms, in vivo and postmortem studies have found reduced dopamine receptors.8 Meanwhile, the high rate of akinetic mutism in Creutzfeldt-Jakob disease allows for its use as a reliable diagnostic criterion in this disorder.36
However, the prevalence of DDM is best documented as it relates to stroke and traumatic brain injury (TBI). For instance, after experiencing a stroke, 20% to 25% of patients suffer from apathy.37 Many case reports describe abulia and akinetic mutism after cerebral infarction or hemorrhage, although the incidence of these disorders is unknown.2,38-40 Apathy following TBI is common, especially in younger patients who have sustained a severe injury.41 One study evaluated the prevalence of apathy after TBI among 83 consecutive patients in a neuropsychiatric clinic. Of the 83 patients, 10.84% had apathy without depression, and an equal number were depressed without apathy; another 60% of patients exhibited both apathy and depression. Younger patients (mean age, 29 years) were more likely to be apathetic than older patients, who were more likely to be depressed (mean age, 42 years) or both depressed and apathetic (mean age, 38 years).41 Interestingly, DDM often are associated with cerebral lesions in distinct and distant anatomical locations that are not clearly connected to the neural circuits of motivational pathways. This phenomenon may be explained by the concept of diaschisis, which states that injury to one part of an interconnected neural network can affect other, separate parts of that network.2 If this concept is accurate, it may broaden the impact of DDM, especially as it relates to stroke and TBI.
The differential diagnosis of DDM includes depression and hypokinetic delirium (Table 2).1,3,42-50 A potential overlapping but confounding condition is stuporous catatonia, with symptoms that include psychomotor slowing such as immobility, staring, and stupor.47 It is important to differentiate these disorders because the treatment for each differs. As previously discussed, there is a clear role for dopamine receptor agonists in the treatment of DDM.
Although major depressive disorder can be treated with medications that increase dopaminergic transmission, selective serotonin reuptake inhibitors (SSRIs) are more commonly used as first-line agents.44 However, an SSRI would theoretically be contraindicated in DDM, because increased serotonin transmission decreases dopamine release from the midbrain; therefore, an SSRI may not only fail to improve DDM but may worsen it.48 Finally, although delirium is treated with atypical or conventional antipsychotics via dopamine type 2 receptor antagonism,45 stuporous catatonia is preferentially treated with gamma-aminobutyric acid-A receptor agonists such as lorazepam.50
What to do when your patient’s presentation suggests DDM
Assessment of DDM should be structured, with input from the patient and the caregiver, and should incorporate the physician’s perspective. A history should be obtained using recently revised diagnostic criteria for apathy. The 3 core domains of apathy—behavior, cognition, and emotion—need to be evaluated. The revised criteria are based on the premise that change in motivation can be measured by examining a patient’s responsiveness to internal or external stimuli. Therefore, each of the 3 domains includes 2 symptoms: (1) self-initiated or “internal” behaviors, cognitions, and emotions (initiation symptom), and (2) the patient’s responsiveness to “external” stimuli (responsiveness symptom).51
One of the main diagnostic dilemmas is how to separate DDM from depression. The differentiation is difficult because of substantial overlap in the manifestation of key symptoms, such as a lack of interest, anergia, psychomotor slowing, and fatigue. Caregivers often mistakenly describe DDM as a depressive state, even though a lack of sadness, desperation, crying, and a depressive mood distinguish DDM from depression. Usually, DDM patients lack negative thoughts, emotional distress, sadness, vegetative symptoms, and somatic concerns, which are frequently observed in mood disorders.51
Several instruments have been developed for assessing neuropsychiatric symptoms. Some were specifically designed to measure apathy, whereas others were designed to provide a broader neuropsychiatric assessment. The NPI is the most widely used multidimensional instrument for assessing neuropsychiatric functioning in patients with neurocognitive disorders (NCDs). It is a valid, reliable instrument that consists of an interview of the patient’s caregiver. It is designed to assess the presence and severity of 10 symptoms, including apathy. The NPI includes both apathy and depression items, which can help clinicians distinguish the 2 conditions. Although beyond the scope of this article, more recent standardized instruments that can assess DDM include the Apathy Inventory, the Dementia Apathy Interview and Rating, and the Structured Clinical Interview for Apathy.52
As previously mentioned, researchers have proposed that DDM are simply a continuum of severity of reduced behavior, and akinetic mutism may be the extreme form. The dilemma is how to formally diagnose states of abulia and akinetic mutism, given the lack of diagnostic criteria and paucity of standardized instruments. Thus, distinguishing between abulia and akinetic mutism (and apathy) is more of a quantitative than qualitative exercise. One could hypothesize that higher scores on a standardized scale to measure apathy (ie, NPI) could imply abulia or akinetic mutism, although to the best of our knowledge, no formal “cut-off scores” exist.53
Treatment of apathy. The optimal duration of pharmacotherapy for apathy is unknown, and use of these agents is off-label. Further studies, including double-blind, randomized controlled trials (RCTs), are needed. Nonetheless, the 2 classes of medications with the most evidence for treating apathy/DDM are psychostimulants and acetylcholinesterase inhibitors (AChEIs).
AChEIs are primarily used for treating cognitive symptoms in NCDs, but recent findings indicate that they have beneficial effects on noncognitive symptoms such as apathy. Of all medications used to treat apathy in NCDs, AChEIs have been used to treat the largest number of patients. Of 26 studies, 24 demonstrated improvement in apathy, with 21 demonstrating statistical significance. These studies ranged in duration from 8 weeks to 1 year, and most were open-label.54
Five studies (3 RCTs and 2 open-label studies) assessed the efficacy of methylphenidate for treating apathy due to AD. All the studies demonstrated at least some benefit in apathy scores after treatment with methylphenidate. These studies ranged from 5 to 12 weeks in duration. Notably, some patients reported adverse effects, including delusions and irritability.54
Although available evidence suggests AChEIs may be the most effective medications for treating apathy in NCDs, methylphenidate has been demonstrated to work faster.55 Thus, in cases where apathy significantly affects activities of daily living or instrumental activities of daily living, the need for a quicker response may favor treatment with methylphenidate. Safety studies and larger double-blind RCTs are needed to further establish the effectiveness and safety of methylphenidate.
Published in 2007, the American Psychiatric Association (APA) guidelines56 state that psychostimulants are a possible treatment option for patients with severe apathy. At the same time, clinicians are reminded that these agents—especially at higher doses—can produce various problematic adverse effects, including tachycardia, hypertension, restlessness, dyskinesia, agitation, sleep disturbances, psychosis, confusion, and decreased appetite. The APA guidelines recommend using low initial doses, with slow and careful titration. For example, methylphenidate should be started at 2.5 to 5 mg once in the morning, with daily doses not to exceed 30 to 40 mg. In our clinical experience, doses >20 mg/d have not been necessary.57
Treatment of akinetic mutism and abulia. In patients with akinetic mutism and possible abulia for whom oral medication administration is impossible or contraindicated (eg, due to the potential risk of aspiration pneumonia), atypical antipsychotics, such as IM olanzapine, have produced a therapeutic response in apathetic patients with NCDs. However, extensive use of antipsychotics in NCDs is not recommended because this class of medications has been associated with serious adverse effects, including an increased risk of death.55
Bottom Line
Apathy, abulia, and akinetic mutism have been categorized as disorders of diminished motivation (DDM). They commonly present after a stroke or traumatic brain injury, and should be differentiated from depression, hypokinetic delirium, and stuporous catatonia. DDM can be successfully treated with dopamine agonists.
Related Resources
- Barnhart WJ, Makela EH, Latocha MJ. SSRI-induced apathy syndrome: a clinical review. J Psychiatr Pract. 2004;10(3):196-199.
- Dell’Osso B, Benatti B, Altamura AC, et al. Prevalence of selective serotonin reuptake inhibitor-related apathy in patients with obsessive compulsive disorder. J Clin Psychopharmacol. 2016;36(6):725-726.
- D’Souza G, Kakoullis A, Hegde N, et al. Recognition and management of abulia in the elderly. Prog Neurol Psychiatry. 2010;14(6):24-28.
Drug Brand Names
Bromocriptine • Parlodel
Bupropion • Wellbutrin XL, Zyban
Carbidopa • Lodosyn
Dexamethasone • DexPak, Ozurdex
Donepezil • Aricept
Levodopa/benserazide • Prolopa
Levodopa/carbidopa • Parcopa, Rytary, Sinemet
Lorazepam • Ativan
Methylphenidate • Concerta, Methylin
Metronidazole • Flagyl, Metrogel
Modafinil • Provigil
Olanzapine • Zyprexa
Pramipexole • Mirapex
Rivastigmine • Exelon
Ropinirole • Requip
Rotigotine • Neupro
Scopolamine • Transderm Scop
Ziprasidone • Geodon
1. Marin RS, Wilkosz PA. Disorders of diminished motivation. J Head Trauma Rehabil. 2005;20(4):377-388.
2. Ghoshal S, Gokhale S, Rebovich G, et al. The neurology of decreased activity: abulia. Rev Neurol Dis. 2011;8(3-4):e55-e67.
3. Spiegel DR, Chatterjee A. A case of abulia, status/post right middle cerebral artery territory infarct, treated successfully with olanzapine. Clin Neuropharmacol. 2014;37(6):186-189.
4. Marin RS. Differential diagnosis and classification of apathy. Am J Psychiatry. 1990;147(1):22-30.
5. Cummings JL, Mega M, Gray K, et al. The Neuropsychiatric Inventory: comprehensive assessment of psychopathology in dementia. Neurology. 1994;44(12):2308-2314.
6. Sessler CN, Gosnell MS, Grap MJ, et al. The Richmond Agitation-Sedation Scale: validity and reliability in adult intensive care unit patients. Am J Respir Crit Care Med. 2002;166(10):1338-1344.
7. Ely EW, Margolin R, Francis J, et al. Evaluation of delirium in critically ill patients: validation of the Confusion Assessment Method for the intensive care unit (CAM-ICU). Crit Care Med. 2001;29(7):1370-1379.
8. Al-Adawi S, Dawe GS, Al-Hussaini AA. Aboulia: neurobehavioural dysfunction of dopaminergic system? Med Hypotheses. 2000;54(4):523-530.
9. Volkow ND, Fowler JS, Wang G, et al. Mechanism of action of methylphenidate: insights from PET imaging studies. J Atten Disord. 2002;6(suppl 1):S31-S43.
10. Chatterjee A, Fahn S. Methylphenidate treats apathy in Parkinson’s disease. J Neuropsychiatry Clin Neurosci. 2002;14(4):461-462.
11. Keenan S, Mavaddat N, Iddon J, et al. Effects of methylphenidate on cognition and apathy in normal pressure hydrocephalus: a case study and review. Br J Neurosurg. 2005;19(1):46-50.
12. Padala PR, Petty F, Bhatia SC. Methylphenidate may treat apathy independent of depression. Ann Pharmacother. 2005;39(11):1947-1949.
13. Padala PR, Burke WJ, Bhatia SC, et al. Treatment of apathy with methylphenidate. J Neuropsychiatry Clin Neurosci. 2007;19(1):81-83.
14. Li XM, Perry KW, Wong DT, et al. Olanzapine increases in vivo dopamine and norepinephrine release in rat prefrontal cortex, nucleus accumbens and striatum. Psychopharmacology (Berl). 1998;136(2):153-161.
15. Spiegel DR, Casella DP, Callender DM, et al. Treatment of akinetic mutism with intramuscular olanzapine: a case series. J Neuropsychiatry Clin Neurosci. 2008;20(1):93-95.
16. Citrome L. Activating and sedating adverse effects of second-generation antipsychotics in the treatment of schizophrenia and major depressive disorder: absolute risk increase and number needed to harm. J Clin Psychopharmacol. 2017;37(2):138-147.
17. Bakheit AM, Fletcher K, Brennan A. Successful treatment of severe abulia with co-beneldopa. NeuroRehabilitation. 2011;29(4):347-351.
18. Debette S, Kozlowski O, Steinling M, et al. Levodopa and bromocriptine in hypoxic brain injury. J Neurol. 2002;249(12):1678-1682.
19. Combarros O, Infante J, Berciano J. Akinetic mutism from frontal lobe damage responding to levodopa. J Neurol. 2000;247(7):568-569.
20. Echiverri HC, Tatum WO, Merens TA, et al. Akinetic mutism: pharmacologic probe of the dopaminergic mesencephalofrontal activating system. Pediatr Neurol. 1988;4(4):228-230.
21. Psarros T, Zouros A, Coimbra C. Bromocriptine-responsive akinetic mutism following endoscopy for ventricular neurocysticercosis. Case report and review of the literature. J Neurosurg. 2003;99(2):397-401.
22. Naik VD. Abulia following an episode of cardiac arrest [published online July 1, 2015]. BMJ Case Rep. doi: 10.1136/bcr-2015-209357.
23. Kim MS, Rhee JJ, Lee SJ, et al. Akinetic mutism responsive to bromocriptine following subdural hematoma evacuation in a patient with hydrocephalus. Neurol Med Chir (Tokyo). 2007;47(9):419-423.
24. Rockwood K, Black S, Bedard MA; TOPS Study Investigators. Specific symptomatic changes following donepezil treatment of Alzheimer’s disease: a multi-centre, primary care, open-label study. Int J Geriatr Psychiatry. 2007;22(4):312-319.
25. Devos D, Moreau C, Maltête D, et al. Rivastigmine in apathetic but dementia and depression-free patients with Parkinson’s disease: a double-blind, placebo-controlled, randomised clinical trial. J Neurol Neurosurg Psychiatry. 2014;85(6):668-674.
26. Camargos EF, Quintas JL. Apathy syndrome treated successfully with modafinil [published online November 15, 2011]. BMJ Case Rep. doi: 10.1136/bcr.08.2011.4652.
27. Corcoran C, Wong ML, O’Keane V. Bupropion in the management of apathy. J Psychopharmacol. 2004;18(1):133-135.
28. Blundo C, Gerace C. Dopamine agonists can improve pure apathy associated with lesions of the prefrontal-basal ganglia functional system. Neurol Sci. 2015;36(7):1197-1201.
29. Mirapex [package insert]. Ridgefield, CT: Boehringer Ingelheim International GmbH; 2016.
30. Neupro [package insert]. Smyrna, GA: UBC, Inc.; 2012.
31. Requip [package insert]. Research Triangle Park, NC: GlaxoSmithKline; 2017.
32. Thobois S, Lhommée E, Klinger H, et al. Parkinsonian apathy responds to dopaminergic stimulation of D2/D3 receptors with piribedil. Brain. 2013;136(pt 5):1568-1577.
33. Mitchell RA, Herrmann N, Lanctôt KL. The role of dopamine in symptoms and treatment of apathy in Alzheimer’s disease. CNS Neurosci Ther. 2011;17(5):411-427.
34. Brower KJ, Maddahian E, Blow FC, et al. A comparison of self-reported symptoms and DSM-III-R criteria for cocaine withdrawal. Am J Drug Alcohol Abuse. 1988;14(3):347-356.
35. Mulin E, Leone E, Dujardin K, et al. Diagnostic criteria for apathy in clinical practice. Int J Geriatr Psychiatry. 2011;26(2):158-165.
36. Otto A, Zerr I, Lantsch M, et al. Akinetic mutism as a classification criterion for the diagnosis of Creutzfeldt-Jakob disease. J Neurol Neurosurg Psychiatry. 1998;64(4):524-528.
37. Jorge RE, Starkstein SE, Robinson RG. Apathy following stroke. Can J Psychiatry. 2010;55(6):350-354.
38. Hastak SM, Gorawara PS, Mishra NK. Abulia: no will, no way. J Assoc Physicians India. 2005;53:814-818.
39. Nagaratnam N, Nagaratnam K, Ng K, et al. Akinetic mutism following stroke. J Clin Neurosci. 2004;11(1):25-30.
40. Freemon FR. Akinetic mutism and bilateral anterior cerebral artery occlusion. J Neurol Neurosurg Psychiatry. 1971;34(6):693-698.
41. Schwarzbold M, Diaz A, Martins ET, et al. Psychiatric disorders and traumatic brain injury. Neuropsychiatr Dis Treat. 2008;4(4):797-816.
42. Diagnostic and statistical manual of mental disorders, 5th ed. Washington, DC: American Psychiatric Association; 2013.
43. Levy ML, Cummings JL, Fairbanks LA, et al. Apathy is not depression. J Neuropsychiatry Clin Neurosci. 1998;10(3):314-319.
44. Snow V, Lascher S, Mottur-Pilson C. Pharmacologic treatment of acute major depression and dysthymia. American College of Physicians-American Society of Internal Medicine. Ann Intern Med. 2000;132(9):738-742.
45. Schwartz AC, Fisher TJ, Greenspan HN, et al. Pharmacologic and nonpharmacologic approaches to the prevention and management of delirium. Int J Psychiatry Med. 2016;51(2):160-170.
46. Kang H, Zhao F, You L, et al. Pseudo-dementia: a neuropsychological review. Ann Indian Acad Neurol. 2014;17(2):147-154.
47. Fricchione GL, Beach SR, Huffman J, et al. Life-threatening conditions in psychiatry: catatonia, neuroleptic malignant syndrome, and serotonin syndrome. In: Stern TA, Fava M, Wilens TE, eds. Massachusetts General Hospital comprehensive clinical psychiatry. London, United Kingdom: Elsevier; 2016:608-617.
48. Rogers RD. The roles of dopamine and serotonin in decision making: evidence from pharmacological experiments in humans. Neuropsychopharmacology. 2011;36(1):114-132.
49. Stransky M, Schmidt C, Ganslmeier P, et al. Hypoactive delirium after cardiac surgery as an independent risk factor for prolonged mechanical ventilation. J Cardiothorac Vasc Anesth. 2011;25(6):968-974.
50. Wilcox JA, Reid Duffy P. The syndrome of catatonia. Behav Sci (Basel). 2015;5(4):576-588.
51. Robert PH, Mulin E, Malléa P, et al. REVIEW: apathy diagnosis, assessment, and treatment in Alzheimer’s disease. CNS Neurosci Ther. 2010;16(5):263-271.
52. Cipriani G, Lucetti C, Danti S, et al. Apathy and dementia. Nosology, assessment and management. J Nerv Ment Dis. 2014;202(10):718-724.
53. Starkstein SE, Leentjens AF. The nosological position of apathy in clinical practice. J Neurol Neurosurg Psychiatry. 2008;79(10):1088-1092.
54. Berman K, Brodaty H, Withall A, et al. Pharmacologic treatment of apathy in dementia. Am J Geriatr Psychiatry. 2012;20(2):104-122.
55. Theleritis C, Siarkos K, Katirtzoglou E, et al. Pharmacological and nonpharmacological treatment for apathy in Alzheimer disease: a systematic review across modalities. J Geriatr Psychiatry Neurol. 2017;30(1):26-49.
56. APA Work Group on Alzheimer’s Disease and other Dementias; Rabins PV, Blacker D, Rovner BW, et al. American Psychiatric Association practice guideline for the treatment of patients with Alzheimer’s disease and other dementias. Second edition. Am J Psychiatry. 2007;164(suppl 12):5-56.
57. Dolder CR, Davis LN, McKinsey J. Use of psychostimulants in patients with dementia. Ann Pharmacother. 2010;44(10):1624-1632.
Disorders of diminished motivation (DDM)—including apathy, abulia, and akinetic mutism—are characterized by impairment in goal-directed behavior, thought, and emotion.1 These disorders can be observed clinically as a gross underproduction of speech, movement, and emotional response.
DDM are not classified as disorders within DSM-5, and it remains unclear if they are distinct disorders or symptoms that overlap in other conditions. Some sources support distinct diagnoses, while the traditional position is that DDM are variations along a spectrum, with apathy as the mildest form and akinetic mutism as the most severe form (Figure).1-3 DDM can result from various neurologic, medical, psychiatric, socioeconomic, and drug-induced pathologies, and may represent differing severity of the same underlying pathology.1,4 It is postulated that DDM arise from disruptions in the dopaminergic frontal-subcortical-mesolimbic networks.1,4
We present 2 cases of patients who developed distinct phenotypes within DDM. Despite differences in presentation and symptom severity, both patients showed clinical improvement on methylphenidate (not the only treatment option) as assessed by the Neuropsychiatric Inventory (NPI),5 a scale used to measure dementia-related behavioral symptoms that includes an Apathy/Indifference (A/I) subscale.
CASE 1
Apathy secondary to glioblastoma multiforme
Ms. E, age 59, presents with wound drainage 3 weeks after a repeat right craniotomy for recurrent glioblastoma multiforme (GBM) of the temporal lobe. Her medical history is not believed to have contributed to her current presentation.
On hospital day 2, Ms. E undergoes debridement and reclosure at the craniotomy site. Prior to the procedure, the patient was noted to have anhedonia and flat affect. Her family reports that she seems to get little enjoyment from life and “only slept and ate.” Psychiatry is consulted on hospital day 3 for evaluation and management of a perceived depressed mood.
On initial psychiatric evaluation, Ms. E continues to have a constricted affect with delayed psychomotor processing speed. However, she denies dysphoria or anhedonia. Richmond Agitation-Sedation Scale6 score is 0 (alert and calm) and test of sustained attention (‘Vigilant A’) is intact (ie, based on the Confusion Assessment Method for the Intensive Care Unit [CAM-ICU],7 Ms. E does not have delirium). The NPI A/I frequency score is 15, with a severity score of 3, for a total score of 45, indicating moderate behavioral disturbance on the NPI A/I subsection. A diagnosis of neuropsychiatric apathy due to recurrent GBM or craniotomy is made, although substance-induced mood disorder due to concurrent dexamethasone and opiate use is considered. Methylphenidate, 2.5 mg/d, is started, and Ms. E’s blood pressure remains stable with the initial dose.
Methylphenidate is titrated to 5 mg, twice daily, over a 1-week period. Ms. E’s NPI A/I subscale score improves to 3 (mild behavioral problem), with 3 points for frequency and a multiplier of 1 for mild severity, reflecting an improvement in neuropsychiatric apathy, and she is transferred to a long-term care rehabilitation center.
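The NPI arithmetic reported in this case (and in Case 2 below) follows a simple rule: each domain score is the frequency rating multiplied by the severity rating. The following minimal sketch reproduces that arithmetic using the subscale values reported above; the function name is illustrative, not part of the published instrument.

```python
# Minimal sketch of the NPI domain-score arithmetic described in the text:
# domain score = frequency rating x severity rating.
# The numeric values below are the Apathy/Indifference subscale values
# reported for Ms. E; the function name is an illustrative assumption.

def npi_domain_score(frequency: int, severity: int) -> int:
    """NPI domain score = frequency x severity, as described in the cases."""
    return frequency * severity

# Ms. E at baseline: frequency 15, severity 3 -> total 45 (moderate).
baseline = npi_domain_score(15, 3)
# Ms. E after methylphenidate titration: frequency 3, severity 1 -> total 3.
follow_up = npi_domain_score(3, 1)
print(baseline, follow_up)  # 45 3
```

The same multiplication yields Ms. G's scores in Case 2 (frequency 28 x severity 3 = 84 at baseline).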
CASE 2
Akinetic mutism secondary to subarachnoid hemorrhage
Ms. G, age 47, is brought to an outside hospital with syncope and a severe headache radiating to her neck. Upon arrival, she is unconscious and requires intubation. A non-contrast head CT scan shows diffuse subarachnoid hemorrhage, 6 mm right midline shift, and a small left frontal subdural hematoma. CT angiography of her head and neck reveals a 0.7 cm anterior paraclinoid left internal carotid artery aneurysm with ophthalmic involvement. Evidence of underlying left and right carotid fibromuscular dysplasia is also seen. Ms. G is transferred to our facility for neurosurgical intervention.
Neurosurgery proceeds with aneurysm coiling, followed by left craniotomy with subdural evacuation and ventriculostomy placement. Her postoperative course is complicated by prolonged nasogastric hyperalimentation, mild hypernatremia and hyperglycemia, tracheostomy, and recurrent central fever. She also develops persistent vasospasm, which requires balloon angioplasty of the left middle cerebral artery.
The psychiatry team is consulted on postoperative day 29 to assess for delirium. The CAM-ICU is positive for delirium, with nocturnal accentuation of agitation. Ms. G demonstrates paucity of speech and minimal verbal comprehension. She starts oral ziprasidone, 5 mg/d at bedtime. In addition to her original CNS insult, a scopolamine patch (1.5 mg, to decrease respiratory secretions) and IV metronidazole (500 mg every 8 hours, for a skin-site infection) may have been contributing to her delirium.
Ms. G’s delirium quickly resolves; however, on day 32 she continues to demonstrate behavioral and cognitive slowing. The NPI A/I frequency score is 28, with a severity score of 3, for a total score of 84, indicating severe behavioral disturbance on the NPI A/I subsection. Methylphenidate, 2.5 mg/d, is started and the next day is increased to 5 mg twice a day to treat severe akinetic mutism. Ms. G also is switched from ziprasidone to olanzapine, 2.5 mg/d at night.
By day 37, the tracheostomy is decannulated, and Ms. G demonstrates a full level of alertness, awareness, and attention. Her affect is full range and appropriate; however, she demonstrates residual language deficits, including dysnomia. On day 38, Ms. G is discharged with an NPI A/I subscale score of 5, indicating a mild behavioral problem.
What these cases demonstrate about DDM
These 2 cases are part of a larger, emerging conversation about the role of dopamine in DDM. Although not fully elucidated, the pathophysiology of abulia, apathy, and akinetic mutism is thought to be related to multiple neurotransmitters—especially dopamine—involved in the cortico-striatal-pallidal-thalamic network.1,8 This position has been supported by reports of clinical improvement in patients with DDM who are given dopaminergic agonists (Table 1).3,9-32
The clinical improvement seen in both of our patients after initiating methylphenidate is consistent with previous reports.10-13 Methylphenidate was selected because of its favorable adverse effect profile and potentially rapid onset of action in DDM.10-13 In cases where oral medication cannot be administered, such as in patients with akinetic mutism, short-term adjunctive IM olanzapine may be helpful, although it is not a first-line treatment.3,15
Interestingly, both of our patients showed improvement with low doses of methylphenidate. Ms. E showed rapid improvement at 2.5 mg/d, but eventually was increased to 10 mg/d. For Ms. G, who demonstrated severe akinetic mutism, rapid improvement was noted after the initial 2.5 mg/d dose; however, because of reports of efficacy of olanzapine in treating akinetic mutism, it is possible that these medications worked synergistically. The proposed mechanism of action of olanzapine in akinetic mutism is through increased dopamine transmission in the medial prefrontal cortex.3,15 Ms. G’s methylphenidate dose was increased to 5 mg/d, which was still “subtherapeutic,” because most reports have used dosages ranging from 10 to 40 mg/d.10-13 Although there were favorable acute results in both patients, their long-term requirements are unknown because of a lack of follow-up. Our findings are also limited by the fact that both patients were recovering from neurosurgical procedures, which could lead to natural improvement in symptoms over time.
Prevalence of DDM in psychiatric disorders
The successful treatment of DDM with dopaminergic drugs is meaningful because of the coexistence of DDM in various neuropsychiatric conditions. In Alzheimer’s disease (AD), disturbances in the dopaminergic system may explain the high comorbidity of apathy, which ranges from 47% in mild AD to 80% in moderate AD.33 In the dopamine-reduced states of cocaine and amphetamine withdrawal, 67% of patients report apathy and lack of motivation.8,34 Additionally, the prevalence of apathy is reported at 27% in Parkinson’s disease, 43% in mild cognitive impairment, 70% in mixed dementia, 94% in a major depressive episode, and 53% in schizophrenia.35 In schizophrenia with predominantly negative symptoms, in vivo and postmortem studies have found reduced dopamine receptors.8 Meanwhile, the high rate of akinetic mutism in Creutzfeldt-Jakob disease allows for its use as a reliable diagnostic criterion in this disorder.36
However, the prevalence of DDM is best documented as it relates to stroke and traumatic brain injury (TBI). For instance, after experiencing a stroke, 20% to 25% of patients suffer from apathy.37 Many case reports describe abulia and akinetic mutism after cerebral infarction or hemorrhage, although the incidence of these disorders is unknown.2,38-40 Apathy following TBI is common, especially in younger patients who have sustained a severe injury.41 One study evaluated the prevalence of apathy after TBI among 83 consecutive patients in a neuropsychiatric clinic. Of the 83 patients, 10.84% had apathy without depression, and an equal number were depressed without apathy; another 60% of patients exhibited both apathy and depression. Younger patients (mean age, 29 years) were more likely to be apathetic than older patients, who were more likely to be depressed or depressed and apathetic (mean age, 42 and 38 years, respectively).41 Interestingly, DDM often are associated with cerebral lesions in distinct and distant anatomical locations that are not clearly connected to the neural circuits of motivational pathways. This phenomenon may be explained by the concept of diaschisis, which states that injury to one part of an interconnected neural network can affect other, separate parts of that network.2 If this concept is accurate, it may broaden the impact of DDM, especially as it relates to stroke and TBI.
The differential diagnosis of DDM includes depression and hypokinetic delirium (Table 2).1,3,42-50 A potential overlapping but confounding condition is stuporous catatonia, with symptoms that include psychomotor slowing such as immobility, staring, and stupor.47 It is important to differentiate these disorders because the treatment for each differs. As previously discussed, there is a clear role for dopamine receptor agonists in the treatment of DDM.
Although major depressive disorder can be treated with medications that increase dopaminergic transmission, selective serotonin reuptake inhibitors (SSRIs) are more commonly used as first-line agents.44 However, an SSRI would theoretically be contraindicated in DDM: because increased serotonin transmission decreases dopamine release from the midbrain, an SSRI may not only fail to improve DDM but may worsen it.48 Finally, although delirium is treated with atypical or conventional antipsychotics via dopamine type 2 receptor antagonism,45 stuporous catatonia is preferentially treated with gamma-aminobutyric acid-A receptor agonists such as lorazepam.50
What to do when your patient’s presentation suggests DDM
Assessment of DDM should be structured, with input from both the patient and the caregiver, and should incorporate the physician’s perspective. A history should be obtained by applying recent diagnostic criteria for apathy. The 3 core domains of apathy—behavior, cognition, and emotion—need to be evaluated. The revised criteria are based on the premise that a change in motivation can be measured by examining a patient’s responsiveness to internal or external stimuli. Therefore, each of the 3 domains includes 2 symptoms: (1) self-initiated or “internal” behaviors, cognitions, and emotions (initiation symptom), and (2) the patient’s responsiveness to “external” stimuli (responsiveness symptom).51
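The 3-domain, 2-symptom structure described above can be sketched as a small checklist. This is a hypothetical illustration of the criteria's shape, not a validated instrument; the names `affected_domains` and `findings` are assumptions for the example.

```python
# Hypothetical sketch of the revised apathy criteria structure described
# in the text: 3 core domains (behavior, cognition, emotion), each probed
# for a self-initiated "initiation" symptom and an external-stimulus
# "responsiveness" symptom. Not a validated clinical instrument.

DOMAINS = ("behavior", "cognition", "emotion")
SYMPTOMS = ("initiation", "responsiveness")

def affected_domains(findings):
    """Return the domains in which at least 1 of the 2 symptoms is present.

    `findings` maps (domain, symptom) -> bool; missing keys count as absent.
    """
    return [d for d in DOMAINS
            if any(findings.get((d, s), False) for s in SYMPTOMS)]

# Example: diminished self-initiated behavior and diminished cognitive
# responsiveness, with emotion unaffected.
example = {
    ("behavior", "initiation"): True,
    ("cognition", "responsiveness"): True,
    ("emotion", "initiation"): False,
}
print(affected_domains(example))  # ['behavior', 'cognition']
```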
One of the main diagnostic dilemmas is how to separate DDM from depression. The differentiation is difficult because of substantial overlap in key symptoms, such as lack of interest, anergia, psychomotor slowing, and fatigue. Caregivers often mistakenly describe DDM as a depressive state, even though the absence of sadness, despair, crying, and depressed mood distinguishes DDM from depression. Usually, patients with DDM lack the negative thoughts, emotional distress, sadness, vegetative symptoms, and somatic concerns that are frequently observed in mood disorders.51
Several instruments have been developed for assessing neuropsychiatric symptoms. Some were specifically designed to measure apathy, whereas others were designed to provide a broader neuropsychiatric assessment. The NPI is the most widely used multidimensional instrument for assessing neuropsychiatric functioning in patients with neurocognitive disorders (NCDs). It is a valid, reliable instrument that consists of an interview of the patient’s caregiver. It is designed to assess the presence and severity of 10 symptoms, including apathy. The NPI includes both apathy and depression items, which can help clinicians distinguish the 2 conditions. Although beyond the scope of this article, more recent standardized instruments that can assess DDM include the Apathy Inventory, the Dementia Apathy Interview and Rating, and the Structured Clinical Interview for Apathy.52
As previously mentioned, researchers have proposed that DDM are simply a continuum of severity of reduced behavior, and akinetic mutism may be the extreme form. The dilemma is how to formally diagnose states of abulia and akinetic mutism, given the lack of diagnostic criteria and paucity of standardized instruments. Thus, distinguishing between abulia and akinetic mutism (and apathy) is more of a quantitative than qualitative exercise. One could hypothesize that higher scores on a standardized scale to measure apathy (ie, NPI) could imply abulia or akinetic mutism, although to the best of our knowledge, no formal “cut-off scores” exist.53
Treatment of apathy. The duration of pharmacotherapy to treat apathy is unknown and their usage is off-label. Further studies, including double-blind, randomized controlled trials (RCTs), are needed. Nonetheless, the 2 classes of medications that have the most evidence for treating apathy/DDM are psychostimulants and acetylcholinesterase inhibitors (AChEIs).
AChEIs are primarily used for treating cognitive symptoms in NCDs, but recent findings indicate that they have beneficial effects on noncognitive symptoms such as apathy. Of all medications used to treat apathy in NCDs, AChEIs have been used to treat the largest number of patients. Of 26 studies, 24 demonstrated improvement in apathy, with 21 demonstrating statistical significance. These studies ranged in duration from 8 weeks to 1 year, and most were open-label.54
Five studies (3 RCTs and 2 open-label studies) assessed the efficacy of methylphenidate for treating apathy due to AD. All the studies demonstrated at least some benefit in apathy scores after treatment with methylphenidate. These studies ranged from 5 to 12 weeks in duration. Notably, some patients reported adverse effects, including delusions and irritability.54
Although available evidence suggests AChEIs may be the most effective medications for treating apathy in NCDs, methylphenidate has been demonstrated to work faster.55 Thus, in cases where apathy can significantly affect activities of daily living or instrumental activities of daily living, a quicker response may dictate treatment with methylphenidate. It is imperative to note that safety studies and more large-scale double-blind RCTs are needed to further demonstrate the effectiveness and safety of methylphenidate.
Published in 2007, the American Psychiatric Association (APA) guidelines56 state that psychostimulants are a possible treatment option for patients with severe apathy. At the same time, clinicians are reminded that these agents—especially at higher doses—can produce various problematic adverse effects, including tachycardia, hypertension, restlessness, dyskinesia, agitation, sleep disturbances, psychosis, confusion, and decreased appetite. The APA guidelines recommend using low initial doses, with slow and careful titration. For example, methylphenidate should be started at 2.5 to 5 mg once in the morning, with daily doses not to exceed 30 to 40 mg. In our clinical experience, doses >20 mg/d have not been necessary.57
Treatment of akinetic mutism and abulia. In patients with akinetic mutism and possible abulia, for whom oral medication administration is either impossible or contraindicated (ie, due to the potential risk of aspiration pneumonia), atypical antipsychotics, such as IM olanazapine, have produced a therapeutic response in apathetic patients with NCD. However, extensive use of antipsychotics in NCD is not recommended because this class of medications has been associated with serious adverse effects, including an increased risk of death.55
Bottom Line
Apathy, abulia, and akinetic mutism have been categorized as disorders of diminished motivation (DDM). They commonly present after a stroke or traumatic brain injury, and should be differentiated from depression, hypokinetic delirium, and stuporous catatonia. DDM can be successfully treated with dopamine agonists.
Related Resources
- Barnhart WJ, Makela EH, Latocha MJ. SSRI-induced apathy syndrome: a clinical review. J Psychiatr Pract. 2004;10(3):196-199.
- Dell’Osso B, Benatti B, Altamura AC, et al. Prevalence of selective serotonin reuptake inhibitor-related apathy in patients with obsessive compulsive disorder. J Clin Psychopharmacol. 2016;36(6):725-726.
- D’Souza G, Kakoullis A, Hegde N, et al. Recognition and management of abulia in the elderly. Prog Neurol Psychiatry. 2010;14(6):24-28.
Drug Brand Names
Bromocriptine • Parlodel
Bupropion • Wellbutrin XL, Zyban
Carbidopa • Lodosyn
Dexamethasone • DexPak, Ozurde
Donepezil • Aricept
Levodopa/benserazide • Prolopa
Levodopa/carbidopa • Pacopa Rytary Sinemet
Lorazepam • Ativan
Methylphenidate • Concerta, Methylin
Metronidazole • Flagyl, Metrogel
Modafinil • Provigil
Olanzapine • Zyprexa
Pramipexole • Mirapex
Rivastigmine • Exelon
Ropinirole • Requip
Rotigotine • Neurpro
Scopolamine • Transderm Scop
Ziprasidone • Geodon
Disorders of diminished motivation (DDM)—including apathy, abulia, and akinetic mutism—are characterized by impairment in goal-directed behavior, thought, and emotion.1 These disorders can be observed clinically as a gross underproduction of speech, movement, and emotional response.
DDM are not classified as disorders within DSM-5, and it remains unclear if they are distinct disorders or symptoms that overlap in other conditions. Some sources support distinct diagnoses, while the traditional position is that DDM are variations along a spectrum, with apathy as the mildest form and akinetic mutism as the most severe form (Figure).1-3 DDM can result from various neurologic, medical, psychiatric, socioeconomic, and drug-induced pathologies, and may represent differing severity of the same underlying pathology.1,4 It is postulated that DDM arise from disruptions in the dopaminergic frontal-subcortical-mesolimbic networks.1,4
We present 2 cases of patients who developed distinct phenotypes within DDM. Despite differences in presentation and symptom severity, both patients showed clinical improvement on methylphenidate (not the only treatment option) as assessed by the Neuropsychiatric Inventory (NPI),5 a scale used to measure dementia-related behavioral symptoms that includes an Apathy/Indifference (A/I) subscale.
CASE 1
Apathy secondary to glioblastoma multiforme
Ms. E, age 59, presents with wound drainage 3 weeks after a repeat right craniotomy for recurrent glioblastoma multiforme (GBM) of the temporal lobe. Her medical history is not believed to have contributed to her current presentation.
On hospital day 2, Ms. E undergoes debridement and reclosure at the craniotomy site. Prior to the procedure, the patient was noted to have anhedonia and flat affect. Her family reports that she seems to get little enjoyment from life and “only slept and ate.” Psychiatry is consulted on hospital day 3 for evaluation and management of a perceived depressed mood.
On initial psychiatric evaluation, Ms. E continues to have a constricted affect with delayed psychomotor processing speed. However, she denies dysphoria or anhedonia. Richmond Agitation-Sedation Scale6 score is 0 (alert and calm) and test of sustained attention (‘Vigilant A’) is intact (ie, based on the Confusion Assessment Method for the Intensive Care Unit [CAM-ICU],7 Ms. E does not have delirium). The NPI A/I frequency score is 15, with a severity score of 3, for a total score of 45, indicating moderate behavioral disturbance on the NPI A/I subsection. A diagnosis of neuropsychiatric apathy due to recurrent GBM or craniotomy is made, although substance-induced mood disorder due to concurrent dexamethasone and opiate use is considered. Methylphenidate, 2.5 mg/d, is started, and Ms. E’s blood pressure remains stable with the initial dose.
Methylphenidate is titrated to 5 mg, twice daily, over a 1-week period. Ms. E’s NPI A/I subscale score improves to 3 (mild behavioral problem), with 3 points for frequency and a multiplier of 1 for mild severity, reflecting an improvement in neuropsychiatric apathy, and she is transferred to a long-term care rehabilitation center.
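The NPI item scores reported in both cases follow the same arithmetic: the item total is the frequency score multiplied by the severity multiplier. A minimal sketch of that convention (the function name is ours, not part of the NPI):

```python
def npi_item_total(frequency: int, severity: int) -> int:
    """NPI item total as used in these cases: frequency score x severity multiplier."""
    return frequency * severity

# Ms. E at initial evaluation: frequency 15, severity 3 -> total 45 (moderate)
assert npi_item_total(15, 3) == 45
# Ms. E after titration: frequency 3, severity multiplier 1 -> total 3 (mild)
assert npi_item_total(3, 1) == 3
# Ms. G at day 32: frequency 28, severity 3 -> total 84 (severe)
assert npi_item_total(28, 3) == 84
```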
CASE 2
Akinetic mutism secondary to subarachnoid hemorrhage
Ms. G, age 47, is brought to an outside hospital with syncope and a severe headache radiating to her neck. Upon arrival, she is unconscious and requires intubation. A non-contrast head CT scan shows diffuse subarachnoid hemorrhage, 6 mm right midline shift, and a small left frontal subdural hematoma. A CT angiography of her head and neck reveals a 0.7 cm anterior paraclinoid left internal carotid artery aneurysm with ophthalmic involvement. Evidence of underlying left and right carotid fibromuscular dysplasia is also seen. Ms. G is transferred to our facility for neurosurgical intervention.
Neurosurgery proceeds with aneurysm coiling, followed by left craniotomy with subdural evacuation and ventriculostomy placement. Her postoperative course is complicated by prolonged nasogastric hyperalimentation, mild hypernatremia and hyperglycemia, tracheostomy, and recurrent central fever. She also develops persistent vasospasm, which requires balloon angioplasty of the left middle cerebral artery.
The psychiatry team is consulted on postoperative day 29 to assess for delirium. The CAM-ICU is positive for delirium, with nocturnal accentuation of agitation. Ms. G demonstrates paucity of speech and minimal verbal comprehension. She starts oral ziprasidone, 5 mg/d at bedtime. In addition to her original CNS insult, a scopolamine patch, 1.5 mg, used to decrease respiratory secretions, and IV metronidazole, 500 mg every 8 hours, given for a skin-site infection, may have contributed to her delirium.
Ms. G’s delirium quickly resolves; however, on day 32 she continues to demonstrate behavioral and cognitive slowing. The NPI A/I frequency score is 28, with a severity score of 3, for a total score of 84, indicating severe behavioral disturbance on the NPI A/I subsection. Methylphenidate, 2.5 mg/d, is started and the next day is increased to 5 mg twice a day to treat severe akinetic mutism. Ms. G also is switched from ziprasidone to olanzapine, 2.5 mg/d at night.
By day 37, the tracheostomy is decannulated, and Ms. G demonstrates a full level of alertness, awareness, and attention. Her affect is full range and appropriate; however, she demonstrates residual language deficits, including dysnomia. On day 38, Ms. G is discharged with an NPI A/I subscale score of 5, indicating a mild behavioral problem.
What these cases demonstrate about DDM
These 2 cases are part of a larger, emerging conversation about the role of dopamine in DDM. Although not fully elucidated, the pathophysiology of abulia, apathy, and akinetic mutism is thought to be related to multiple neurotransmitters—especially dopamine—involved in the cortico-striatal-pallidal-thalamic network.1,8 This position has been supported by reports of clinical improvement in patients with DDM who are given dopaminergic agonists (Table 1).3,9-32
The clinical improvement seen in both of our patients after initiating methylphenidate is consistent with previous reports.10-13 Methylphenidate was selected because of its favorable adverse effect profile and potentially rapid onset of action in DDM.10-13 In cases where oral medication cannot be administered, such as in patients with akinetic mutism, short-term adjunctive IM olanzapine may be helpful, although it is not a first-line treatment.3,15
Interestingly, both of our patients showed improvement with low doses of methylphenidate. Ms. E showed rapid improvement at 2.5 mg/d, but eventually was increased to 10 mg/d. For Ms. G, who demonstrated severe akinetic mutism, rapid improvement was noted after the initial 2.5 mg/d dose; however, because of reports of efficacy of olanzapine in treating akinetic mutism, it is possible that these medications worked synergistically. The proposed mechanism of action of olanzapine in akinetic mutism is through increased dopamine transmission in the medial prefrontal cortex.3,15 Ms. G’s methylphenidate dose was increased to 5 mg/d, which was still “subtherapeutic,” because most reports have used dosages ranging from 10 to 40 mg/d.10-13 Although there were favorable acute results in both patients, their long-term requirements are unknown because of a lack of follow-up. Our findings are also limited by the fact that both patients were recovering from neurosurgical procedures, which could lead to natural improvement in symptoms over time.
Prevalence of DDM in psychiatric disorders
The successful treatment of DDM with dopaminergic drugs is meaningful because of the coexistence of DDM in various neuropsychiatric conditions. In Alzheimer’s disease (AD), disturbances in the dopaminergic system may explain the high comorbidity of apathy, which ranges from 47% in mild AD to 80% in moderate AD.33 In the dopamine-reduced states of cocaine and amphetamine withdrawal, 67% of patients report apathy and lack of motivation.8,34 Additionally, the prevalence of apathy is reported at 27% in Parkinson’s disease, 43% in mild cognitive impairment, 70% in mixed dementia, 94% in a major depressive episode, and 53% in schizophrenia.35 In schizophrenia with predominantly negative symptoms, in vivo and postmortem studies have found reduced dopamine receptors.8 Meanwhile, the high rate of akinetic mutism in Creutzfeldt-Jakob disease allows for its use as a reliable diagnostic criterion in this disorder.36
However, the prevalence of DDM is best documented as it relates to stroke and traumatic brain injury (TBI). For instance, after experiencing a stroke, 20% to 25% of patients suffer from apathy.37 Many case reports describe abulia and akinetic mutism after cerebral infarction or hemorrhage, although the incidence of these disorders is unknown.2,38-40 Apathy following TBI is common, especially in younger patients who have sustained a severe injury.41 One study evaluated the prevalence of apathy after TBI among 83 consecutive patients in a neuropsychiatric clinic. Of the 83 patients, 10.84% had apathy without depression, and an equal number were depressed without apathy; another 60% of patients exhibited both apathy and depression. Younger patients (mean age, 29 years) were more likely to be apathetic than older patients, who were more likely to be depressed or depressed and apathetic (mean age, 42 and 38 years, respectively).41 Interestingly, DDM often are associated with cerebral lesions in distinct and distant anatomical locations that are not clearly connected to the neural circuits of motivational pathways. This phenomenon may be explained by the concept of diaschisis, which states that injury to one part of an interconnected neural network can affect other, separate parts of that network.2 If this concept is accurate, it may broaden the impact of DDM, especially as it relates to stroke and TBI.
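The unusually precise 10.84% in the TBI study corresponds to a whole-patient count: in a sample of 83, it works out to 9 patients with apathy alone (and, per the study, 9 with depression alone). A quick check, assuming those counts:

```python
# Assumed counts recovered from the reported percentages (9/83 = 10.84%).
n_total = 83
n_apathy_only = 9       # apathy without depression
n_depressed_only = 9    # "an equal number" depressed without apathy

pct_apathy_only = round(100 * n_apathy_only / n_total, 2)
assert pct_apathy_only == 10.84  # matches the reported figure
```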
The differential diagnosis of DDM includes depression and hypokinetic delirium (Table 2).1,3,42-50 A potential overlapping but confounding condition is stuporous catatonia, with symptoms that include psychomotor slowing such as immobility, staring, and stupor.47 It is important to differentiate these disorders because the treatment for each differs. As previously discussed, there is a clear role for dopamine receptor agonists in the treatment of DDM.
Although major depressive disorder can be treated with medications that increase dopaminergic transmission, selective serotonin reuptake inhibitors (SSRIs) are more commonly used as first-line agents.44 However, an SSRI would theoretically be contraindicated in DDM, because increased serotonin transmission decreases dopamine release from the midbrain, and therefore an SSRI may not only result in a lack of improvement but may worsen DDM.48 Finally, although delirium is treated with atypical or conventional antipsychotics vis-a-vis dopamine type 2 receptor antagonism,45 stuporous catatonia is preferentially treated with gamma-aminobutyric acid-A receptor agonists such as lorazepam.50
What to do when your patient’s presentation suggests DDM
Assessment of DDM should be structured, with input from the patient and the caregiver, and should incorporate the physician’s perspective. A history should be obtained applying recent criteria of apathy. The 3 core domains of apathy—behavior, cognition, and emotion—need to be evaluated. The revised criteria are based on the premise that change in motivation can be measured by examining a patient’s responsiveness to internal or external stimuli. Therefore, each of the 3 domains includes 2 symptoms: (1) self-initiated or “internal” behaviors, cognitions, and emotions (initiation symptom), and (2) the patient’s responsiveness to “external” stimuli (responsiveness symptom).51
One of the main diagnostic dilemmas is how to separate DDM from depression. The differentiation is difficult because of substantial overlap in the manifestation of key symptoms, such as a lack of interest, anergia, psychomotor slowing, and fatigue. Caregivers often mistakenly describe DDM as a depressive state, even though a lack of sadness, desperation, crying, and a depressive mood distinguish DDM from depression. Usually, DDM patients lack negative thoughts, emotional distress, sadness, vegetative symptoms, and somatic concerns, which are frequently observed in mood disorders.51
Several instruments have been developed for assessing neuropsychiatric symptoms. Some were specifically designed to measure apathy, whereas others were designed to provide a broader neuropsychiatric assessment. The NPI is the most widely used multidimensional instrument for assessing neuropsychiatric functioning in patients with neurocognitive disorders (NCDs). It is a valid, reliable instrument that consists of an interview of the patient’s caregiver. It is designed to assess the presence and severity of 10 symptoms, including apathy. The NPI includes both apathy and depression items, which can help clinicians distinguish the 2 conditions. Although beyond the scope of this article, more recent standardized instruments that can assess DDM include the Apathy Inventory, the Dementia Apathy Interview and Rating, and the Structured Clinical Interview for Apathy.52
As previously mentioned, researchers have proposed that DDM are simply a continuum of severity of reduced behavior, and akinetic mutism may be the extreme form. The dilemma is how to formally diagnose states of abulia and akinetic mutism, given the lack of diagnostic criteria and paucity of standardized instruments. Thus, distinguishing between abulia and akinetic mutism (and apathy) is more of a quantitative than qualitative exercise. One could hypothesize that higher scores on a standardized scale to measure apathy (ie, NPI) could imply abulia or akinetic mutism, although to the best of our knowledge, no formal “cut-off scores” exist.53
Treatment of apathy. The optimal duration of pharmacotherapy for apathy is unknown, and these medications are used off-label. Further studies, including double-blind, randomized controlled trials (RCTs), are needed. Nonetheless, the 2 classes of medications that have the most evidence for treating apathy/DDM are psychostimulants and acetylcholinesterase inhibitors (AChEIs).
AChEIs are primarily used for treating cognitive symptoms in NCDs, but recent findings indicate that they have beneficial effects on noncognitive symptoms such as apathy. Of all medications used to treat apathy in NCDs, AChEIs have been used to treat the largest number of patients. Of 26 studies, 24 demonstrated improvement in apathy, with 21 demonstrating statistical significance. These studies ranged in duration from 8 weeks to 1 year, and most were open-label.54
Five studies (3 RCTs and 2 open-label studies) assessed the efficacy of methylphenidate for treating apathy due to AD. All the studies demonstrated at least some benefit in apathy scores after treatment with methylphenidate. These studies ranged from 5 to 12 weeks in duration. Notably, some patients reported adverse effects, including delusions and irritability.54
Although available evidence suggests AChEIs may be the most effective medications for treating apathy in NCDs, methylphenidate has been demonstrated to work faster.55 Thus, in cases where apathy can significantly affect activities of daily living or instrumental activities of daily living, a quicker response may dictate treatment with methylphenidate. It is imperative to note that safety studies and more large-scale double-blind RCTs are needed to further demonstrate the effectiveness and safety of methylphenidate.
Published in 2007, the American Psychiatric Association (APA) guidelines56 state that psychostimulants are a possible treatment option for patients with severe apathy. At the same time, clinicians are reminded that these agents—especially at higher doses—can produce various problematic adverse effects, including tachycardia, hypertension, restlessness, dyskinesia, agitation, sleep disturbances, psychosis, confusion, and decreased appetite. The APA guidelines recommend using low initial doses, with slow and careful titration. For example, methylphenidate should be started at 2.5 to 5 mg once in the morning, with daily doses not to exceed 30 to 40 mg. In our clinical experience, doses >20 mg/d have not been necessary.57
Treatment of akinetic mutism and abulia. In patients with akinetic mutism and possible abulia, for whom oral medication administration is either impossible or contraindicated (ie, due to the potential risk of aspiration pneumonia), atypical antipsychotics, such as IM olanzapine, have produced a therapeutic response in apathetic patients with NCD. However, extensive use of antipsychotics in NCD is not recommended because this class of medications has been associated with serious adverse effects, including an increased risk of death.55
Bottom Line
Apathy, abulia, and akinetic mutism have been categorized as disorders of diminished motivation (DDM). They commonly present after a stroke or traumatic brain injury, and should be differentiated from depression, hypokinetic delirium, and stuporous catatonia. DDM can be successfully treated with dopamine agonists.
Related Resources
- Barnhart WJ, Makela EH, Latocha MJ. SSRI-induced apathy syndrome: a clinical review. J Psychiatr Pract. 2004;10(3):196-199.
- Dell’Osso B, Benatti B, Altamura AC, et al. Prevalence of selective serotonin reuptake inhibitor-related apathy in patients with obsessive compulsive disorder. J Clin Psychopharmacol. 2016;36(6):725-726.
- D’Souza G, Kakoullis A, Hegde N, et al. Recognition and management of abulia in the elderly. Prog Neurol Psychiatry. 2010;14(6):24-28.
Drug Brand Names
Bromocriptine • Parlodel
Bupropion • Wellbutrin XL, Zyban
Carbidopa • Lodosyn
Dexamethasone • DexPak, Ozurdex
Donepezil • Aricept
Levodopa/benserazide • Prolopa
Levodopa/carbidopa • Parcopa, Rytary, Sinemet
Lorazepam • Ativan
Methylphenidate • Concerta, Methylin
Metronidazole • Flagyl, Metrogel
Modafinil • Provigil
Olanzapine • Zyprexa
Pramipexole • Mirapex
Rivastigmine • Exelon
Ropinirole • Requip
Rotigotine • Neupro
Scopolamine • Transderm Scop
Ziprasidone • Geodon
1. Marin RS, Wilkosz PA. Disorders of diminished motivation. J Head Trauma Rehabil. 2005;20(4):377-388.
2. Ghoshal S, Gokhale S, Rebovich G, et al. The neurology of decreased activity: abulia. Rev Neurol Dis. 2011;8(3-4):e55-e67.
3. Spiegel DR, Chatterjee A. A case of abulia, status/post right middle cerebral artery territory infarct, treated successfully with olanzapine. Clin Neuropharmacol. 2014;37(6):186-189.
4. Marin RS. Differential diagnosis and classification of apathy. Am J Psychiatry. 1990;147(1):22-30.
5. Cummings JL, Mega M, Gray K, et al. The Neuropsychiatric Inventory: comprehensive assessment of psychopathology in dementia. Neurology. 1994;44(12):2308-2314.
6. Sessler CN, Gosnell MS, Grap MJ, et al. The Richmond Agitation-Sedation Scale: validity and reliability in adult intensive care unit patients. Am J Respir Crit Care Med. 2002;166(10):1338-1344.
7. Ely EW, Margolin R, Francis J, et al. Evaluation of delirium in critically ill patients: validation of the Confusion Assessment Method for the intensive care unit (CAM-ICU). Crit Care Med. 2001;29(7):1370-1379.
8. Al-Adawi S, Dawe GS, Al-Hussaini AA. Aboulia: neurobehavioural dysfunction of dopaminergic system? Med Hypotheses. 2000;54(4):523-530.
9. Volkow ND, Fowler JS, Wang G, et al. Mechanism of action of methylphenidate: insights from PET imaging studies. J Atten Disord. 2002;6(suppl 1):S31-S43.
10. Chatterjee A, Fahn S. Methylphenidate treats apathy in Parkinson’s disease. J Neuropsychiatry Clin Neurosci. 2002;14(4):461-462.
11. Keenan S, Mavaddat N, Iddon J, et al. Effects of methylphenidate on cognition and apathy in normal pressure hydrocephalus: a case study and review. Br J Neurosurg. 2005;19(1):46-50.
12. Padala PR, Petty F, Bhatia SC. Methylphenidate may treat apathy independent of depression. Ann Pharmacother. 2005;39(11):1947-1949.
13. Padala PR, Burke WJ, Bhatia SC, et al. Treatment of apathy with methylphenidate. J Neuropsychiatry Clin Neurosci. 2007;19(1):81-83.
14. Li XM, Perry KW, Wong DT, et al. Olanzapine increases in vivo dopamine and norepinephrine release in rat prefrontal cortex, nucleus accumbens and striatum. Psychopharmacology (Berl). 1998;136(2):153-161.
15. Spiegel DR, Casella DP, Callender DM, et al. Treatment of akinetic mutism with intramuscular olanzapine: a case series. J Neuropsychiatry Clin Neurosci. 2008;20(1):93-95.
16. Citrome L. Activating and sedating adverse effects of second-generation antipsychotics in the treatment of schizophrenia and major depressive disorder: absolute risk increase and number needed to harm. J Clin Psychopharmacol. 2017;37(2):138-147.
17. Bakheit AM, Fletcher K, Brennan A. Successful treatment of severe abulia with co-beneldopa. NeuroRehabilitation. 2011;29(4):347-351.
18. Debette S, Kozlowski O, Steinling M, et al. Levodopa and bromocriptine in hypoxic brain injury. J Neurol. 2002;249(12):1678-1682.
19. Combarros O, Infante J, Berciano J. Akinetic mutism from frontal lobe damage responding to levodopa. J Neurol. 2000;247(7):568-569.
20. Echiverri HC, Tatum WO, Merens TA, et al. Akinetic mutism: pharmacologic probe of the dopaminergic mesencephalofrontal activating system. Pediatr Neurol. 1988;4(4):228-230.
21. Psarros T, Zouros A, Coimbra C. Bromocriptine-responsive akinetic mutism following endoscopy for ventricular neurocysticercosis. Case report and review of the literature. J Neurosurg. 2003;99(2):397-401.
22. Naik VD. Abulia following an episode of cardiac arrest [published online July 1, 2015]. BMJ Case Rep. doi: 10.1136/bcr-2015-209357.
23. Kim MS, Rhee JJ, Lee SJ, et al. Akinetic mutism responsive to bromocriptine following subdural hematoma evacuation in a patient with hydrocephalus. Neurol Med Chir (Tokyo). 2007;47(9):419-423.
24. Rockwood K, Black S, Bedard MA; TOPS Study Investigators. Specific symptomatic changes following donepezil treatment of Alzheimer’s disease: a multi-centre, primary care, open-label study. Int J Geriatr Psychiatry. 2007;22(4):312-319.
25. Devos D, Moreau C, Maltête D, et al. Rivastigmine in apathetic but dementia and depression-free patients with Parkinson’s disease: a double-blind, placebo-controlled, randomised clinical trial. J Neurol Neurosurg Psychiatry. 2014;85(6):668-674.
26. Camargos EF, Quintas JL. Apathy syndrome treated successfully with modafinil [published online November 15, 2011]. BMJ Case Rep. doi: 10.1136/bcr.08.2011.4652.
27. Corcoran C, Wong ML, O’Keane V. Bupropion in the management of apathy. J Psychopharmacol. 2004;18(1):133-135.
28. Blundo C, Gerace C. Dopamine agonists can improve pure apathy associated with lesions of the prefrontal-basal ganglia functional system. Neurol Sci. 2015;36(7):1197-1201.
29. Mirapex [package insert]. Ridgefield, CT: Boehringer Ingelheim International GmbH; 2016.
30. Neupro [package insert]. Smyrna, GA: UBC, Inc.; 2012.
31. Requip [package insert]. Research Triangle Park, NC: GlaxoSmithKline; 2017.
32. Thobois S, Lhommée E, Klinger H, et al. Parkinsonian apathy responds to dopaminergic stimulation of D2/D3 receptors with piribedil. Brain. 2013;136(pt 5):1568-1577.
33. Mitchell RA, Herrmann N, Lanctôt KL. The role of dopamine in symptoms and treatment of apathy in Alzheimer’s disease. CNS Neurosci Ther. 2011;17(5):411-427.
34. Brower KJ, Maddahian E, Blow FC, et al. A comparison of self-reported symptoms and DSM-III-R criteria for cocaine withdrawal. Am J Drug Alcohol Abuse. 1988;14(3):347-356.
35. Mulin E, Leone E, Dujardin K, et al. Diagnostic criteria for apathy in clinical practice. Int J Geriatr Psychiatry. 2011;26(2):158-165.
36. Otto A, Zerr I, Lantsch M, et al. Akinetic mutism as a classification criterion for the diagnosis of Creutzfeldt-Jakob disease. J Neurol Neurosurg Psychiatry. 1998;64(4):524-528.
37. Jorge RE, Starkstein SE, Robinson RG. Apathy following stroke. Can J Psychiatry. 2010;55(6):350-354.
38. Hastak SM, Gorawara PS, Mishra NK. Abulia: no will, no way. J Assoc Physicians India. 2005;53:814-818.
39. Nagaratnam N, Nagaratnam K, Ng K, et al. Akinetic mutism following stroke. J Clin Neurosci. 2004;11(1):25-30.
40. Freemon FR. Akinetic mutism and bilateral anterior cerebral artery occlusion. J Neurol Neurosurg Psychiatry. 1971;34(6):693-698.
41. Schwarzbold M, Diaz A, Martins ET, et al. Psychiatric disorders and traumatic brain injury. Neuropsychiatr Dis Treat. 2008;4(4):797-816.
42. Diagnostic and statistical manual of mental disorders, 5th ed. Washington, DC: American Psychiatric Association; 2013.
43. Levy ML, Cummings JL, Fairbanks LA, et al. Apathy is not depression. J Neuropsychiatry Clin Neurosci. 1998;10(3):314-319.
44. Snow V, Lascher S, Mottur-Pilson C. Pharmacologic treatment of acute major depression and dysthymia. American College of Physicians-American Society of Internal Medicine. Ann Intern Med. 2000;132(9):738-742.
45. Schwartz AC, Fisher TJ, Greenspan HN, et al. Pharmacologic and nonpharmacologic approaches to the prevention and management of delirium. Int J Psychiatry Med. 2016;51(2):160-170.
46. Kang H, Zhao F, You L, et al. Pseudo-dementia: a neuropsychological review. Ann Indian Acad Neurol. 2014;17(2):147-154.
47. Fricchione GL, Beach SR, Huffman J, et al. Life-threatening conditions in psychiatry: catatonia, neuroleptic malignant syndrome, and serotonin syndrome. In: Stern TA, Fava M, Wilens TE, eds. Massachusetts General Hospital comprehensive clinical psychiatry. London, United Kingdom: Elsevier; 2016:608-617.
48. Rogers RD. The roles of dopamine and serotonin in decision making: evidence from pharmacological experiments in humans. Neuropsychopharmacology. 2011;36(1):114-132.
49. Stransky M, Schmidt C, Ganslmeier P, et al. Hypoactive delirium after cardiac surgery as an independent risk factor for prolonged mechanical ventilation. J Cardiothorac Vasc Anesth. 2011;25(6):968-974.
50. Wilcox JA, Reid Duffy P. The syndrome of catatonia. Behav Sci (Basel). 2015;5(4):576-588.
51. Robert PH, Mulin E, Malléa P, et al. REVIEW: apathy diagnosis, assessment, and treatment in Alzheimer’s disease. CNS Neurosci Ther. 2010;16(5):263-271.
52. Cipriani G, Lucetti C, Danti S, et al. Apathy and dementia. Nosology, assessment and management. J Nerv Ment Dis. 2014;202(10):718-724.
53. Starkstein SE, Leentjens AF. The nosological position of apathy in clinical practice. J Neurol Neurosurg Psychiatry. 2008;79(10):1088-1092.
54. Berman K, Brodaty H, Withall A, et al. Pharmacologic treatment of apathy in dementia. Am J Geriatr Psychiatry. 2012;20(2):104-122.
55. Theleritis C, Siarkos K, Katirtzoglou E, et al. Pharmacological and nonpharmacological treatment for apathy in Alzheimer disease: a systematic review across modalities. J Geriatr Psychiatry Neurol. 2017;30(1):26-49.
56. APA Work Group on Alzheimer’s Disease and other Dementias; Rabins PV, Blacker D, Rovner BW, et al. American Psychiatric Association practice guideline for the treatment of patients with Alzheimer’s disease and other dementias. Second edition. Am J Psychiatry. 2007;164(suppl 12):5-56.
57. Dolder CR, Davis LN, McKinsey J. Use of psychostimulants in patients with dementia. Ann Pharmacother. 2010;44(10):1624-1632.
1. Marin RS, Wilkosz PA. Disorders of diminished motivation. J Head Trauma Rehabil. 2005;20(4):377-388.
2. Ghoshal S, Gokhale S, Rebovich G, et al. The neurology of decreased activity: abulia. Rev Neurol Dis. 2011;8(3-4):e55-e67.
3. Spiegel DR, Chatterjee A. A case of abulia, status/post right middle cerebral artery territory infarct, treated successfully with olanzapine. Clin Neuropharmacol. 2014;37(6):186-189.
4. Marin RS. Differential diagnosis and classification of apathy. Am J Psychiatry. 1990;147(1):22-30.
5. Cummings JL, Mega M, Gray K, et al. The Neuropsychiatric Inventory: comprehensive assessment of psychopathology in dementia. Neurology. 1994;44(12):2308-2314.
6. Sessler CN, Gosnell MS, Grap MJ, et al. The Richmond Agitation-Sedation Scale: validity and reliability in adult intensive care unit patients. Am J Respir Crit Care Med. 2002;166(10):1338-1344.
7. Ely EW, Margolin R, Francis J, et al. Evaluation of delirium in critically ill patients: validation of the Confusion Assessment Method for the intensive care unit (CAM-ICU). Crit Care Med. 2001;29(7):1370-1379.
8. Al-Adawi S, Dawe GS, Al-Hussaini AA. Aboulia: neurobehavioural dysfunction of dopaminergic system? Med Hypotheses. 2000;54(4):523-530.
9. Volkow ND, Fowler JS, Wang G, et al. Mechanism of action of methylphenidate: insights from PET imaging studies. J Atten Disord. 2002;6(suppl 1):S31-S43.
10. Chatterjee A, Fahn S. Methylphenidate treats apathy in Parkinson’s disease. J Neuropsychiatry Clin Neurosci. 2002;14(4):461-462.
11. Keenan S, Mavaddat N, Iddon J, et al. Effects of methylphenidate on cognition and apathy in normal pressure hydrocephalus: a case study and review. Br J Neurosurg. 2005;19(1):46-50.
12. Padala PR, Petty F, Bhatia SC. Methylphenidate may treat apathy independent of depression. Ann Pharmacother. 2005;39(11):1947-1949.
13. Padala PR, Burke WJ, Bhatia SC, et al. Treatment of apathy with methylphenidate. J Neuropsychiatry Clin Neurosci. 2007;19(1):81-83.
14. Li XM, Perry KW, Wong DT, et al. Olanzapine increases in vivo dopamine and norepinephrine release in rat prefrontal cortex, nucleus accumbens and striatum. Psychopharmacology (Berl). 1998;136(2):153-161.
15. Spiegel DR, Casella DP, Callender DM, et al. Treatment of akinetic mutism with intramuscular olanzapine: a case series. J Neuropsychiatry Clin Neurosci. 2008;20(1):93-95.
16. Citrome L. Activating and sedating adverse effects of second-generation antipsychotics in the treatment of schizophrenia and major depressive disorder: absolute risk increase and number needed to harm. J Clin Psychopharmacol. 2017;37(2):138-147.
17. Bakheit AM, Fletcher K, Brennan A. Successful treatment of severe abulia with co-beneldopa. NeuroRehabilitation. 2011;29(4):347-351.
18. Debette S, Kozlowski O, Steinling M, et al. Levodopa and bromocriptine in hypoxic brain injury. J Neurol. 2002;249(12):1678-1682.
19. Combarros O, Infante J, Berciano J. Akinetic mutism from frontal lobe damage responding to levodopa. J Neurol. 2000;247(7):568-569.
20. Echiverri HC, Tatum WO, Merens TA, et al. Akinetic mutism: pharmacologic probe of the dopaminergic mesencephalofrontal activating system. Pediatr Neurol. 1988;4(4):228-230.
21. Psarros T, Zouros A, Coimbra C. Bromocriptine-responsive akinetic mutism following endoscopy for ventricular neurocysticercosis. Case report and review of the literature. J Neurosurg. 2003;99(2):397-401.
22. Naik VD. Abulia following an episode of cardiac arrest [published online July 1, 2015]. BMJ Case Rep. doi: 10.1136/bcr-2015-209357.
23. Kim MS, Rhee JJ, Lee SJ, et al. Akinetic mutism responsive to bromocriptine following subdural hematoma evacuation in a patient with hydrocephalus. Neurol Med Chir (Tokyo). 2007;47(9):419-423.
24. Rockwood K, Black S, Bedard MA; TOPS Study Investigators. Specific symptomatic changes following donepezil treatment of Alzheimer’s disease: a multi-centre, primary care, open-label study. Int J Geriatr Psychiatry. 2007;22(4):312-319.
25. Devos D, Moreau C, Maltête D, et al. Rivastigmine in apathetic but dementia and depression-free patients with Parkinson’s disease: a double-blind, placebo-controlled, randomised clinical trial. J Neurol Neurosurg Psychiatry. 2014;85(6):668-674.
26. Camargos EF, Quintas JL. Apathy syndrome treated successfully with modafinil [published online November 15, 2011]. BMJ Case Rep. doi: 10.1136/bcr.08.2011.4652.
27. Corcoran C, Wong ML, O’Keane V. Bupropion in the management of apathy. J Psychopharmacol. 2004;18(1):133-135.
28. Blundo C, Gerace C. Dopamine agonists can improve pure apathy associated with lesions of the prefrontal-basal ganglia functional system. Neurol Sci. 2015;36(7):1197-1201.
29. Mirapex [package insert]. Ridgefield, CT: Boehringer Ingelheim International GmbH; 2016.
30. Neupro [package insert]. Smyrna, GA: UBC, Inc.; 2012.
31. Requip [package insert]. Research Triangle Park, NC: GlaxoSmithKline; 2017.
32. Thobois S, Lhommée E, Klinger H, et al. Parkinsonian apathy responds to dopaminergic stimulation of D2/D3 receptors with piribedil. Brain. 2013;136(pt 5):1568-1577.
33. Mitchell RA, Herrmann N, Lanctôt KL. The role of dopamine in symptoms and treatment of apathy in Alzheimer’s disease. CNS Neurosci Ther. 2011;17(5):411-427.
34. Brower KJ, Maddahian E, Blow FC, et al. A comparison of self-reported symptoms and DSM-III-R criteria for cocaine withdrawal. Am J Drug Alcohol Abuse. 1988;14(3):347-356.
35. Mulin E, Leone E, Dujardin K, et al. Diagnostic criteria for apathy in clinical practice. Int J Geriatr Psychiatry. 2011;26(2):158-165.
36. Otto A, Zerr I, Lantsch M, et al. Akinetic mutism as a classification criterion for the diagnosis of Creutzfeldt-Jakob disease. J Neurol Neurosurg Psychiatry. 1998;64(4):524-528.
37. Jorge RE, Starkstein SE, Robinson RG. Apathy following stroke. Can J Psychiatry. 2010;55(6):350-354.
38. Hastak SM, Gorawara PS, Mishra NK. Abulia: no will, no way. J Assoc Physicians India. 2005;53:814-818.
39. Nagaratnam N, Nagaratnam K, Ng K, et al. Akinetic mutism following stroke. J Clin Neurosci. 2004;11(1):25-30.
40. Freemon FR. Akinetic mutism and bilateral anterior cerebral artery occlusion. J Neurol Neurosurg Psychiatry. 1971;34(6):693-698.
41. Schwarzbold M, Diaz A, Martins ET, et al. Psychiatric disorders and traumatic brain injury. Neuropsychiatr Dis Treat. 2008;4(4):797-816.
42. Diagnostic and statistical manual of mental disorders, 5th ed. Washington, DC: American Psychiatric Association; 2013.
43. Levy ML, Cummings JL, Fairbanks LA, et al. Apathy is not depression. J Neuropsychiatry Clin Neurosci. 1998;10(3):314-319.
44. Snow V, Lascher S, Mottur-Pilson C. Pharmacologic treatment of acute major depression and dysthymia. American College of Physicians-American Society of Internal Medicine. Ann Intern Med. 2000;132(9):738-742.
45. Schwartz AC, Fisher TJ, Greenspan HN, et al. Pharmacologic and nonpharmacologic approaches to the prevention and management of delirium. Int J Psychiatry Med. 2016;51(2):160-170.
46. Kang H, Zhao F, You L, et al. Pseudo-dementia: a neuropsychological review. Ann Indian Acad Neurol. 2014;17(2):147-154.
47. Fricchione GL, Beach SR, Huffman J, et al. Life-threatening conditions in psychiatry: catatonia, neuroleptic malignant syndrome, and serotonin syndrome. In: Stern TA, Fava M, Wilens TE, eds. Massachusetts General Hospital comprehensive clinical psychiatry. London, United Kingdom: Elsevier; 2016:608-617.
48. Rogers RD. The roles of dopamine and serotonin in decision making: evidence from pharmacological experiments in humans. Neuropsychopharmacology. 2011;36(1):114-132.
49. Stransky M, Schmidt C, Ganslmeier P, et al. Hypoactive delirium after cardiac surgery as an independent risk factor for prolonged mechanical ventilation. J Cardiothorac Vasc Anesth. 2011;25(6):968-974.
50. Wilcox JA, Reid Duffy P. The syndrome of catatonia. Behav Sci (Basel). 2015;5(4):576-588.
51. Robert PH, Mulin E, Malléa P, et al. REVIEW: apathy diagnosis, assessment, and treatment in Alzheimer’s disease. CNS Neurosci Ther. 2010;16(5):263-271.
52. Cipriani G, Lucetti C, Danti S, et al. Apathy and dementia. Nosology, assessment and management. J Nerv Ment Dis. 2014;202(10):718-724.
53. Starkstein SE, Leentjens AF. The nosological position of apathy in clinical practice. J Neurol Neurosurg Psychiatry. 2008;79(10):1088-1092.
54. Berman K, Brodaty H, Withall A, et al. Pharmacologic treatment of apathy in dementia. Am J Geriatr Psychiatry. 2012;20(2):104-122.
55. Theleritis C, Siarkos K, Katirtzoglou E, et al. Pharmacological and nonpharmacological treatment for apathy in Alzheimer disease: a systematic review across modalities. J Geriatr Psychiatry Neurol. 2017;30(1):26-49.
56. APA Work Group on Alzheimer’s Disease and other Dementias; Rabins PV, Blacker D, Rovner BW, et al. American Psychiatric Association practice guideline for the treatment of patients with Alzheimer’s disease and other dementias. Second edition. Am J Psychiatry. 2007;164(suppl 12):5-56.
57. Dolder CR, Davis LN, McKinsey J. Use of psychostimulants in patients with dementia. Ann Pharmacother. 2010;44(10):1624-1632.