In reply: Cognitive bias and diagnostic error

In Reply: We thank Dr. Field for his insights and personal observations related to diagnosis and biases that contribute to diagnostic errors.

Dr. Field’s comment about the importance of revisiting one’s initial working diagnosis is consistent with our proposed diagnostic time-out. A diagnostic time-out can incorporate a short checklist and help debias clinicians when findings do not fit the case presentation, such as a lack of response to diuretic therapy. Slowing down rather than rushing to judgment is another important component.1 Of note, the residents in our case did revisit their initial working diagnosis, as suggested by Dr. Field. Questions from learners have great potential to serve as debiasing instruments and should always be encouraged. Clinicians who do not work with students can do the same by speaking with nurses or other members of the healthcare team, who often offer observations that busy physicians miss.

Our case highlights the problem that we lack objective criteria for diagnosing symptomatic heart failure. While B-type natriuretic peptide (BNP) has a strong negative predictive value, serial BNP measurements have not been established as helpful in the management of heart failure.2 And although certain findings on chest radiography carry strong positive and negative likelihood ratios, the role of serial chest radiographs is less clear.3 Thus, heart failure remains a clinical diagnosis in current practice.

As Dr. Field points out, the accuracy and performance characteristics of diagnostic tests and measurements, such as the respiratory rate, need to be considered alongside debiasing strategies to achieve higher diagnostic accuracy. Multiple factors can lead to poorly performing or misinterpreted diagnostic tests, and even routinely recorded vital signs such as the respiratory rate have been shown to be prone to error.4

Finally, we wholeheartedly agree with Dr. Field’s comment on unnecessary testing. High-value care is appropriate care. Using Bayesian reasoning to guide testing, monitoring the treatment course appropriately, and eliminating waste are highly likely to improve both value and diagnostic accuracy. Automated, ritual ordering of daily tests can indicate that thinking has been shut off, leaving clinicians susceptible to premature closure of the diagnostic process and to “incidentalomas” that distract them from the right diagnosis, all while producing low-value care in the form of wasteful spending, patient dissatisfaction, and hospital-acquired anemia.5 We believe that deciding each day what the next day’s tests will be can be another powerful debiasing habit, one with benefits beyond diagnosis.

References
  1. Schiff GD. Minimizing diagnostic error: the importance of follow-up and feedback. Am J Med 2008; 121(suppl):S38–S42.
  2. Yancy CW, Jessup M, Bozkurt B, et al. 2013 ACCF/AHA guideline for the management of heart failure. Circulation 2013; 128:e240–e327.
  3. Wang CS, FitzGerald JM, Schulzer M, Mak E, Ayas NT. Does this dyspneic patient in the emergency department have congestive heart failure? JAMA 2005; 294:1944–1956.
  4. Philip KE, Pack E, Cambiano V, Rollmann H, Weil S, O’Beirne J. The accuracy of respiratory rate assessment by doctors in a London teaching hospital: a cross-sectional study. J Clin Monit Comput 2015; 29:455–460.
  5. Koch CG, Li L, Sun Z, et al. Hospital-acquired anemia: prevalence, outcomes, and healthcare implications. J Hosp Med 2013; 8:506–512. 
Author and Disclosure Information

Nikhil Mull, MD
University of Pennsylvania, Philadelphia

James B. Reilly, MD, MS
Temple University, Pittsburgh, PA

Jennifer S. Myers, MD
University of Pennsylvania, Philadelphia

Cleveland Clinic Journal of Medicine 83(6):407–408

Dietary and medical management of recurrent nephrolithiasis

Nephrolithiasis is common and often recurs. This review focuses on measures to prevent recurrent stone formation. Some measures apply to all patients, and some apply to specific types of stones.

COMMON AND INCREASING

According to data from the 2007–2010 National Health and Nutrition Examination Survey, the prevalence of nephrolithiasis in the United States was 10.6% in men and 7.1% in women; overall, about 1 in 11 Americans will develop kidney stones at least once in their lifetime.1

By race and sex, white men have the highest incidence of nephrolithiasis and Asian women have the lowest. It is less common before age 20 and peaks in incidence in the third and fourth decades of life.

The prevalence has steadily increased in the past few decades (Table 1),1,2 but the reasons are not clear. The trend may reflect changes in diet and lifestyle, the increasing prevalence of obesity and diabetes, migration from rural to urban areas, and global warming, with higher temperatures resulting in dehydration and higher urinary concentrations of calcium and other stone-forming salts.3 Nephrolithiasis is now recognized as a systemic disorder associated with chronic kidney disease, bone loss and fractures, increased risk of coronary artery disease, hypertension, type 2 diabetes mellitus, and metabolic syndrome (Table 2).4–7

Without medical treatment, the 5-year recurrence rate is high, ranging from 35% to 50% after an initial stone event.8 Annual medical costs of care for kidney stones in the United States exceed $4.5 billion, with additional costs from missed work. Therefore, this condition has a considerable economic and social burden, which underscores the importance of prevention.9

MOST STONES CONTAIN CALCIUM

About 80% of kidney stones in adults contain calcium, and calcium oxalate stones are more common than calcium phosphate stones. Uric acid and struvite stones account for 5% to 15%, and cystine, protease inhibitor, triamterene, 2,8-dihydroxyadenine (2,8-DHA) and xanthine stones each account for less than 1%.10

Stones form when the urinary concentration of stone-forming salts, which is inversely proportional to urine volume, is higher than their saturation point, which is affected by urine pH. Acidic urine (low pH) predisposes to the formation of uric acid and cystine stones, whereas alkaline urine (high pH) favors calcium phosphate stones.
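
One schematic way to express this relation (a simplification for illustration, not a formula from the cited sources) is that, for a given salt, the urinary concentration equals the daily excretion divided by the urine volume, and crystallization becomes possible once that concentration exceeds the pH-dependent solubility:

\[
C_{\text{salt}} = \frac{\text{daily excretion (mg/day)}}{\text{urine volume (L/day)}}, \qquad \text{risk of crystallization when } C_{\text{salt}} > C_{\text{solubility}}(\text{pH}).
\]

Every preventive measure discussed below works on one of three levers: lowering excretion of the stone-forming salt, increasing urine volume, or shifting urine pH (and inhibitors such as citrate) so that more of the salt stays in solution.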

INCREASED FLUID INTAKE FOR ALL

High fluid intake, enough to produce at least 2.5 L of urine per day, should be the initial therapy to prevent stone recurrence.11

Borghi et al12 randomly assigned 199 patients who had a first calcium stone to high oral fluid intake or no intervention and followed them prospectively for 5 years. The recurrence rate was 12% in the treated group and 27% in the control group. Another study, in patients who had undergone shock wave lithotripsy, found a recurrence rate of 8% in those randomized to increase fluid intake to achieve urine output greater than 2.5 L/day, compared with 56% in those assigned to no treatment.13
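
As a rough gauge of the size of the benefit in the Borghi trial (a back-of-the-envelope calculation from the recurrence rates quoted above, not an analysis reported by the investigators):

\[
\text{ARR} = 27\% - 12\% = 15\%, \qquad \text{NNT} \approx \frac{1}{0.15} \approx 7.
\]

That is, on the order of 7 patients would need to maintain high fluid intake for 5 years to prevent one recurrent stone.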

Certain beverages increase the risk of stones and should be avoided. Sugar-sweetened noncola soda and punch are associated with a 33% higher risk of kidney stones, and cola sodas are associated with a 23% higher risk.14 Prospective studies have shown that the consumption of coffee, beer, wine, and orange juice is associated with a lower likelihood of stone formation.13,15

Table 3 is a brief summary of the dietary and pharmacologic interventions in the management of recurrent nephrolithiasis.

PREVENTING CALCIUM OXALATE STONES

Major urinary risk factors associated with calcium oxalate stones are hypercalciuria, hyperoxaluria, hyperuricosuria, hypocitraturia, and low urine volume.16 Preventing calcium stones therefore depends on reducing the urinary concentration of calcium and oxalate, increasing urinary levels of inhibitors such as citrate, and increasing urine volume.

 

 

Reducing calcium excretion

Hypercalciuria has traditionally been defined as 24-hour urinary calcium excretion greater than 300 mg/day in men, greater than 250 mg/day in women, or greater than 4 mg/kg in either sex.17 It is a graded risk factor, and the cut points used in published research and by clinical laboratories vary substantially; some institutions use the same value for both sexes, eg, greater than 200 mg/day.18

Excessive sodium intake is the most common cause of hypercalciuria. Systemic conditions such as primary hyperparathyroidism, sarcoidosis, and renal tubular acidosis also cause hypercalciuria but are uncommon.19 Management depends on the underlying cause and includes dietary modifications and pharmacologic therapy.

Dietary modifications have a pivotal role in the management of recurrent stones that are due to hypercalciuria.

Dietary calcium should not be restricted, since calcium reduces urinary oxalate excretion by decreasing intestinal absorption of oxalate. Guidelines from the American Urological Association recommend a daily calcium intake of 1,000 to 1,200 mg.11,20 Moreover, restriction of dietary calcium to less than 800 mg/day (the current recommended daily allowance for adults) can lead to negative calcium balance and bone loss.

Sodium intake also influences calcium excretion. Calcium is reabsorbed passively in the proximal tubule along the concentration gradient created by active sodium reabsorption. A high sodium intake causes volume expansion, which decreases proximal sodium and calcium reabsorption and thus enhances calcium excretion. A low-sodium diet (80–100 mmol/day, or 1,800–2,300 mg/day of sodium) is recommended; it enhances proximal sodium and passive calcium reabsorption and decreases calcium excretion.21

Dietary protein increases the acid load by production of sulfuric acid and leads to hypercalciuria by its action on bone and kidney. Animal protein has a higher content of sulfur and generates a higher acid load compared with vegetable protein and has been associated with an increased incidence of stone formation, at least in men.20,22 Borghi et al23 reported that the combination of restricted intake of animal protein (52 g/day), restricted salt intake (50 mmol, or 2,900 mg/day of sodium chloride), and normal calcium intake (30 mmol/day, or 1,200 mg/day) was associated with a lower incidence of stone recurrence in men with hypercalciuria compared with traditional low-calcium intake (10 mmol, or 400 mg/day). Patients should therefore be advised to avoid excessive intake of animal protein.
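
For readers who wish to check the parenthetical unit conversions above and in the low-sodium recommendation, the approximate molar masses involved (sodium about 23 mg/mmol, sodium chloride about 58.5 mg/mmol, elemental calcium about 40 mg/mmol) give:

\[
80\text{–}100~\text{mmol Na} \times 23 \approx 1{,}840\text{–}2{,}300~\text{mg Na}, \qquad 50~\text{mmol NaCl} \times 58.5 \approx 2{,}900~\text{mg NaCl}, \qquad 30~\text{mmol Ca} \times 40 = 1{,}200~\text{mg Ca}.
\]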

Increasing the dietary intake of fruits and vegetables, as in the Dietary Approaches to Stop Hypertension (DASH) diet, is beneficial and reduces the risk of stone recurrence, mainly by increasing citrate excretion.24

Pharmacologic therapy in hypercalciuria. Thiazide diuretics are the mainstay of pharmacotherapy for preventing recurrent stones in patients with idiopathic hypercalciuria. They reduce the risk of stone recurrence by about 50%, as reported in a recent meta-analysis that looked at five trials comparing thiazide diuretics with placebo.25 They lower calcium excretion by causing volume depletion, thereby increasing proximal sodium and passive calcium reabsorption.

Chlorthalidone and hydrochlorothiazide are the thiazides commonly used to treat hypercalciuria. The dosage is titrated to the urinary calcium excretion, and a common mistake is to use doses that are too low. They are usually started at 25 mg/day, but often require an increase to 50 to 100 mg/day for adequate lowering of urinary calcium.

Care should be taken to avoid hypokalemia. If it occurs, it can be corrected by adding the potassium-sparing diuretic amiloride (5–10 mg/day), which increases calcium reabsorption in the collecting ducts, or, in patients with hypocitraturia, by adding potassium citrate or potassium bicarbonate. (Sodium salts should be avoided, since they increase renal calcium excretion.)26

Management of hypercalciuria with metabolic causes, which include primary hyper­parathyroidism and chronic acidemia, depends on the cause. Patients who have hypercalciuria from primary hyperparathyroidism are treated with parathyroidectomy.27 Chronic metabolic acidosis causes hypercalciuria by loss of bone calcium and hypocitraturia by increasing active proximal reabsorption of citrate. Potassium citrate or potassium bicarbonate is used to prevent stones in such patients; sodium salts should be avoided.28

Reducing oxalate excretion

Hyperoxaluria has traditionally been defined as urinary oxalate excretion of more than 45 mg/day. However, the optimal cutoff point for urinary oxalate excretion is unclear, as is the optimal cutoff for hypercalciuria. The risk of stone formation has been shown to increase with oxalate excretion even at levels above 25 mg/day, which is within the normal range.18

Idiopathic hyperoxaluria. High dietary oxalate intake, especially when associated with low calcium intake, leads to idiopathic hyperoxaluria; the contribution of abnormal endogenous oxalate metabolism is uncertain. Ingested calcium binds to oxalate in the intestinal tract and reduces both intestinal oxalate absorption and urinary oxalate excretion.29 High dietary oxalate intake has usually been regarded as a major risk factor for kidney stones.

Taylor and Curhan,30 in a prospective study, reported a mild increase in the risk of stones in the highest quintile of dietary oxalate intake compared with the lowest quintile for men (relative risk [RR] 1.22, 95% confidence interval [CI] 1.03–1.45) and older women (RR 1.21, 95% CI 1.01–1.44). They also demonstrated that eating eight or more servings of spinach per month compared with fewer than one serving per month was associated with a similar increase in stone risk in men (RR 1.30, 95% CI 1.08–1.58) and older women (RR 1.34, 95% CI 1.1–1.64). In contrast, spinach and dietary oxalate intake did not increase the risk of nephrolithiasis in young women. The authors concluded that the risk associated with oxalate intake was modest, and their data did not support the contention that dietary oxalate is a major risk factor for kidney stones.

Higher oxalate intake increases urinary oxalate excretion and presumably the risk of nephrolithiasis. Limiting dietary oxalate to prevent stones is recommended if habitually high dietary intake of oxalate is identified or follow-up urine measurements show a decrease in oxalate excretion.31 Foods rich in oxalate include spinach, rhubarb, nuts, legumes, cocoa, okra, and chocolate.

The DASH diet, which is high in fruits and vegetables, moderate in low-fat dairy products, and low in animal protein, is an effective dietary alternative and has been associated with a lower risk of calcium oxalate stones.24 Consuming fruits and vegetables increases the excretion of urinary citrate, which is an inhibitor of stone formation. Also, it has been proposed that the DASH diet contains unknown factors that reduce stone risk.

Taylor et al32 prospectively examined the relationship between the DASH diet and the incidence of kidney stones and found that the diet significantly reduced the risk of kidney stones. The relative risks of occurrence of kidney stones in participants in the highest quintile of the DASH score (a measure of adherence to the DASH diet) compared with the lowest quintile were 0.55 (95% CI 0.46–0.65) for men, 0.58 (95% CI 0.49–0.68) for older women, and 0.60 (95% CI 0.52–0.70) for younger women, which the authors characterized as “a marked decrease in kidney stone risk.”

Vitamin C intake should be restricted to 90 mg/day in patients who have a history of calcium oxalate stones. Urivetzky et al33 found that urinary oxalate excretion increased by 6 to 13 mg/day at doses of ascorbic acid greater than 500 mg.

Pyridoxine (vitamin B6), a coenzyme of alanine-glyoxylate aminotransferase (AGT), increases the conversion of glyoxylate to glycine instead of oxalate and is used in the treatment of type 1 primary hyperoxaluria (see below).34 However, its effect in preventing stones in idiopathic hyperoxaluria is not well known, and it has not been studied in a randomized controlled trial. In a prospective study, Curhan et al35 reported that high intake of pyridoxine (> 40 mg/day) was associated with a lower risk of stone formation in women, but no such benefit was found in men.

Enteric hyperoxaluria. About 90% of dietary oxalate binds to calcium in the small intestine and is excreted in the stool; the remaining 10% is absorbed in the colon and excreted in the urine. Hyperoxaluria is frequently seen with fat malabsorption from inflammatory bowel disease, short-gut syndrome, and gastric bypass surgery. In these conditions, excess fat binds to dietary calcium, leaving more free oxalate to be absorbed in the colon.36

Treatment is directed at decreasing intestinal oxalate absorption and should include high fluid intake and oral calcium supplements. Calcium carbonate or calcium citrate precipitates oxalate in the intestinal lumen and is prescribed at 1 to 4 g/day in three to four divided doses, always taken with meals. Calcium citrate is preferred over calcium carbonate in stone-formers because of the benefit of citrate, its higher solubility, and its greater effectiveness in the presence of achlorhydria.37 Patients should be advised to avoid foods high in oxalate and fat.

Primary hyperoxaluria is caused by inherited inborn errors of glyoxylate metabolism that cause overproduction of oxalate and urinary oxalate excretion above 135 to 270 mg/day.

Type 1 primary hyperoxaluria is the most common (accounting for 90% of cases) and is caused by reduced activity of hepatic peroxisomal AGT.

Type 2 is from a deficiency of glyoxylate reductase-hydroxypyruvate reductase (GRHPR).

Type 3 is from mutations in the HOGA1 gene, which codes for the liver-specific mitochondrial 4-hydroxy-2-oxoglutarate aldolase enzyme involved in degradation of hydroxyproline to pyruvate and glyoxalate.38

High fluid intake to produce a urinary volume of 3 L/day reduces intratubular oxalate deposition and should be encouraged. Potassium citrate (0.15 g/kg), oral phosphate supplements (30–40 mg/kg of orthophosphate), and magnesium oxide (500 mg/day/m2) inhibit precipitation of calcium oxalate in the urine.39,40 Pyridoxine, a coenzyme of AGT, increases the conversion of glyoxylate to glycine instead of oxalate and is prescribed at a starting dose of 5 mg/kg (which can be titrated up to 20 mg/kg if there is no response) in patients with type 1 primary hyperoxaluria. About 50% of patients with type 1 respond to pyridoxine, and a 3- to 6-month trial should be given to all patients in this category.34 AGT is present only in hepatocytes, but GRHPR is found in multiple tissues; therefore, combined liver-kidney transplant is the treatment of choice in type 1 primary hyperoxaluria, whereas isolated kidney transplant is recommended in type 2.41

Reducing uric acid excretion

Hyperuricosuria is defined as uric acid excretion of greater than 800 mg/day in men and greater than 750 mg/day in women.

The association of hyperuricosuria with increased risk of calcium oxalate stone formation is controversial. Curhan and Taylor,18 in a cross-sectional study of 3,350 men and women, reported that there was no difference in mean 24-hour uric acid excretion in individuals with and without a history of stones.

The mechanism by which uric acid leads to calcium oxalate stones is not completely known and could be the “salting out” of calcium oxalate from the urine.42

Dietary purine restriction, ie, limiting intake of nondairy animal protein to 0.8 to 1 g/kg/day, is the initial dietary intervention.11 Allopurinol is the alternative approach if the patient is not compliant or if dietary restriction fails.43

In a study by Ettinger et al,44 60 patients with hyperuricosuria and normocalciuria were randomized to receive allopurinol (100 mg three times daily) or a placebo. The allopurinol group had a rate of calculus events of 0.12 per patient per year, compared with 0.26 in the placebo group.
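
Expressed as a relative reduction (derived here from the reported event rates; the figure is ours, not the trial’s):

\[
\frac{0.26 - 0.12}{0.26} \approx 0.54,
\]

ie, roughly half as many calculus events per patient per year in the allopurinol group.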

 

 

Increasing citrate excretion

Hypocitraturia is a well-known risk factor for the formation of kidney stones. It is usually defined as a citrate excretion of less than 320 mg/day for adults.

Citrate prevents formation of calcium crystals by binding to calcium, thereby lowering the concentration of calcium oxalate below the saturation point.45

Diet therapy. Patients with calcium oxalate stones and hypocitraturia should be encouraged to increase their intake of fruits and vegetables, which enhances urinary citrate excretion, and to limit their intake of nondairy animal protein.11

The use of citrus products in preventing stones in patients with hypocitraturia is controversial, however, and needs to be studied more.

One study46 demonstrated that lemon juice was beneficial in hypocitraturic nephrolithiasis: 4 oz/day of lemon juice concentrate in the form of lemonade was associated with an increase in urinary citrate excretion to 346 mg/day from 142 mg/day in 11 of 12 patients who participated.

Odvina47 compared the effects of orange juice with those of lemonade on the acid-base profile and urinary stone risk under controlled metabolic conditions in 13 volunteers. Orange juice was reported to have greater alkalinizing and citraturic effects and was associated with lower calculated calcium oxalate supersaturation compared with lemonade.

Lemonade therapy may be used as adjunctive treatment in patients who do not comply with or cannot tolerate alkali therapy. However, we advise caution about recommending citrus products, as they can increase oxalate excretion.

Pharmacotherapy includes alkali therapy. Barcelo et al48 compared the effects of potassium citrate and placebo in 57 patients with calcium oxalate stones and hypocitraturia. Patients treated with potassium citrate had a rate of stone formation of 0.1 event per patient per year, compared with 1.1 in the placebo group.

Many forms of alkaline citrate are available. Potassium citrate is preferred over sodium citrate, since the latter may increase urinary calcium excretion.49 Treatment is usually started at 30 mEq/day and titrated to a maximum of 60 mEq/day, with a goal urinary citrate excretion greater than 500 mg/day.

Common side effects are abdominal bloating and hyperkalemia (especially with renal insufficiency); in such cases, a sodium-based alkali such as sodium citrate or sodium bicarbonate can be prescribed.

PREVENTING CALCIUM PHOSPHATE STONES

Risk factors for calcium phosphate stones are similar to those for calcium oxalate stones (other than hyperoxaluria), but calcium phosphate stones are formed in alkaline urine (usually urine pH > 6.0), often the result of distal renal tubular acidosis. Preventive measures are similar to those for calcium oxalate stones.

Alkali therapy should be used with caution because of its effect on urinary pH and the risk of precipitation of calcium phosphate crystals.50 Use of potassium citrate was found to be associated with increases in both urinary citrate excretion and calcium phosphate supersaturation in hypercalciuric stone-forming rats.51 It is therefore challenging to manage patients with calcium phosphate stones and hypocitraturia. Alkali administration in this setting may diminish the formation of new stones by correcting hypocitraturia, but at the same time it may increase the likelihood of calcium phosphate stone formation by increasing the urinary pH. When the urine pH increases to above 6.5 with no significant change in urine citrate or urine calcium excretion, we recommend stopping alkali therapy.

PREVENTING URIC ACID STONES

Clinical conditions associated with uric acid stones include metabolic syndrome, diabetes mellitus, gout, chronic diarrheal illness, and conditions that increase tissue turnover and uric acid production, such as malignancies. Other risk factors for uric acid stone formation are low urine volume, low urine pH, and hyperuricosuria.

Abnormally acidic urine is the most common risk factor. Metabolic syndrome and diabetes mellitus reduce renal ammonia production, resulting in a lower urinary pH, which predisposes to uric acid stone formation. Chronic diarrhea also acidifies the urine through loss of bicarbonate. Similarly, in gout, the predisposing factor is persistently acidic urine due to impaired ammonium excretion.52 At a low urinary pH, uric acid precipitates to form stones even with normal excretion rates of 600 to 800 mg/day and a urinary volume of 1 to 1.5 L.53
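
Simple arithmetic with the figures above shows why urine volume matters even when excretion is normal (no specific solubility threshold is assumed here, since solubility depends strongly on pH):

\[
\frac{600\text{–}800~\text{mg/day}}{1\text{–}1.5~\text{L/day}} \approx 400\text{–}800~\text{mg/L}, \qquad \text{versus} \qquad \frac{600\text{–}800~\text{mg/day}}{2.5\text{–}3~\text{L/day}} \approx 200\text{–}320~\text{mg/L}.
\]

Scanty, acidic urine therefore concentrates even a normal uric acid load severalfold compared with the urine volumes targeted for prevention, which is why both dilution and raising the urine pH are emphasized below.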

Therefore, apart from increasing fluid intake, urinary alkalization is the cornerstone of management of uric acid stones. Potassium citrate is the preferred alkali salt and is started at a dose of 30 mEq/day for a goal urinary pH of 6 to 6.5.47

Patients with hyperuricosuria are also advised to restrict their protein intake to no more than 0.8 to 1 g/kg/day.

If the above measures fail, patients are treated with a xanthine oxidase inhibitor, ie, allopurinol or febuxostat, even if their uric acid excretion is normal.54

PREVENTING STRUVITE STONES

Struvite stones contain magnesium ammonium phosphate and are due to chronic upper urinary tract infection with urea-splitting bacteria such as Proteus, Klebsiella, Pseudomonas, and enterococci. Urea hydrolysis releases hydroxyl ions, resulting in alkaline urine that promotes struvite stone formation. Early detection and treatment are important, since struvite stones are associated with morbidity and rapid progression.

Medical treatment of struvite stones is usually unsuccessful, and the patient is referred to a urologist for surgical removal of the stones, the gold standard treatment.55 Long-term use of culture-specific antibiotics to prevent new stone growth is not well studied. Medical therapy by itself is preferred in patients who refuse stone removal or cannot tolerate it. Urease inhibitors such as acetohydroxamic acid have been successful in preventing or slowing stone growth, but their use is limited by frequent side effects such as nausea, headache, rash, and thrombophlebitis.56

CYSTINE STONES

Cystine stones occur in people with inherited defects of renal tubular and intestinal transport of cystine and dibasic amino acids that cause excessive urinary cystine excretion, ie, 480 to 3,600 mg/day.

Cystine is formed from two cysteine molecules linked by a disulfide bond. The solubility of cystine is pH-dependent, increasing at higher urinary pH. The goal is to maintain the urinary cystine concentration below its solubility limit by keeping the concentration below 243 mg/L and the urine cystine supersaturation (the ratio of the urine cystine concentration to the cystine solubility in the same sample) less than 0.6.57 Therapy is aimed at increasing daily urinary volume to 3 L and alkalizing the urine to a pH above 7, in order to increase cystine solubility by 300%.58
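
As a worked example using the thresholds above, consider a hypothetical patient excreting 750 mg of cystine per day (a value chosen from within the quoted 480–3,600 mg/day range) who achieves a urine volume of 3 L/day, and assume that alkalization to a pH above 7 roughly triples cystine solubility (one reading of the 300% figure above):

\[
C = \frac{750~\text{mg/day}}{3~\text{L/day}} = 250~\text{mg/L}, \qquad \text{SS} = \frac{250}{243} \approx 1.0 \quad\longrightarrow\quad \text{SS} \approx \frac{250}{3 \times 243} \approx 0.34 < 0.6.
\]

In this example, fluid intake alone barely reaches the solubility limit; it is the combination with urinary alkalization (and, if needed, a thiol drug) that brings the supersaturation into the target range.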

Overnight dehydration should be prevented, and patients should be encouraged to wake up at least once a night to void and drink additional water. Sodium restriction to 100 mmol/day (2,300 mg/day) and moderate protein restriction to 0.8 to 1 g/kg/day are associated with decreased cystine excretion, but long-term studies demonstrating their benefit in preventing cystine stones are lacking.59

A thiol-containing drug, eg, D-penicillamine (0.5–2 g/day) or tiopronin (400–1,200 mg/day), should be added if conservative measures have not been effective after 3 months or if there is a history of noncompliance.60 Thiol-containing drugs have a sulfhydryl group that reduces the disulfide bond, forming soluble cysteine-drug disulfide complexes that are better able to keep cystine in solution in alkaline urine. They must always be used in conjunction with fluid and alkali therapy.61

Both drugs have common and potentially severe adverse effects, including leukopenia, aplastic anemia, fever, rash, arthritis, hepatotoxicity, pyridoxine deficiency, and proteinuria (membranous nephropathy), although tiopronin appears to have a lower incidence of side effects.62 Complete blood cell counts, liver enzymes, and urine protein should be monitored regularly.

Captopril contains a sulfhydryl group, and the captopril-cysteine disulfide is more soluble than cystine alone. However, little captopril appears in the urine, and doses of 150 mg/day are usually required to reduce cystine excretion, which can lead to hypotension. The efficacy of captopril in treating cystine stones is unproven, and it is used only if patients cannot tolerate other thiol-containing drugs.63

References
  1. Scales CD Jr, Smith AC, Hanley JM, Saigal CS; Urologic Diseases in America Project. Prevalence of kidney stones in the United States. Eur Urol 2012; 62:160–165.
  2. Stamatelou KK, Francis ME, Jones CA, Nyberg LM Jr, Curhan GC. Time trends in reported prevalence of kidney stones in the United States: 1976–1994. Kidney Int 2003; 63:1817–1823.
  3. Romero V, Akpinar H, Assimos DG. Kidney stones: a global picture of prevalence, incidence, and associated risk factors. Rev Urol 2010; 12:e86–e96.
  4. Sakhaee K, Maalouf NM, Kumar R, Pasch A, Moe OW. Nephrolithiasis-associated bone disease: pathogenesis and treatment options. Kidney Int 2011; 79:393–403.
  5. Sakhaee K. Nephrolithiasis as a systemic disorder. Curr Opin Nephrol Hypertens 2008; 17:304–309.
  6. Hamano S, Nakatsu H, Suzuki N, Tomioka S, Tanaka M, Murakami S. Kidney stone disease and risk factors for coronary heart disease. Int J Urol 2005; 12:859–863.
  7. Ritz E. Metabolic syndrome: an emerging threat to renal function. Clin J Am Soc Nephrol 2007; 2:869–871.
  8. Uribarri J, Oh MS, Carroll HJ. The first kidney stone. Ann Intern Med 1989; 111:1006–1009.
  9. Saigal CS, Joyce G, Timilsina AR; Urologic Diseases in America Project. Direct and indirect costs of nephrolithiasis in an employed population: opportunity for disease management? Kidney Int 2005; 68:1808–1814.
  10. Moe OW. Kidney stones: pathophysiology and medical management. Lancet 2006; 367:333–344.
  11. Pearle MS, Goldfarb DS, Assimos DG, et al; American Urological Assocation. Medical management of kidney stones: AUA guideline. J Urol 2014; 192:316–324.
  12. Borghi L, Meschi T, Amato F, Briganti A, Novarini A, Giannini A. Urinary volume, water and recurrences in idiopathic calcium nephrolithiasis: a 5-year randomized prospective study. J Urol 1996; 155:839–843.
  13. Sarica K, Inal Y, Erturhan S, Yagci F. The effect of calcium channel blockers on stone regrowth and recurrence after shock wave lithotripsy. Urol Res 2006; 34:184–189.
  14. Ferraro PM, Taylor EN, Gambaro G, Curhan GC. Soda and other beverages and the risk of kidney stones. Clin J Am Soc Nephrol 2013; 8:1389–1395.
  15. Curhan GC, Willett WC, Speizer FE, Stampfer MJ. Beverage use and risk for kidney stones in women. Ann Intern Med 1998; 128:534–540.
  16. Pak CY, Britton F, Peterson R, et al. Ambulatory evaluation of nephrolithiasis. Classification, clinical presentation and diagnostic criteria. Am J Med 1980; 69:19–30.
  17. Hall PM. Nephrolithiasis: treatment, causes, and prevention. Cleve Clin J Med 2009; 76:583–591.
  18. Curhan GC, Taylor EN. 24-h uric acid excretion and the risk of kidney stones. Kidney Int 2008; 73:489–496.
  19. Coe FL, Evan A, Worcester E. Kidney stone disease. J Clin Invest 2005; 115:2598–2608.
  20. Curhan GC, Willett WC, Rimm EB, Stampfer MJ. A prospective study of dietary calcium and other nutrients and the risk of symptomatic kidney stones. N Engl J Med 1993; 328:833–838.
  21. Muldowney FP, Freaney R, Moloney MF. Importance of dietary sodium in the hypercalciuria syndrome. Kidney Int 1982; 22:292–296.
  22. Breslau NA, Brinkley L, Hill KD, Pak CY. Relationship of animal protein-rich diet to kidney stone formation and calcium metabolism. J Clin Endocrinol Metab 1988; 66:140–146.
  23. Borghi L, Schianchi T, Meschi T, et al. Comparison of two diets for the prevention of recurrent stones in idiopathic hypercalciuria. N Engl J Med 2002; 346:77–84.
  24. Noori N, Honarkar E, Goldfarb DS, et al. Urinary lithogenic risk profile in recurrent stone formers with hyperoxaluria: a randomized controlled trial comparing DASH (Dietary Approaches to Stop Hypertension)-style and low-oxalate diets. Am J Kidney Dis 2014; 63:456–463.
  25. Fink HA, Wilt TJ, Eidman KE, et al. Medical management to prevent recurrent nephrolithiasis in adults: a systematic review for an American College of Physicians Clinical Guideline. Ann Intern Med 2013; 158:535–543.
  26. Alon U, Costanzo LS, Chan JC. Additive hypocalciuric effects of amiloride and hydrochlorothiazide in patients treated with calcitriol. Miner Electrolyte Metab 1984; 10:379–386.
  27. Corbetta S, Baccarelli A, Aroldi A, et al. Risk factors associated to kidney stones in primary hyperparathyroidism. J Endocrinol Invest 2005; 28:122–128.
  28. Haymann JP. Metabolic disorders: stones as first clinical manifestation of significant diseases. World J Urol 2015; 33:187–192.
  29. Jaeger P, Portmann L, Jacquet AF, Burckhardt P. Influence of the calcium content of the diet on the incidence of mild hyperoxaluria in idiopathic renal stone formers. Am J Nephrol 1985; 5:40–44.
  30. Taylor EN, Curhan GC. Oxalate intake and the risk for nephrolithiasis. J Am Soc Nephrol 2007; 18:2198–2204.
  31. Lieske JC, Tremaine WJ, De Simone C, et al. Diet, but not oral probiotics, effectively reduces urinary oxalate excretion and calcium oxalate supersaturation. Kidney Int 2010; 78:1178–1185.
  32. Taylor EN, Fung TT, Curhan GC. DASH-style diet associates with reduced risk for kidney stones. J Am Soc Nephrol 2009; 20:2253–2259.
  33. Urivetzky M, Kessaris D, Smith AD. Ascorbic acid overdosing: a risk factor for calcium oxalate nephrolithiasis. J Urol 1992; 147:1215–1218.
  34. Hoyer-Kuhn H, Kohbrok S, Volland R, et al. Vitamin B6 in primary hyperoxaluria I: first prospective trial after 40 years of practice. Clin J Am Soc Nephrol 2014; 9:468–477.
  35. Curhan GC, Willett WC, Speizer FE, Stampfer MJ. Intake of vitamins B6 and C and the risk of kidney stones in women. J Am Soc Nephrol 1999; 10:840–845.
  36. Parks JH, Worcester EM, O'Connor RC, Coe FL. Urine stone risk factors in nephrolithiasis patients with and without bowel disease. Kidney Int 2003; 63:255–265.
  37. Hess B, Jost C, Zipperle L, Takkinen R, Jaeger P. High-calcium intake abolishes hyperoxaluria and reduces urinary crystallization during a 20-fold normal oxalate load in humans. Nephrol Dial Transplant 1998; 13:2241–2247.
  38. Hoppe B, Beck BB, Milliner DS. The primary hyperoxalurias. Kidney Int 2009; 75:1264–1271.
  39. Cochat P, Hulton SA, Acquaviva C, et al; OxalEurope. Primary hyperoxaluria type 1: indications for screening and guidance for diagnosis and treatment. Nephrol Dial Transplant 2012; 27:1729–1736.
  40. Leumann E, Hoppe B, Neuhaus T. Management of primary hyperoxaluria: efficacy of oral citrate administration. Pediatr Nephrol 1993; 7:207–211.
  41. Bergstralh EJ, Monico CG, Lieske JC, et al; IPHR Investigators. Transplantation outcomes in primary hyperoxaluria. Am J Transplant 2010; 10:2493–2501.
  42. Grover PK, Marshall VR, Ryall RL. Dissolved urate salts out calcium oxalate in undiluted human urine in vitro: implications for calcium oxalate stone genesis. Chem Biol 2003; 10:271–278.
  43. Coe FL, Parks JH. Hyperuricosuria and calcium nephrolithiasis. Urol Clin North Am 1981; 8:227–244.
  44. Ettinger B, Tang A, Citron JT, Livermore B, Williams T. Randomized trial of allopurinol in the prevention of calcium oxalate calculi. N Engl J Med 1986; 315:1386–1389.
  45. Zuckerman JM, Assimos DG. Hypocitraturia: pathophysiology and medical management. Rev Urol 2009; 11:134–144.
  46. Seltzer MA, Low RK, McDonald M, Shami GS, Stoller ML. Dietary manipulation with lemonade to treat hypocitraturic calcium nephrolithiasis. J Urol 1996; 156:907–909.
  47. Odvina CV. Comparative value of orange juice versus lemonade in reducing stone-forming risk. Clin J Am Soc Nephrol 2006; 1:1269–1274.
  48. Barcelo P, Wuhl O, Servitge E, Rousaud A, Pak CY. Randomized double-blind study of potassium citrate in idiopathic hypocitraturic calcium nephrolithiasis. J Urol 1993; 150:1761–1764.
  49. Lemann J Jr, Gray RW, Pleuss JA. Potassium bicarbonate, but not sodium bicarbonate, reduces urinary calcium excretion and improves calcium balance in healthy men. Kidney Int 1989; 35:688–695.
  50. Gault MH, Chafe LL, Morgan JM, et al. Comparison of patients with idiopathic calcium phosphate and calcium oxalate stones. Medicine (Baltimore) 1991; 70:345–359.
  51. Krieger NS, Asplin JR, Frick KK, et al. Effect of potassium citrate on calcium phosphate stones in a model of hypercalciuria. J Am Soc Nephrol 2015; 26:3001–3008.
  52. Falls WF Jr. Comparison of urinary acidification and ammonium excretion in normal and gouty subjects. Metabolism 1972; 21:433–445.
  53. Coe FL, Parks JH, Asplin JR. The pathogenesis and treatment of kidney stones. N Engl J Med 1992; 327:1141–1152.
  54. Kenny JE, Goldfarb DS. Update on the pathophysiology and management of uric acid renal stones. Curr Rheumatol Rep 2010; 12:125–129.
  55. Preminger GM, Assimos DG, Lingeman JE, Nakada SY, Pearle MS, Wolf JS Jr (AUA Nephrolithiasis Guideline Panel). Chapter 1: AUA guideline on management of staghorn calculi: diagnosis and treatment recommendations. J Urol 2005; 173:1991–2000.
  56. Williams JJ, Rodman JS, Peterson CM. A randomized double-blind study of acetohydroxamic acid in struvite nephrolithiasis. N Engl J Med 1984; 311:760–764.
  57. Nakagawa Y, Asplin JR, Goldfarb DS, Parks JH, Coe FL. Clinical use of cystine supersaturation measurements. J Urol 2000; 164:1481–1485.
  58. Palacín M, Goodyer P, Nunes V, Gasparini P. Cystinuria. In: Scriver CR, editor. The Metabolic and Molecular Bases of Inherited Disease. New York, NY: McGraw-Hill; 2001:4909–4932.
  59. Goldfarb DS, Coe FL, Asplin JR. Urinary cystine excretion and capacity in patients with cystinuria. Kidney Int 2006; 69:1041–1047.
  60. Barbey F, Joly D, Rieu P, Mejean A, Daudon M, Jungers P. Medical treatment of cystinuria: critical reappraisal of long-term results. J Urol 2000; 163:1419–1423.
  61. Asplin DM, Asplin JR. The interaction of thiol drugs and urine pH in the treatment of cystinuria. J Urol 2013; 189:2147–2151.
  62. Habib GS, Saliba W, Nashashibi M, Armali Z. Penicillamine and nephrotic syndrome. Eur J Intern Med 2006; 17:343–348.
  63. Sloand JA, Izzo JL Jr. Captopril reduces urinary cystine excretion in cystinuria. Arch Intern Med 1987; 147:1409–1412.
Author and Disclosure Information

Silvi Shah, MD
Department of Nephrology, University of Alabama at Birmingham

Juan Camilo Calle, MD
Department of Nephrology and Hypertension, Glickman Urological & Kidney Institute, Cleveland Clinic

Address: Juan Camilo Calle, MD, Department of Nephrology and Hypertension, Glickman Urological and Kidney Institute, Q7, Cleveland Clinic, 9500 Euclid Avenue, Cleveland, OH 44195; [email protected]

Cleveland Clinic Journal of Medicine 83(6):463–471

Enteric hyperoxaluria. About 90% of dietary oxalate binds to calcium in the small intestine and is excreted in the stool. The remaining 10% is absorbed in the colon and is secreted in urine. Hyperoxaluria is frequently seen with fat malabsorption from inflammatory bowel disease, short gut syndrome, and gastric bypass surgery. In these conditions, excess fat binds to dietary calcium, leading to increased absorption of free oxalate in the colon.36

Treatment is directed at decreasing intestinal oxalate absorption and should include high fluid intake and oral calcium supplements. Calcium carbonate or citrate causes precipitation of oxalate in the intestinal lumen and is prescribed as 1 to 4 g in three to four divided doses, always with meals. Calcium citrate is preferred over calcium carbonate in stone-formers because of the benefit of citrate and calcium citrate’s higher solubility and greater effectiveness in the presence of achlorhydria.37 Patients should be advised to avoid foods high in oxalate and fat.

Primary hyperoxaluria is caused by inherited inborn errors of glyoxylate metabolism that cause overproduction of oxalate and urinary oxalate excretion above 135 to 270 mg/day.

Type 1 primary hyperoxaluria is the most common (accounting for 90% of cases) and is caused by reduced activity of hepatic peroxisomal AGT.

Type 2 is from a deficiency of glyoxylate reductase-hydroxypyruvate reductase (GRHPR).

Type 3 is from mutations in the HOGA1 gene, which codes for the liver-specific mitochondrial 4-hydroxy-2-oxoglutarate aldolase enzyme involved in degradation of hydroxyproline to pyruvate and glyoxalate.38

High fluid intake to produce a urinary volume of 3 L/day reduces intratubular oxalate deposition and should be encouraged. Potassium citrate (0.15 mg/kg), oral phosphate supplements (30–40 mg/kg of orthophosphate), and magnesium oxide (500 mg/day/m2) inhibit precipitation of calcium oxalate in the urine.39,40 Pyridoxine, a coenzyme of AGT, increases the conversion of glyoxylate to glycine instead of oxalate and is prescribed at a starting dose of 5 mg/kg (which can be titrated up to 20 mg/kg if there is no response) in patients with type 1 primary hyperoxaluria. About 50% of patients with type 1 respond successfully to pyridoxine, and a 3- to 6-month trial should be given in all patients in this category.34 AGT is present only in hepatocytes, and GRHPR is found in multiple tissues; therefore, combined liver-kidney transplant is the treatment of choice in patients with type 1 primary hyperoxaluria, whereas isolated kidney transplant is recommended in patients with type 2.41

Reducing uric acid excretion

Hyperuricosuria is defined as uric acid excretion of greater than 800 mg/day in men and greater than 750 mg/day in women.

The association of hyperuricosuria with increased risk of calcium oxalate stone formation is controversial. Curhan and Taylor,18 in a cross-sectional study of 3,350 men and women, reported that there was no difference in mean 24-hour uric acid excretion in individuals with and without a history of stones.

The mechanism by which uric acid leads to calcium oxalate stones is not completely known and could be the “salting out” of calcium oxalate from the urine.42

Dietary purine restriction, ie, limiting intake of nondairy animal protein to 0.8 to 1 g/kg/day, is the initial dietary intervention.11 Allopurinol is the alternative approach if the patient is not compliant or if dietary restriction fails.43

In a study by Ettinger et al,44 60 patients with hyperuricosuria and normocalciuria were randomized to receive allopurinol (100 mg three times daily) or a placebo. The allopurinol group had a rate of calculus events of 0.12 per patient per year, compared with 0.26 in the placebo group.

 

 

Increasing citrate excretion

Hypocitraturia is a well-known risk factor for the formation of kidney stones. It is usually defined as a citrate excretion of less than 320 mg/day for adults.

Citrate prevents formation of calcium crystals by binding to calcium, thereby lowering the concentration of calcium oxalate below the saturation point.45

Diet therapy. Patients with calcium oxalate stones and hypocitraturia should be encouraged to increase their intake of fruits and vegetables, which enhances urinary citrate excretion, and to limit their intake of nondairy animal protein.11

The use of citrus products in preventing stones in patients with hypocitraturia is controversial, however, and needs to be studied more.

One study46 demonstrated that lemon juice was beneficial in hypocitraturic nephrolithiasis: 4 oz/day of lemon juice concentrate in the form of lemonade was associated with an increase in urinary citrate excretion to 346 mg/day from 142 mg/day in 11 of 12 patients who participated.

Odvina47 compared the effects of orange juice with those of lemonade on the acid-base profile and urinary stone risk under controlled metabolic conditions in 13 volunteers. Orange juice was reported to have greater alkalinizing and citraturic effects and was associated with lower calculated calcium oxalate supersaturation compared with lemonade.

Lemonade therapy may be used as adjunctive treatment in patients who do not comply with or cannot tolerate alkali therapy. However, we advise caution about recommending citrus products, as they can increase oxalate excretion.

Pharmacotherapy includes alkali therapy. Barcelo et al48 compared the effects of potassium citrate and placebo in 57 patients with calcium oxalate stones and hypocitraturia. Patients treated with potassium citrate had a rate of stone formation of 0.1 event per patient per year, compared with 1.1 in the placebo group.

Many forms of alkaline citrate are available. Potassium citrate is preferred over sodium citrate since the latter may increase urine calcium excretion.49 Treatment is usually started at 30 mEq/day and is titrated to a maximal dose of 60 mEq/day for a urinary citrate excretion greater than 500 mg/day.

Common side effects are abdominal bloating and hyperkalemia (especially with renal insufficiency), and in such cases sodium-based alkali, sodium citrate, or sodium bicarbonate can be prescribed.

PREVENTING CALCIUM PHOSPHATE STONES

Risk factors for calcium phosphate stones are similar to those for calcium oxalate stones (other than hyperoxaluria), but calcium phosphate stones are formed in alkaline urine (usually urine pH > 6.0), often the result of distal renal tubular acidosis. Preventive measures are similar to those for calcium oxalate stones.

Alkali therapy should be used with caution because of its effect on urinary pH and the risk of precipitation of calcium phosphate crystals.50 Use of potassium citrate was found to be associated with increases in both urinary citrate excretion and calcium phosphate supersaturation in hypercalciuric stone-forming rats.51 It is therefore challenging to manage patients with calcium phosphate stones and hypocitraturia. Alkali administration in this setting may diminish the formation of new stones by correcting hypocitraturia, but at the same time it may increase the likelihood of calcium phosphate stone formation by increasing the urinary pH. When the urine pH increases to above 6.5 with no significant change in urine citrate or urine calcium excretion, we recommend stopping alkali therapy.

PREVENTING URIC ACID STONES

Clinical conditions associated with uric acid stones include metabolic syndrome, diabetes mellitus, gout, chronic diarrheal illness, and conditions that increase tissue turnover and uric acid production, such as malignancies. Other risk factors for uric acid stone formation are low urine volume, low uric pH, and hyperuricosuria.

Abnormally acidic urine is the most common risk factor. Metabolic syndrome and diabetes mellitus reduce ammonia production, resulting in a lower urinary pH, which predisposes to uric acid stone formation. Chronic diarrhea also acidifies the urine by loss of bicarbonate. Similarly, in gout, the predisposing factor in uric acid stone formation is the persistently acidic urine due to impaired ammonium excretion.52 Uric acid precipitates to form uric acid stones in a low urinary pH even with normal excretion rates of 600 to 800 mg/day and a urinary volume of 1 to 1.5 L.53

Therefore, apart from increasing fluid intake, urinary alkalization is the cornerstone of management of uric acid stones. Potassium citrate is the preferred alkali salt and is started at a dose of 30 mEq/day for a goal urinary pH of 6 to 6.5.47

Patients with hyperuricosuria are also advised to restrict their protein intake to no more than 0.8 to 1 mg/kg/day.

If the above measures fail, patients are treated with a xanthine oxidase inhibitor, ie, allopurinol or febuxostat, even if their uric acid excretion is normal.54

PREVENTING STRUVITE STONES

Struvite stones contain magnesium ammonium phosphate and are due to chronic upper urinary tract infection with urea-splitting bacteria such as Proteus, Klebsiella, Pseudomonas, and enterococci. Urea hydrolysis releases hydroxyl ions, resulting in alkaline urine that promotes struvite stone formation. Early detection and treatment are important, since struvite stones are associated with morbidity and rapid progression.

Medical treatment of struvite stones is usually unsuccessful, and the patient is referred to a urologist for surgical removal of the stones, the gold standard treatment.55 Long-term use of culture-specific antibiotics to prevent new stone growth is not well studied. Medical therapy by itself is preferred in patients who refuse stone removal or cannot tolerate it. Urease inhibitors such as acetohydroxamic acid have been successful in preventing or slowing stone growth, but their use is limited by frequent side effects such as nausea, headache, rash, and thrombophlebitis.56

CYSTINE STONES

Cystine stones occur in people with inherited defects of renal tubular and intestinal transport of cysteine and dibasic amino acids that cause excessive excretion of urinary cystine, ie, 480 to 3,600 mg/day.

Cystine is formed from two cysteine molecules linked by a disulfide bond. The solubility of cystine is pH-dependent, with increased solubility at higher urinary pH. The goal is to maintain a urinary cystine concentration below its solubility level by keeping the cystine concentration below 243 mg/L and the urine cystine supersaturation (the ratio of the urine cysteine concentration to the cysteine solubility in the same sample) less than 0.6.57 Therapy is aimed at increasing daily urinary volume to 3 L and urine alkalization to pH above 7, in order to increase cystine solubility by 300%.58

Overnight dehydration should be prevented, and patients should be encouraged to wake up at least once a night to void and drink additional water. Sodium restriction to 100 mmol/day (2,300 mg/day) and moderate protein restriction to 0.8 to 1 g/kg/day are associated with decreased cystine excretion, but long-term studies demonstrating their benefit in preventing cystine stones are lacking.59

A thiol-containing drug, eg, D-penicillamine (0.5–2 g/day) or tiopronin (400–1,200 mg/day), should be added to the conservative measures if they have not been effective for 3 months or if there is history of noncompliance.60 Thiol-containing drugs have a sulfhydryl group that reduces the disulfide bond, and they form soluble disulfide cysteine-drug complexes with greater ability to solubilize cystine in alkaline urine. They must always be used in conjunction with fluid and alkali therapy.61

Both drugs have severe and common adverse effects including leukopenia, aplastic anemia, fever, rash, arthritis, hepatotoxicity, pyridoxine deficiency, and proteinuria (membranous nephropathy). However, tiopronin seems to have a lesser incidence of side effects.62 Regular monitoring of complete blood cell counts, liver enzymes, and urine protein should be done.

Captopril contains a sulfhydryl group, and the captopril-cysteine disulfide is more soluble than cysteine alone. The amount of captopril that appears in the urine is low, and doses of 150 mg/day are usually required to reduce cysteine excretion, which can lead to hypotension. The efficacy of captopril in treating cystine stones is unproven, and this drug is used only if patients cannot tolerate other thiol-containing drugs.63

Nephrolithiasis is common and often recurs. This review focuses on measures to prevent recurrent stone formation. Some measures apply to all patients, and some apply to specific types of stones.

COMMON AND INCREASING

According to data from the 2007–2010 National Health and Nutrition Examination Survey, the prevalence of nephrolithiasis in the United States was 10.6% in men and 7.1% in women. On average, 1 in 11 Americans will develop kidney stones at least once in their lifetime.1

By race and sex, white men have the highest incidence of nephrolithiasis and Asian women have the lowest. It is less common before age 20 and peaks in incidence in the third and fourth decades of life.

The prevalence has steadily increased in the past few decades (Table 1),1,2 but the reasons are not clear. The trend may be due to changes in diet and lifestyle, increasing prevalence of obesity and diabetes, migration from rural to urban areas, and global warming, with higher temperature resulting in dehydration and high urinary concentration of calcium and other stone-forming salts.3 Nephrolithiasis is now recognized as a systemic disorder associated with chronic kidney disease, bone loss and fractures, increased risk of coronary artery disease, hypertension, type 2 diabetes mellitus, and metabolic syndrome (Table 2).4–7

Without medical treatment, the 5-year recurrence rate is high, ranging from 35% to 50% after an initial stone event.8 Annual medical costs of care for kidney stones in the United States exceed $4.5 billion, with additional costs from missed work. Therefore, this condition has a considerable economic and social burden, which underscores the importance of prevention.9

MOST STONES CONTAIN CALCIUM

About 80% of kidney stones in adults contain calcium, and calcium oxalate stones are more common than calcium phosphate stones. Uric acid and struvite stones account for 5% to 15%, and cystine, protease inhibitor, triamterene, 2,8-dihydroxyadenine (2,8-DHA) and xanthine stones each account for less than 1%.10

Stones form when the urinary concentration of stone-forming salts, which is inversely proportional to urine volume, is higher than their saturation point, which is affected by urine pH. Acidic urine (low pH) predisposes to the formation of uric acid and cystine stones, whereas alkaline urine (high pH) favors calcium phosphate stones.
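
As a purely illustrative sketch of this relation (the function name and the example figures are hypothetical, not clinical values), the concentration of any urinary solute can be computed from its 24-hour excretion and the urine volume:

    # Illustrative only: urinary solute concentration falls as urine volume rises.
    def urinary_concentration(excretion_mg_per_day: float, urine_volume_l_per_day: float) -> float:
        """Return the 24-hour solute concentration in mg/L."""
        return excretion_mg_per_day / urine_volume_l_per_day

    # The same hypothetical daily excretion is far more concentrated in 1 L than in 2.5 L of urine:
    print(urinary_concentration(200, 1.0))   # 200.0 mg/L
    print(urinary_concentration(200, 2.5))   # 80.0 mg/L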

INCREASED FLUID INTAKE FOR ALL

High fluid intake, enough to produce at least 2.5 L of urine per day, should be the initial therapy to prevent stone recurrence.11

Borghi et al12 randomly assigned 199 patients who had a first calcium stone to high oral fluid intake or no intervention and followed them prospectively for 5 years. The recurrence rate was 12% in the treated group and 27% in the control group. Another study, in patients who had undergone shock wave lithotripsy, found a recurrence rate of 8% in those randomized to increase fluid intake to achieve urine output greater than 2.5 L/day, compared with 56% in those assigned to no treatment.13

Certain beverages increase the risk of stones and should be avoided. Sugar-sweetened noncola soda and punch are associated with a 33% higher risk of kidney stones, and cola sodas are associated with a 23% higher risk.14 Prospective studies have shown that the consumption of coffee, beer, wine, and orange juice is associated with a lower likelihood of stone formation.13,15

Table 3 is a brief summary of the dietary and pharmacologic interventions in the management of recurrent nephrolithiasis.

PREVENTING CALCIUM OXALATE STONES

Major urinary risk factors associated with calcium oxalate stones are hypercalciuria, hyperoxaluria, hyperuricosuria, hypocitraturia, and low urine volume.16 Preventing calcium stones therefore depends on reducing the urinary concentration of calcium and oxalate, increasing urinary levels of inhibitors such as citrate, and increasing urine volume.

Reducing calcium excretion

Hypercalciuria has traditionally been defined as 24-hour urinary calcium excretion greater than 300 mg/day in men, greater than 250 mg/day in women, or greater than 4 mg/kg/day in either sex.17 It is a graded risk factor, and the cut points used in published research and by clinical laboratories vary substantially. Some institutions use the same value for hypercalciuria in both sexes, eg, greater than 200 mg/day.18
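
The traditional cut points quoted above can be written out as a simple screen; this is only a sketch of the arithmetic (the function and the example patient are hypothetical, and cut points vary between laboratories):

    # Sketch of the traditional hypercalciuria cut points quoted in the text; not a diagnostic tool.
    def is_hypercalciuric(urine_calcium_mg_per_day: float, sex: str, weight_kg: float) -> bool:
        sex_limit = 300 if sex == "male" else 250   # mg/day, traditional sex-specific limits
        weight_limit = 4 * weight_kg                # mg/day, 4 mg/kg for either sex
        return urine_calcium_mg_per_day > sex_limit or urine_calcium_mg_per_day > weight_limit

    print(is_hypercalciuric(280, "female", 60))  # True: exceeds both 250 mg/day and 240 mg/day (4 mg/kg)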

Excessive sodium intake is the most common cause of hypercalciuria. Systemic conditions such as primary hyperparathyroidism, sarcoidosis, and renal tubular acidosis also cause hypercalciuria but are uncommon.19 Management depends on the underlying cause and includes dietary modifications and pharmacologic therapy.

Dietary modifications have a pivotal role in the management of recurrent stones that are due to hypercalciuria.

Dietary calcium should not be restricted, since calcium reduces the excretion of urinary oxalate by decreasing intestinal absorption of oxalate. Guidelines from the American Urological Association recommend a daily calcium intake of 1,000 to 1,200 mg.11–20 Moreover, restriction of dietary calcium to less than 800 mg/day (the current recommended daily allowance for adults) can lead to negative calcium balance and bone loss.

Sodium intake also influences hypercalciuria. Calcium is reabsorbed passively in the proximal tubule due to the concentration gradient created by active reabsorption of sodium. A high sodium intake causes volume expansion, leading to a decrease in proximal sodium and calcium reabsorption and enhancing calcium excretion. A low-sodium diet (80–100 mmol/day, or 1,800–2,300 mg/day) is recommended; it enhances proximal sodium and passive calcium reabsorption and thereby decreases calcium excretion.21

Dietary protein increases the acid load by production of sulfuric acid and leads to hypercalciuria by its action on bone and kidney. Animal protein has a higher content of sulfur and generates a higher acid load compared with vegetable protein and has been associated with an increased incidence of stone formation, at least in men.20,22 Borghi et al23 reported that the combination of restricted intake of animal protein (52 g/day), restricted salt intake (50 mmol, or 2,900 mg/day of sodium chloride), and normal calcium intake (30 mmol/day, or 1,200 mg/day) was associated with a lower incidence of stone recurrence in men with hypercalciuria compared with traditional low-calcium intake (10 mmol, or 400 mg/day). Patients should therefore be advised to avoid excessive intake of animal protein.
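
The milligram figures quoted in the two preceding paragraphs follow directly from the molar masses of sodium, sodium chloride, and calcium; the short calculation below (illustrative only) reproduces that arithmetic, with the results rounded in the text:

    # Reproducing the mmol-to-mg conversions quoted above (the text rounds the results).
    NA_MG_PER_MMOL = 23.0      # sodium
    NACL_MG_PER_MMOL = 58.4    # sodium chloride
    CA_MG_PER_MMOL = 40.1      # calcium

    print(80 * NA_MG_PER_MMOL, 100 * NA_MG_PER_MMOL)  # 1840.0 2300.0 -> ~1,800-2,300 mg/day of sodium
    print(50 * NACL_MG_PER_MMOL)                      # 2920.0 -> ~2,900 mg/day of sodium chloride
    print(30 * CA_MG_PER_MMOL, 10 * CA_MG_PER_MMOL)   # 1203.0 401.0 -> ~1,200 and ~400 mg/day of calcium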

Increasing the dietary intake of fruits and vegetables as in the Dietary Approach to Stop Hypertension (DASH) diet is beneficial and reduces the risk of stone recurrence, mainly by increasing citrate excretion.24

Pharmacologic therapy in hypercalciuria. Thiazide diuretics are the mainstay of pharmacotherapy for preventing recurrent stones in patients with idiopathic hypercalciuria. They reduce the risk of stone recurrence by about 50%, as reported in a recent meta-analysis that looked at five trials comparing thiazide diuretics with placebo.25 They lower calcium excretion by causing volume depletion, thereby increasing proximal sodium and passive calcium reabsorption.

Chlorthalidone and hydrochlorothiazide are the thiazides commonly used to treat hypercalciuria. The dosage is titrated to the urinary calcium excretion, and a common mistake is to use doses that are too low. They are usually started at 25 mg/day, but often require an increase to 50 to 100 mg/day for adequate lowering of urinary calcium.

Care should be taken to avoid hypokalemia. If it occurs, it can be corrected by adding the potassium-sparing diuretic amiloride (5–10 mg/day), which increases calcium reabsorption in the collecting ducts, or, in patients with hypocitraturia, potassium citrate or potassium bicarbonate. (Sodium salts should be avoided, since they increase renal calcium excretion.)26

Management of hypercalciuria with metabolic causes, which include primary hyperparathyroidism and chronic acidemia. Patients who have hypercalciuria from primary hyperparathyroidism are treated with parathyroidectomy.27 Chronic metabolic acidosis causes hypercalciuria by loss of bone calcium and causes hypocitraturia by increasing active proximal reabsorption of citrate. Potassium citrate or potassium bicarbonate is used to prevent stones in such patients; sodium salts should be avoided.28

Reducing oxalate excretion

Hyperoxaluria has traditionally been defined as urinary oxalate excretion of more than 45 mg/day. However, the optimal cutoff point for urinary oxalate excretion is unclear, as is the optimal cutoff for hypercalciuria. The risk of stone formation has been shown to increase with oxalate excretion even above 25 mg/day, which is within the normal limit.18

Idiopathic hyperoxaluria. High dietary oxalate intake, especially when combined with low calcium intake, leads to idiopathic hyperoxaluria, although the contribution of abnormal endogenous oxalate metabolism is uncertain. Ingested calcium binds oxalate in the intestinal tract and reduces both intestinal oxalate absorption and urinary oxalate excretion.29 High dietary oxalate intake has usually been regarded as a major risk factor for kidney stones.

Taylor and Curhan,30 in a prospective study, reported a mild increase in the risk of stones in the highest quintile of dietary oxalate intake compared with the lowest quintile for men (relative risk [RR] 1.22, 95% confidence interval [CI] 1.03–1.45) and older women (RR 1.21, 95% CI 1.01–1.44). They also demonstrated that eating eight or more servings of spinach per month, compared with fewer than one serving per month, was associated with a similar increase in stone risk in men (RR 1.30, 95% CI 1.08–1.58) and older women (RR 1.34, 95% CI 1.10–1.64). In contrast, spinach and dietary oxalate intake did not increase the risk of nephrolithiasis in young women. The authors concluded that the risk associated with oxalate intake was modest, and their data did not support the contention that dietary oxalate is a major risk factor for kidney stones.

Higher oxalate intake increases urinary oxalate excretion and presumably the risk of nephrolithiasis. Limiting dietary oxalate to prevent stones is recommended if a habitually high dietary intake of oxalate is identified or if follow-up urine measurements show a decrease in oxalate excretion with dietary restriction.31 Foods rich in oxalate include spinach, rhubarb, nuts, legumes, cocoa, okra, and chocolate.

The DASH diet, which is high in fruits and vegetables, moderate in low-fat dairy products, and low in animal protein, is an effective alternative to a low-oxalate diet and has been associated with a lower risk of calcium oxalate stones.24 Consuming fruits and vegetables increases the excretion of urinary citrate, an inhibitor of stone formation. It has also been proposed that the DASH diet contains other, unidentified factors that reduce stone risk.

Taylor et al32 prospectively examined the relationship between the DASH diet and the incidence of kidney stones and found that the diet significantly reduced the risk of kidney stones. The relative risks of occurrence of kidney stones in participants in the highest quintile of the DASH score (a measure of adherence to the DASH diet) compared with the lowest quintile were 0.55 (95% CI 0.46–0.65) for men, 0.58 (95% CI 0.49–0.68) for older women, and 0.60 (95% CI 0.52–0.70) for younger women, which the authors characterized as “a marked decrease in kidney stone risk.”

Vitamin C intake should be restricted to 90 mg/day in patients who have a history of calcium oxalate stones. Urivetzky et al33 found that urinary oxalate excretion increased by 6 to 13 mg/day at doses of ascorbic acid greater than 500 mg.

Pyridoxine (vitamin B6), a coenzyme of alanine-glyoxylate aminotransferase (AGT), increases the conversion of glyoxylate to glycine instead of oxalate and is used in the treatment of type 1 primary hyperoxaluria (see below).34 However, its effect in preventing stones in idiopathic hyperoxaluria is not well known, and it has not been studied in a randomized controlled trial. In a prospective study, Curhan et al35 reported that high intake of pyridoxine (> 40 mg/day) was associated with a lower risk of stone formation in women, but no such benefit was found in men.

Enteric hyperoxaluria. About 90% of dietary oxalate binds to calcium in the small intestine and is excreted in the stool. The remaining 10% is absorbed in the colon and is excreted in the urine. Hyperoxaluria is frequently seen with fat malabsorption from inflammatory bowel disease, short gut syndrome, and gastric bypass surgery. In these conditions, unabsorbed fat binds dietary calcium, leaving more free oxalate to be absorbed in the colon.36

Treatment is directed at decreasing intestinal oxalate absorption and should include high fluid intake and oral calcium supplements. Calcium carbonate or calcium citrate causes precipitation of oxalate in the intestinal lumen and is prescribed at 1 to 4 g/day in three or four divided doses, always with meals. Calcium citrate is preferred over calcium carbonate in stone-formers because of the benefit of citrate and because calcium citrate is more soluble and more effective in the presence of achlorhydria.37 Patients should be advised to avoid foods high in oxalate and fat.

Primary hyperoxaluria is caused by inherited inborn errors of glyoxylate metabolism that cause overproduction of oxalate and urinary oxalate excretion above 135 to 270 mg/day.

Type 1 primary hyperoxaluria is the most common (accounting for 90% of cases) and is caused by reduced activity of hepatic peroxisomal AGT.

Type 2 is from a deficiency of glyoxylate reductase-hydroxypyruvate reductase (GRHPR).

Type 3 is from mutations in the HOGA1 gene, which codes for the liver-specific mitochondrial 4-hydroxy-2-oxoglutarate aldolase enzyme involved in degradation of hydroxyproline to pyruvate and glyoxalate.38

High fluid intake to produce a urinary volume of 3 L/day reduces intratubular oxalate deposition and should be encouraged. Potassium citrate (0.15 g/kg/day), oral phosphate supplements (30–40 mg/kg of orthophosphate), and magnesium oxide (500 mg/day/m2) inhibit precipitation of calcium oxalate in the urine.39,40 Pyridoxine, a coenzyme of AGT, increases the conversion of glyoxylate to glycine instead of oxalate and is prescribed at a starting dose of 5 mg/kg (which can be titrated up to 20 mg/kg if there is no response) in patients with type 1 primary hyperoxaluria. About 50% of patients with type 1 respond to pyridoxine, and a 3- to 6-month trial should be given to all patients in this category.34 AGT is present only in hepatocytes, and GRHPR is found in multiple tissues; therefore, combined liver-kidney transplant is the treatment of choice in patients with type 1 primary hyperoxaluria, whereas isolated kidney transplant is recommended in patients with type 2.41
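
To make the weight-based arithmetic above concrete, the sketch below works through a hypothetical 70-kg patient with a body surface area of 1.8 m2; it is illustrative only and is not prescribing guidance:

    # Illustrative arithmetic for the weight-based regimens described above.
    # The example patient is hypothetical; this is not prescribing guidance.
    weight_kg = 70.0
    bsa_m2 = 1.8

    pyridoxine_start_mg = 5 * weight_kg                   # 5 mg/kg starting dose -> 350 mg
    pyridoxine_max_mg = 20 * weight_kg                    # titrated up to 20 mg/kg -> 1,400 mg
    orthophosphate_mg = (30 * weight_kg, 40 * weight_kg)  # 30-40 mg/kg -> 2,100-2,800 mg
    magnesium_oxide_mg = 500 * bsa_m2                     # 500 mg/day per m2 -> 900 mg

    print(pyridoxine_start_mg, pyridoxine_max_mg, orthophosphate_mg, magnesium_oxide_mg)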

Reducing uric acid excretion

Hyperuricosuria is defined as uric acid excretion of greater than 800 mg/day in men and greater than 750 mg/day in women.

The association of hyperuricosuria with increased risk of calcium oxalate stone formation is controversial. Curhan and Taylor,18 in a cross-sectional study of 3,350 men and women, reported that there was no difference in mean 24-hour uric acid excretion in individuals with and without a history of stones.

The mechanism by which uric acid leads to calcium oxalate stones is not completely known and could be the “salting out” of calcium oxalate from the urine.42

Dietary purine restriction, ie, limiting intake of nondairy animal protein to 0.8 to 1 g/kg/day, is the initial dietary intervention.11 Allopurinol is the alternative approach if the patient is not compliant or if dietary restriction fails.43

In a study by Ettinger et al,44 60 patients with hyperuricosuria and normocalciuria were randomized to receive allopurinol (100 mg three times daily) or a placebo. The allopurinol group had a rate of calculus events of 0.12 per patient per year, compared with 0.26 in the placebo group.

Increasing citrate excretion

Hypocitraturia is a well-known risk factor for the formation of kidney stones. It is usually defined as a citrate excretion of less than 320 mg/day for adults.

Citrate prevents formation of calcium crystals by binding to calcium, thereby lowering the concentration of calcium oxalate below the saturation point.45

Diet therapy. Patients with calcium oxalate stones and hypocitraturia should be encouraged to increase their intake of fruits and vegetables, which enhances urinary citrate excretion, and to limit their intake of nondairy animal protein.11

The use of citrus products in preventing stones in patients with hypocitraturia is controversial, however, and needs to be studied more.

One study46 demonstrated that lemon juice was beneficial in hypocitraturic nephrolithiasis: 4 oz/day of lemon juice concentrate in the form of lemonade was associated with an increase in urinary citrate excretion to 346 mg/day from 142 mg/day in 11 of 12 patients who participated.

Odvina47 compared the effects of orange juice with those of lemonade on the acid-base profile and urinary stone risk under controlled metabolic conditions in 13 volunteers. Orange juice was reported to have greater alkalinizing and citraturic effects and was associated with lower calculated calcium oxalate supersaturation compared with lemonade.

Lemonade therapy may be used as adjunctive treatment in patients who do not comply with or cannot tolerate alkali therapy. However, we advise caution about recommending citrus products, as they can increase oxalate excretion.

Pharmacotherapy includes alkali therapy. Barcelo et al48 compared the effects of potassium citrate and placebo in 57 patients with calcium oxalate stones and hypocitraturia. Patients treated with potassium citrate had a rate of stone formation of 0.1 event per patient per year, compared with 1.1 in the placebo group.

Many forms of alkaline citrate are available. Potassium citrate is preferred over sodium citrate, since the latter may increase urinary calcium excretion.49 Treatment is usually started at 30 mEq/day and titrated up to a maximum of 60 mEq/day to achieve a urinary citrate excretion greater than 500 mg/day.
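
A minimal sketch of this titration target follows; the 10-mEq step size and the function itself are assumptions for illustration, since real-world titration is guided by follow-up 24-hour urine studies and tolerability:

    # Sketch of the titration target described above: start at 30 mEq/day and increase
    # toward a 60 mEq/day maximum until urinary citrate exceeds 500 mg/day.
    # The 10-mEq increment is an assumption for illustration only.
    def next_potassium_citrate_dose(current_meq_per_day: float, urine_citrate_mg_per_day: float) -> float:
        if urine_citrate_mg_per_day > 500:
            return current_meq_per_day               # target reached; keep the current dose
        return min(current_meq_per_day + 10, 60)     # otherwise step up, capped at 60 mEq/day

    print(next_potassium_citrate_dose(30, 380))  # 40 (step up toward target)
    print(next_potassium_citrate_dose(60, 450))  # 60 (already at maximum)
    print(next_potassium_citrate_dose(40, 520))  # 40 (target reached)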

Common side effects are abdominal bloating and hyperkalemia (especially in renal insufficiency); in such cases, a sodium-based alkali such as sodium citrate or sodium bicarbonate can be prescribed.

PREVENTING CALCIUM PHOSPHATE STONES

Risk factors for calcium phosphate stones are similar to those for calcium oxalate stones (other than hyperoxaluria), but calcium phosphate stones are formed in alkaline urine (usually urine pH > 6.0), often the result of distal renal tubular acidosis. Preventive measures are similar to those for calcium oxalate stones.

Alkali therapy should be used with caution because of its effect on urinary pH and the risk of precipitation of calcium phosphate crystals.50 Use of potassium citrate was found to be associated with increases in both urinary citrate excretion and calcium phosphate supersaturation in hypercalciuric stone-forming rats.51 It is therefore challenging to manage patients with calcium phosphate stones and hypocitraturia. Alkali administration in this setting may diminish the formation of new stones by correcting hypocitraturia, but at the same time it may increase the likelihood of calcium phosphate stone formation by increasing the urinary pH. When the urine pH increases to above 6.5 with no significant change in urine citrate or urine calcium excretion, we recommend stopping alkali therapy.
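
The authors' stopping rule can be condensed into a simple check; this is only a sketch, and what counts as a "significant change" in urine citrate or calcium is a clinical judgment, represented here as boolean inputs:

    # Sketch of the stopping rule stated above for alkali therapy in calcium phosphate stone formers.
    # "Significant change" is a clinical judgment and is passed in as booleans.
    def stop_alkali_therapy(urine_ph: float,
                            citrate_changed_significantly: bool,
                            calcium_changed_significantly: bool) -> bool:
        return urine_ph > 6.5 and not (citrate_changed_significantly or calcium_changed_significantly)

    print(stop_alkali_therapy(6.8, False, False))  # True: pH has risen without metabolic benefit
    print(stop_alkali_therapy(6.8, True, False))   # False: urinary citrate has responded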

PREVENTING URIC ACID STONES

Clinical conditions associated with uric acid stones include metabolic syndrome, diabetes mellitus, gout, chronic diarrheal illness, and conditions that increase tissue turnover and uric acid production, such as malignancies. Other risk factors for uric acid stone formation are low urine volume, low urine pH, and hyperuricosuria.

Abnormally acidic urine is the most common risk factor. Metabolic syndrome and diabetes mellitus reduce ammonia production, resulting in a lower urinary pH, which predisposes to uric acid stone formation. Chronic diarrhea also acidifies the urine by loss of bicarbonate. Similarly, in gout, the predisposing factor in uric acid stone formation is the persistently acidic urine due to impaired ammonium excretion.52 Uric acid precipitates to form stones at a low urinary pH even with normal excretion rates of 600 to 800 mg/day and a urinary volume of 1 to 1.5 L.53

Therefore, apart from increasing fluid intake, urinary alkalization is the cornerstone of management of uric acid stones. Potassium citrate is the preferred alkali salt and is started at a dose of 30 mEq/day for a goal urinary pH of 6 to 6.5.47

Patients with hyperuricosuria are also advised to restrict their protein intake to no more than 0.8 to 1 g/kg/day.

If the above measures fail, patients are treated with a xanthine oxidase inhibitor, ie, allopurinol or febuxostat, even if their uric acid excretion is normal.54

PREVENTING STRUVITE STONES

Struvite stones contain magnesium ammonium phosphate and are due to chronic upper urinary tract infection with urea-splitting bacteria such as Proteus, Klebsiella, Pseudomonas, and enterococci. Urea hydrolysis releases hydroxyl ions, resulting in alkaline urine that promotes struvite stone formation. Early detection and treatment are important, since struvite stones are associated with morbidity and rapid progression.

Medical treatment of struvite stones is usually unsuccessful, and the patient is referred to a urologist for surgical removal of the stones, the gold standard treatment.55 Long-term use of culture-specific antibiotics to prevent new stone growth is not well studied. Medical therapy alone is reserved for patients who refuse stone removal or cannot tolerate it. Urease inhibitors such as acetohydroxamic acid have been successful in preventing or slowing stone growth, but their use is limited by frequent side effects such as nausea, headache, rash, and thrombophlebitis.56

CYSTINE STONES

Cystine stones occur in people with inherited defects of renal tubular and intestinal transport of cystine and dibasic amino acids that cause excessive excretion of urinary cystine, ie, 480 to 3,600 mg/day.

Cystine is formed from two cysteine molecules linked by a disulfide bond. The solubility of cystine is pH-dependent, with increased solubility at higher urinary pH. The goal is to maintain the urinary cystine concentration below its solubility limit by keeping the cystine concentration below 243 mg/L and the urine cystine supersaturation (the ratio of the urine cystine concentration to the cystine solubility in the same sample) less than 0.6.57 Therapy is aimed at increasing the daily urinary volume to 3 L and alkalinizing the urine to a pH above 7 in order to increase cystine solubility by 300%.58
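
As a sketch of these targets (the example values are hypothetical, and the fixed solubility figure below is only the quoted reference value; in practice supersaturation is calculated from the measured solubility of the patient's own sample):

    # Sketch of the cystine targets described above. Example values are hypothetical;
    # the solubility constant is the quoted reference value, not a patient-specific measurement.
    CYSTINE_SOLUBILITY_MG_PER_L = 243.0

    def cystine_supersaturation(excretion_mg_per_day: float, urine_volume_l_per_day: float) -> float:
        concentration = excretion_mg_per_day / urine_volume_l_per_day
        return concentration / CYSTINE_SOLUBILITY_MG_PER_L

    # Increasing urine volume from 1.5 to 3 L/day halves the supersaturation of the same daily excretion:
    print(round(cystine_supersaturation(600, 1.5), 2))  # 1.65
    print(round(cystine_supersaturation(600, 3.0), 2))  # 0.82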

Overnight dehydration should be prevented, and patients should be encouraged to wake up at least once a night to void and drink additional water. Sodium restriction to 100 mmol/day (2,300 mg/day) and moderate protein restriction to 0.8 to 1 g/kg/day are associated with decreased cystine excretion, but long-term studies demonstrating their benefit in preventing cystine stones are lacking.59

A thiol-containing drug, eg, D-penicillamine (0.5–2 g/day) or tiopronin (400–1,200 mg/day), should be added to the conservative measures if they have not been effective after 3 months or if there is a history of noncompliance.60 Thiol-containing drugs have a sulfhydryl group that reduces the disulfide bond and forms soluble cysteine-drug disulfide complexes with a greater ability to solubilize cystine in alkaline urine. They must always be used in conjunction with fluid and alkali therapy.61

Both drugs have common and potentially severe adverse effects, including leukopenia, aplastic anemia, fever, rash, arthritis, hepatotoxicity, pyridoxine deficiency, and proteinuria (membranous nephropathy), although tiopronin seems to have a lower incidence of side effects.62 Complete blood cell counts, liver enzymes, and urine protein should be monitored regularly.

Captopril contains a sulfhydryl group, and the captopril-cysteine mixed disulfide is more soluble than cystine alone. However, the amount of captopril that appears in the urine is low, and doses of 150 mg/day are usually required to reduce cystine excretion, which can lead to hypotension. The efficacy of captopril in treating cystine stones is unproven, and this drug is used only if patients cannot tolerate other thiol-containing drugs.63

References
  1. Scales CD Jr, Smith AC, Hanley JM, Saigal CS; Urologic Diseases in America Project. Prevalence of kidney stones in the United States. Eur Urol 2012; 62:160–165.
  2. Stamatelou KK, Francis ME, Jones CA, Nyberg LM Jr, Curhan GC. Time trends in reported prevalence of kidney stones in the United States: 1976–1994. Kidney Int 2003; 63:1817–1823.
  3. Romero V, Akpinar H, Assimos DG. Kidney stones: a global picture of prevalence, incidence, and associated risk factors. Rev Urol 2010; 12:e86–e96.
  4. Sakhaee K, Maalouf NM, Kumar R, Pasch A, Moe OW. Nephrolithiasis-associated bone disease: pathogenesis and treatment options. Kidney Int 2011; 79:393–403.
  5. Sakhaee K. Nephrolithiasis as a systemic disorder. Curr Opin Nephrol Hypertens 2008; 17:304–309.
  6. Hamano S, Nakatsu H, Suzuki N, Tomioka S, Tanaka M, Murakami S. Kidney stone disease and risk factors for coronary heart disease. Int J Urol 2005; 12:859–863.
  7. Ritz E. Metabolic syndrome: an emerging threat to renal function. Clin J Am Soc Nephrol 2007; 2:869–871.
  8. Uribarri J, Oh MS, Carroll HJ. The first kidney stone. Ann Intern Med 1989; 111:1006–1009.
  9. Saigal CS, Joyce G, Timilsina AR; Urologic Diseases in America Project. Direct and indirect costs of nephrolithiasis in an employed population: opportunity for disease management? Kidney Int 2005; 68:1808–1814.
  10. Moe OW. Kidney stones: pathophysiology and medical management. Lancet 2006; 367:333–344.
  11. Pearle MS, Goldfarb DS, Assimos DG, et al; American Urological Assocation. Medical management of kidney stones: AUA guideline. J Urol 2014; 192:316–324.
  12. Borghi L, Meschi T, Amato F, Briganti A, Novarini A, Giannini A. Urinary volume, water and recurrences in idiopathic calcium nephrolithiasis: a 5-year randomized prospective study. J Urol 1996; 155:839–843.
  13. Sarica K, Inal Y, Erturhan S, Yagci F. The effect of calcium channel blockers on stone regrowth and recurrence after shock wave lithotripsy. Urol Res 2006; 34:184–189.
  14. Ferraro PM, Taylor EN, Gambaro G, Curhan GC. Soda and other beverages and the risk of kidney stones. Clin J Am Soc Nephrol 2013; 8:1389–1395.
  15. Curhan GC, Willett WC, Speizer FE, Stampfer MJ. Beverage use and risk for kidney stones in women. Ann Intern Med 1998; 128:534–540.
  16. Pak CY, Britton F, Peterson R, et al. Ambulatory evaluation of nephrolithiasis. Classification, clinical presentation and diagnostic criteria. Am J Med 1980; 69:19–30.
  17. Hall PM. Nephrolithiasis: treatment, causes, and prevention. Cleve Clin J Med 2009; 76:583–591.
  18. Curhan GC, Taylor EN. 24-h uric acid excretion and the risk of kidney stones. Kidney Int 2008; 73:489–496.
  19. Coe FL, Evan A, Worcester E. Kidney stone disease. J Clin Invest 2005; 115:2598–2608.
  20. Curhan GC, Willett WC, Rimm EB, Stampfer MJ. A prospective study of dietary calcium and other nutrients and the risk of symptomatic kidney stones. N Engl J Med 1993; 328:833–838.
  21. Muldowney FP, Freaney R, Moloney MF. Importance of dietary sodium in the hypercalciuria syndrome. Kidney Int 1982; 22:292–296.
  22. Breslau NA, Brinkley L, Hill KD, Pak CY. Relationship of animal protein-rich diet to kidney stone formation and calcium metabolism. J Clin Endocrinol Metab 1988; 66:140–146.
  23. Borghi L, Schianchi T, Meschi T, et al. Comparison of two diets for the prevention of recurrent stones in idiopathic hypercalciuria. N Engl J Med 2002; 346:77–84.
  24. Noori N, Honarkar E, Goldfarb DS, et al. Urinary lithogenic risk profile in recurrent stone formers with hyperoxaluria: a randomized controlled trial comparing DASH (Dietary Approaches to Stop Hypertension)-style and low-oxalate diets. Am J Kidney Dis 2014; 63:456–463.
  25. Fink HA, Wilt TJ, Eidman KE, et al. Medical management to prevent recurrent nephrolithiasis in adults: a systematic review for an American College of Physicians Clinical Guideline. Ann Intern Med 2013; 158:535–543.
  26. Alon U, Costanzo LS, Chan JC. Additive hypocalciuric effects of amiloride and hydrochlorothiazide in patients treated with calcitriol. Miner Electrolyte Metab 1984; 10:379–386.
  27. Corbetta S, Baccarelli A, Aroldi A, et al. Risk factors associated to kidney stones in primary hyperparathyroidism. J Endocrinol Invest 2005; 28:122–128.
  28. Haymann JP. Metabolic disorders: stones as first clinical manifestation of significant diseases. World J Urol 2015; 33:187–192.
  29. Jaeger P, Portmann L, Jacquet AF, Burckhardt P. Influence of the calcium content of the diet on the incidence of mild hyperoxaluria in idiopathic renal stone formers. Am J Nephrol 1985; 5:40–44.
  30. Taylor EN, Curhan GC. Oxalate intake and the risk for nephrolithiasis. J Am Soc Nephrol 2007; 18:2198–2204.
  31. Lieske JC, Tremaine WJ, De Simone C, et al. Diet, but not oral probiotics, effectively reduces urinary oxalate excretion and calcium oxalate supersaturation. Kidney Int 2010; 78:1178–1185.
  32. Taylor EN, Fung TT, Curhan GC. DASH-style diet associates with reduced risk for kidney stones. J Am Soc Nephrol 2009; 20:2253–2259.
  33. Urivetzky M, Kessaris D, Smith AD. Ascorbic acid overdosing: a risk factor for calcium oxalate nephrolithiasis. J Urol 1992; 147:1215–1218.
  34. Hoyer-Kuhn H, Kohbrok S, Volland R, et al. Vitamin B6 in primary hyperoxaluria I: first prospective trial after 40 years of practice. Clin J Am Soc Nephrol 2014; 9:468–477.
  35. Curhan GC, Willett WC, Speizer FE, Stampfer MJ. Intake of vitamins B6 and C and the risk of kidney stones in women. J Am Soc Nephrol 1999; 10:840–845.
  36. Parks JH, Worcester EM, O'Connor RC, Coe FL. Urine stone risk factors in nephrolithiasis patients with and without bowel disease. Kidney Int 2003; 63:255–265.
  37. Hess B, Jost C, Zipperle L, Takkinen R, Jaeger P. High-calcium intake abolishes hyperoxaluria and reduces urinary crystallization during a 20-fold normal oxalate load in humans. Nephrol Dial Transplant 1998; 13:2241–2247.
  38. Hoppe B, Beck BB, Milliner DS. The primary hyperoxalurias. Kidney Int 2009; 75:1264–1271.
  39. Cochat P, Hulton SA, Acquaviva C, et al; OxalEurope. Primary hyperoxaluria type 1: indications for screening and guidance for diagnosis and treatment. Nephrol Dial Transplant 2012; 27:1729–1736.
  40. Leumann E, Hoppe B, Neuhaus T. Management of primary hyperoxaluria: efficacy of oral citrate administration. Pediatr Nephrol 1993; 7:207–211.
  41. Bergstralh EJ, Monico CG, Lieske JC, et al; IPHR Investigators. Transplantation outcomes in primary hyperoxaluria. Am J Transplant 2010; 10:2493–2501.
  42. Grover PK, Marshall VR, Ryall RL. Dissolved urate salts out calcium oxalate in undiluted human urine in vitro: implications for calcium oxalate stone genesis. Chem Biol 2003; 10:271–278.
  43. Coe FL, Parks JH. Hyperuricosuria and calcium nephrolithiasis. Urol Clin North Am 1981; 8:227–244.
  44. Ettinger B, Tang A, Citron JT, Livermore B, Williams T. Randomized trial of allopurinol in the prevention of calcium oxalate calculi. N Engl J Med 1986; 315:1386–1389.
  45. Zuckerman JM, Assimos DG. Hypocitraturia: pathophysiology and medical management. Rev Urol 2009; 11:134–144.
  46. Seltzer MA, Low RK, McDonald M, Shami GS, Stoller ML. Dietary manipulation with lemonade to treat hypocitraturic calcium nephrolithiasis. J Urol 1996; 156:907–909.
  47. Odvina CV. Comparative value of orange juice versus lemonade in reducing stone-forming risk. Clin J Am Soc Nephrol 2006; 1:1269–1274.
  48. Barcelo P, Wuhl O, Servitge E, Rousaud A, Pak CY. Randomized double-blind study of potassium citrate in idiopathic hypocitraturic calcium nephrolithiasis. J Urol 1993; 150:1761–1764.
  49. Lemann J Jr, Gray RW, Pleuss JA. Potassium bicarbonate, but not sodium bicarbonate, reduces urinary calcium excretion and improves calcium balance in healthy men. Kidney Int 1989; 35:688–695.
  50. Gault MH, Chafe LL, Morgan JM, et al. Comparison of patients with idiopathic calcium phosphate and calcium oxalate stones. Medicine (Baltimore) 1991; 70:345–359.
  51. Krieger NS, Asplin JR, Frick KK, et al. Effect of potassium citrate on calcium phosphate stones in a model of hypercalciuria. J Am Soc Nephrol 2015; 26:3001–3008.
  52. Falls WF Jr. Comparison of urinary acidification and ammonium excretion in normal and gouty subjects. Metabolism 1972; 21:433–445.
  53. Coe FL, Parks JH, Asplin JR. The pathogenesis and treatment of kidney stones. N Engl J Med 1992; 327:1141–1152.
  54. Kenny JE, Goldfarb DS. Update on the pathophysiology and management of uric acid renal stones. Curr Rheumatol Rep 2010; 12:125–129.
  55. Preminger GM, Assimos DG, Lingeman JE, Nakada SY, Pearle MS, Wolf JS Jr (AUA Nephrolithiasis Guideline Panel). Chapter 1: AUA guideline on management of staghorn calculi: diagnosis and treatment recommendations. J Urol 2005; 173:1991–2000.
  56. Williams JJ, Rodman JS, Peterson CM. A randomized double-blind study of acetohydroxamic acid in struvite nephrolithiasis. N Engl J Med 1984; 311:760–764.
  57. Nakagawa Y, Asplin JR, Goldfarb DS, Parks JH, Coe FL. Clinical use of cystine supersaturation measurements. J Urol 2000; 164:1481–1485.
  58. Palacín M, Goodyer P, Nunes V, Gasparini P. Cystinuria. In: Scriver CR, editor. The Metabolic and Molecular Bases of Inherited Disease. New York, NY: McGraw-Hill; 2001:4909–4932.
  59. Goldfarb DS, Coe FL, Asplin JR. Urinary cystine excretion and capacity in patients with cystinuria. Kidney Int 2006; 69:1041–1047.
  60. Barbey F, Joly D, Rieu P, Mejean A, Daudon M, Jungers P. Medical treatment of cystinuria: critical reappraisal of long-term results. J Urol 2000; 163:1419–1423.
  61. Asplin DM, Asplin JR. The Interaction of thiol drugs and urine pH in the treatment of cystinuria. J Urol 2013; 189:2147–2151.
  62. Habib GS, Saliba W, Nashashibi M, Armali Z. Penicillamine and nephrotic syndrome. Eur J Intern Med 2006; 17:343–348.
  63. Sloand JA, Izzo JL Jr. Captopril reduces urinary cystine excretion in cystinuria. Arch Intern Med 1987; 147:1409–1412.
References
  1. Scales CD Jr, Smith AC, Hanley JM, Saigal CS; Urologic Diseases in America Project. Prevalence of kidney stones in the United States. Eur Urol 2012; 62:160–165.
  2. Stamatelou KK, Francis ME, Jones CA, Nyberg LM Jr, Curhan GC. Time trends in reported prevalence of kidney stones in the United States: 1976–1994. Kidney Int 2003; 63:1817–1823.
  3. Romero V, Akpinar H, Assimos DG. Kidney stones: a global picture of prevalence, incidence, and associated risk factors. Rev Urol 2010; 12:e86–e96.
  4. Sakhaee K, Maalouf NM, Kumar R, Pasch A, Moe OW. Nephrolithiasis-associated bone disease: pathogenesis and treatment options. Kidney Int 2011; 79:393–403.
  5. Sakhaee K. Nephrolithiasis as a systemic disorder. Curr Opin Nephrol Hypertens 2008; 17:304–309.
  6. Hamano S, Nakatsu H, Suzuki N, Tomioka S, Tanaka M, Murakami S. Kidney stone disease and risk factors for coronary heart disease. Int J Urol 2005; 12:859–863.
  7. Ritz E. Metabolic syndrome: an emerging threat to renal function. Clin J Am Soc Nephrol 2007; 2:869–871.
  8. Uribarri J, Oh MS, Carroll HJ. The first kidney stone. Ann Intern Med 1989; 111:1006–1009.
  9. Saigal CS, Joyce G, Timilsina AR; Urologic Diseases in America Project. Direct and indirect costs of nephrolithiasis in an employed population: opportunity for disease management? Kidney Int 2005; 68:1808–1814.
  10. Moe OW. Kidney stones: pathophysiology and medical management. Lancet 2006; 367:333–344.
  11. Pearle MS, Goldfarb DS, Assimos DG, et al; American Urological Assocation. Medical management of kidney stones: AUA guideline. J Urol 2014; 192:316–324.
  12. Borghi L, Meschi T, Amato F, Briganti A, Novarini A, Giannini A. Urinary volume, water and recurrences in idiopathic calcium nephrolithiasis: a 5-year randomized prospective study. J Urol 1996; 155:839–843.
  13. Sarica K, Inal Y, Erturhan S, Yagci F. The effect of calcium channel blockers on stone regrowth and recurrence after shock wave lithotripsy. Urol Res 2006; 34:184–189.
  14. Ferraro PM, Taylor EN, Gambaro G, Curhan GC. Soda and other beverages and the risk of kidney stones. Clin J Am Soc Nephrol 2013; 8:1389–1395.
  15. Curhan GC, Willett WC, Speizer FE, Stampfer MJ. Beverage use and risk for kidney stones in women. Ann Intern Med 1998; 128:534–540.
  16. Pak CY, Britton F, Peterson R, et al. Ambulatory evaluation of nephrolithiasis. Classification, clinical presentation and diagnostic criteria. Am J Med 1980; 69:19–30.
  17. Hall PM. Nephrolithiasis: treatment, causes, and prevention. Cleve Clin J Med 2009; 76:583–591.
  18. Curhan GC, Taylor EN. 24-h uric acid excretion and the risk of kidney stones. Kidney Int 2008; 73:489–496.
  19. Coe FL, Evan A, Worcester E. Kidney stone disease. J Clin Invest 2005; 115:2598–2608.
  20. Curhan GC, Willett WC, Rimm EB, Stampfer MJ. A prospective study of dietary calcium and other nutrients and the risk of symptomatic kidney stones. N Engl J Med 1993; 328:833–838.
  21. Muldowney FP, Freaney R, Moloney MF. Importance of dietary sodium in the hypercalciuria syndrome. Kidney Int 1982; 22:292–296.
  22. Breslau NA, Brinkley L, Hill KD, Pak CY. Relationship of animal protein-rich diet to kidney stone formation and calcium metabolism. J Clin Endocrinol Metab 1988; 66:140–146.
  23. Borghi L, Schianchi T, Meschi T, et al. Comparison of two diets for the prevention of recurrent stones in idiopathic hypercalciuria. N Engl J Med 2002; 346:77–84.
  24. Noori N, Honarkar E, Goldfarb DS, et al. Urinary lithogenic risk profile in recurrent stone formers with hyperoxaluria: a randomized controlled trial comparing DASH (Dietary Approaches to Stop Hypertension)-style and low-oxalate diets. Am J Kidney Dis 2014; 63:456–463.
  25. Fink HA, Wilt TJ, Eidman KE, et al. Medical management to prevent recurrent nephrolithiasis in adults: a systematic review for an American College of Physicians Clinical Guideline. Ann Intern Med 2013; 158:535–543.
  26. Alon U, Costanzo LS, Chan JC. Additive hypocalciuric effects of amiloride and hydrochlorothiazide in patients treated with calcitriol. Miner Electrolyte Metab 1984; 10:379–386.
  27. Corbetta S, Baccarelli A, Aroldi A, et al. Risk factors associated to kidney stones in primary hyperparathyroidism. J Endocrinol Invest 2005; 28:122–128.
  28. Haymann JP. Metabolic disorders: stones as first clinical manifestation of significant diseases. World J Urol 2015; 33:187–192.
  29. Jaeger P, Portmann L, Jacquet AF, Burckhardt P. Influence of the calcium content of the diet on the incidence of mild hyperoxaluria in idiopathic renal stone formers. Am J Nephrol 1985; 5:40–44.
  30. Taylor EN, Curhan GC. Oxalate intake and the risk for nephrolithiasis. J Am Soc Nephrol 2007; 18:2198–2204.
  31. Lieske JC, Tremaine WJ, De Simone C, et al. Diet, but not oral probiotics, effectively reduces urinary oxalate excretion and calcium oxalate supersaturation. Kidney Int 2010; 78:1178–1185.
  32. Taylor EN, Fung TT, Curhan GC. DASH-style diet associates with reduced risk for kidney stones. J Am Soc Nephrol 2009; 20:2253–2259.
  33. Urivetzky M, Kessaris D, Smith AD. Ascorbic acid overdosing: a risk factor for calcium oxalate nephrolithiasis. J Urol 1992; 147:1215–1218.
  34. Hoyer-Kuhn H, Kohbrok S, Volland R, et al. Vitamin B6 in primary hyperoxaluria I: first prospective trial after 40 years of practice. Clin J Am Soc Nephrol 2014; 9:468–477.
  35. Curhan GC, Willett WC, Speizer FE, Stampfer MJ. Intake of vitamins B6 and C and the risk of kidney stones in women. J Am Soc Nephrol 1999; 10:840–845.
  36. Parks JH, Worcester EM, O'Connor RC, Coe FL. Urine stone risk factors in nephrolithiasis patients with and without bowel disease. Kidney Int 2003; 63:255–265.
  37. Hess B, Jost C, Zipperle L, Takkinen R, Jaeger P. High-calcium intake abolishes hyperoxaluria and reduces urinary crystallization during a 20-fold normal oxalate load in humans. Nephrol Dial Transplant 1998; 13:2241–2247.
  38. Hoppe B, Beck BB, Milliner DS. The primary hyperoxalurias. Kidney Int 2009; 75:1264–1271.
  39. Cochat P, Hulton SA, Acquaviva C, et al; OxalEurope. Primary hyperoxaluria type 1: indications for screening and guidance for diagnosis and treatment. Nephrol Dial Transplant 2012; 27:1729–1736.
  40. Leumann E, Hoppe B, Neuhaus T. Management of primary hyperoxaluria: efficacy of oral citrate administration. Pediatr Nephrol 1993; 7:207–211.
  41. Bergstralh EJ, Monico CG, Lieske JC, et al; IPHR Investigators. Transplantation outcomes in primary hyperoxaluria. Am J Transplant 2010; 10:2493–2501.
  42. Grover PK, Marshall VR, Ryall RL. Dissolved urate salts out calcium oxalate in undiluted human urine in vitro: implications for calcium oxalate stone genesis. Chem Biol 2003; 10:271–278.
  43. Coe FL, Parks JH. Hyperuricosuria and calcium nephrolithiasis. Urol Clin North Am 1981; 8:227–244.
  44. Ettinger B, Tang A, Citron JT, Livermore B, Williams T. Randomized trial of allopurinol in the prevention of calcium oxalate calculi. N Engl J Med 1986; 315:1386–1389.
  45. Zuckerman JM, Assimos DG. Hypocitraturia: pathophysiology and medical management. Rev Urol 2009; 11:134–144.
  46. Seltzer MA, Low RK, McDonald M, Shami GS, Stoller ML. Dietary manipulation with lemonade to treat hypocitraturic calcium nephrolithiasis. J Urol 1996; 156:907–909.
  47. Odvina CV. Comparative value of orange juice versus lemonade in reducing stone-forming risk. Clin J Am Soc Nephrol 2006; 1:1269–1274.
  48. Barcelo P, Wuhl O, Servitge E, Rousaud A, Pak CY. Randomized double-blind study of potassium citrate in idiopathic hypocitraturic calcium nephrolithiasis. J Urol 1993; 150:1761–1764.
  49. Lemann J Jr, Gray RW, Pleuss JA. Potassium bicarbonate, but not sodium bicarbonate, reduces urinary calcium excretion and improves calcium balance in healthy men. Kidney Int 1989; 35:688–695.
  50. Gault MH, Chafe LL, Morgan JM, et al. Comparison of patients with idiopathic calcium phosphate and calcium oxalate stones. Medicine (Baltimore) 1991; 70:345–359.
  51. Krieger NS, Asplin JR, Frick KK, et al. Effect of potassium citrate on calcium phosphate stones in a model of hypercalciuria. J Am Soc Nephrol 2015; 26:3001–3008.
  52. Falls WF Jr. Comparison of urinary acidification and ammonium excretion in normal and gouty subjects. Metabolism 1972; 21:433–445.
  53. Coe FL, Parks JH, Asplin JR. The pathogenesis and treatment of kidney stones. N Engl J Med 1992; 327:1141–1152.
  54. Kenny JE, Goldfarb DS. Update on the pathophysiology and management of uric acid renal stones. Curr Rheumatol Rep 2010; 12:125–129.
  55. Preminger GM, Assimos DG, Lingeman JE, Nakada SY, Pearle MS, Wolf JS Jr (AUA Nephrolithiasis Guideline Panel). Chapter 1: AUA guideline on management of staghorn calculi: diagnosis and treatment recommendations. J Urol 2005; 173:1991–2000.
  56. Williams JJ, Rodman JS, Peterson CM. A randomized double-blind study of acetohydroxamic acid in struvite nephrolithiasis. N Engl J Med 1984; 311:760–764.
  57. Nakagawa Y, Asplin JR, Goldfarb DS, Parks JH, Coe FL. Clinical use of cystine supersaturation measurements. J Urol 2000; 164:1481–1485.
  58. Palacín M, Goodyer P, Nunes V, Gasparini P. Cystinuria. In: Scriver CR, editor. The Metabolic and Molecular Bases of Inherited Disease. New York, NY: McGraw-Hill; 2001:4909–4932.
  59. Goldfarb DS, Coe FL, Asplin JR. Urinary cystine excretion and capacity in patients with cystinuria. Kidney Int 2006; 69:1041–1047.
  60. Barbey F, Joly D, Rieu P, Mejean A, Daudon M, Jungers P. Medical treatment of cystinuria: critical reappraisal of long-term results. J Urol 2000; 163:1419–1423.
  61. Asplin DM, Asplin JR. The interaction of thiol drugs and urine pH in the treatment of cystinuria. J Urol 2013; 189:2147–2151.
  62. Habib GS, Saliba W, Nashashibi M, Armali Z. Penicillamine and nephrotic syndrome. Eur J Intern Med 2006; 17:343–348.
  63. Sloand JA, Izzo JL Jr. Captopril reduces urinary cystine excretion in cystinuria. Arch Intern Med 1987; 147:1409–1412.

KEY POINTS

  • Nephrolithiasis is common and widespread, and its incidence and prevalence are increasing.
  • Calcium stones are the most common type, and of these, calcium oxalate stones predominate.
  • The most common risk factors for recurrent calcium stones are low urinary output, hypercalciuria, hyperoxaluria, hypocitraturia, and hyperuricosuria.
  • Less common types of stones are usually associated with genetic abnormalities, infections, or medications.

When does chest CT require contrast enhancement?

Computed tomography (CT) plays an important role in the diagnosis and treatment of many clinical conditions1 involving the chest wall, mediastinum, pleura, pulmonary arteries, and lung parenchyma. The need for enhancement with intravenous (IV) contrast depends on the specific clinical indication (Table 1).

EVALUATION OF SUSPECTED CANCER

CT is commonly used to diagnose, stage, and plan treatment for lung cancer, other primary neoplastic processes involving the chest, and metastatic disease.2 The need for contrast varies on a case-by-case basis, and the benefits of contrast should be weighed against the potential risks in each patient.

When the neoplasm has CT attenuation similar to that of adjacent structures (lymph nodes in the hilum, masses in the mediastinum or chest wall), IV contrast can improve identification of the lesion, delineation of its margins, and assessment of its relationship to adjacent structures (eg, vessels) (Figure 1).

Figure 1. In a patient with colon cancer undergoing a workup for metastases, axial CT without contrast (A) shows prominence of the right hilar region (arrow). Axial CT with contrast enhancement obtained subsequently (B and C) shows that this abnormality corresponds to right hilar lymphadenopathy partially encasing the right pulmonary artery (arrows).

CT without contrast for screening

The diagnostic algorithm for lung cancer screening is evolving. The US Preventive Services Task Force currently recommends low-dose CT without contrast, along with appropriate patient counseling, for patients whose age and smoking history meet the criteria detailed in the Task Force statement.3

Follow-up of a solitary pulmonary nodule also typically does not require contrast enhancement, though some investigators have reported high sensitivity with dynamic contrast enhancement of pulmonary nodules.4 This represents a rare clinical application of chest CT with and without contrast.

EVALUATION OF THORACIC VASCULAR DISEASE

For the assessment of vascular disease, CT in most cases requires IV contrast to delineate the vessel lumen. Pulmonary embolic disease is the third most common cause of acute cardiovascular disease.5 CT pulmonary angiography is the most common way to assess for pulmonary embolic disease, as it is accurate, fast, and widely available, and can assess alternate pathologies in cases of undifferentiated chest pain. Contrast enhancement of the pulmonary arteries is key, as embolic disease is identified as abnormal filling defects within the pulmonary arteries (Figure 2).

Figure 2. In a 79-year-old patient with chronic thromboembolic pulmonary hypertension, CT angiography of the pulmonary artery (A) shows weblike (red arrow) and partially calcified filling defects (yellow arrow), as well as diffuse mild mosaic attenuation of lung parenchyma (B).

Contrast enhancement is also used to evaluate superior vena cava syndrome. At our institution, the CT protocol includes concomitant injections in the upper-extremity veins, with imaging timed for venous phase enhancement (pulmonary venogram). In cases of suspected arteriovenous malformation, a protocol similar to that used for suspected pulmonary embolus is used (Figure 3), although in some instances, the imaging features of arteriovenous malformation may be detectable without IV contrast.

Figure 3. CT pulmonary angiography with intravenous contrast in a patient being evaluated for arteriovenous malformation. Maximum-intensity projection images reconstructed in the axial (A) and coronal (B) planes show bilateral arteriovenous malformations with corresponding feeding arteries (white arrows) and draining veins (black arrows).

EVALUATION OF PULMONARY PARENCHYMAL DISEASE

Infection, inflammation, and edema of the lung parenchyma are usually well depicted on CT without contrast enhancement. However, contrast may be helpful when complications such as chest wall involvement are suspected, as enhancement can further delineate their extent.

Assessment of interstitial lung disease does not require use of IV contrast; rather, a tailored protocol with thinner slices and noncontiguous expiratory images can be used to evaluate for air-trapping and dynamic airway compromise (Figure 4). Evaluation of chronic obstructive pulmonary disease also does not require IV contrast.

Figure 4. CT without contrast in a patient with a history of interstitial lung disease and right lung transplant shows the patent but partially narrowed anastomotic site of the right bronchus (A) (red arrow). In B, the native left lung is small, with evidence of bronchiectasis, bronchiolectasis, and areas of honeycombing (black arrow). In C, the transplanted lung is notable for areas of air trapping in the right upper lobe on expiratory images (blue arrow), which is associated with central airway narrowing.

EVALUATION OF THE PLEURA

In pleural effusion, CT assessment for the presence, location, and extent of the effusion does not require contrast. However, contrast enhancement is used to evaluate suspected or known exudative effusions and empyema.6 It also aids the evaluation of metastatic or primary malignancy of the pleura, particularly in cases of occult disease, as enhancement and thickening of the pleura are of diagnostic interest.

EVALUATION OF AIRWAY DISEASE

Diseases of the large airway, such as stenosis and thickening, and diseases of the small airways, such as bronchiolitis, typically do not require contrast enhancement. At our institution, to assess dynamic airway narrowing, we use a dedicated airway protocol, including inspiratory and expiratory phases and multiplanar reformatted images.

EVALUATION OF STERNAL AND MEDIASTINAL INFECTIONS

Postoperative sternal wound infections are not uncommon and range from cellulitis to frank osteomyelitis. Mediastinitis may likewise be iatrogenic or may spread from the oropharynx. CT with contrast can help to depict infection of the chest wall or mediastinum and in some instances can also delineate the route of spread.7
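
To summarize the indication-based guidance above, the preceding sections can be restated as a simple lookup. The sketch below is illustrative only; it is written in Python for this summary, the indication labels and helper name are ours rather than part of any published protocol or of Table 1, and the values merely restate the typical practice described in this review.

# Illustrative summary of the indications discussed above; not a clinical protocol
# and not a reproduction of Table 1. Values restate the typical practice described
# in the preceding sections of this review.
CONTRAST_BY_INDICATION = {
    "lung cancer screening (low-dose CT)": "no contrast",
    "solitary pulmonary nodule follow-up": "usually no contrast",
    "suspected cancer near hilar, mediastinal, or chest wall structures": "contrast helpful",
    "suspected pulmonary embolism (CT pulmonary angiography)": "contrast required",
    "superior vena cava syndrome": "contrast required, venous-phase timing",
    "suspected arteriovenous malformation": "contrast typically used",
    "parenchymal infection, inflammation, or edema": "usually no contrast",
    "parenchymal infection with suspected chest wall involvement": "contrast helpful",
    "interstitial lung disease or COPD": "no contrast",
    "pleural effusion, presence and extent": "no contrast",
    "suspected empyema or pleural malignancy": "contrast helpful",
    "large or small airway disease": "no contrast",
    "sternal wound infection or mediastinitis": "contrast helpful",
}


def contrast_recommendation(indication: str) -> str:
    """Return the typical contrast approach for an indication covered in this review."""
    return CONTRAST_BY_INDICATION.get(indication, "not covered in this review")


print(contrast_recommendation("suspected pulmonary embolism (CT pulmonary angiography)"))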

TYPES OF IV CONTRAST MEDIA

Contrast media used in CT contain iodine, which causes increased absorption and scattering of radiation in body tissues and blood. This increased absorption and scattering in turn results in higher CT attenuation values, or “enhancement,” on CT images. (Other contrast media, such as those used for magnetic resonance imaging or barium enemas, do not contain iodine.) The extent of enhancement depends on the amount and rate of contrast material administered, as well as on patient factors (eg, tissue vascularity, permeability, interstitial space) and the energy (tube voltage) of the incident x-rays.8

Adverse reactions

Contrast materials are generally safe; however, as with any pharmaceutical, there is the potential for adverse reactions. These reactions are relatively rare and are usually mild but occasionally can be severe.9 Anaphylactoid reactions have an unclear etiology but mimic allergic reactions, and they are more likely to occur in patients with a previous reaction to contrast and in patients with asthma or cardiovascular or renal disease.

Nonanaphylactoid reactions, unlike anaphylactoid reactions, depend on contrast osmolality and on the volume and route of injection.10 Typical symptoms include warmth, a metallic taste, and nausea or vomiting.

Contrast-related nephrotoxicity has been reported,11 although this has been challenged more recently.12 Suspected risk factors for this complication include advanced age, cardiovascular disease, treatment with chemotherapy, elevated serum creatinine level, dehydration, diabetes, use of nonsteroidal anti-inflammatory medications, myeloma,13 renal disease, and kidney transplant.

Detailed protocols for premedication and for the management of adverse reactions to contrast are beyond the scope of this review; the reader is advised to refer to dedicated manuals.10

Acknowledgment: We are grateful for the editorial assistance of Megan M. Griffiths, scientific writer for the Imaging Institute, Cleveland Clinic.

References
  1. Rubin GD. Computed tomography: revolutionizing the practice of medicine for 40 years. Radiology 2014; 273(suppl 2):S45–S74.
  2. American College of Radiology. ACR-SCBT-MR-SPR practice parameter for the performance of thoracic computed tomography (CT). www.acr.org/~/media/ACR/Documents/PGTS/guidelines/CT_Thoracic.pdf. Accessed March 30, 2016.
  3. Moyer VA; US Preventive Services Task Force. Screening for lung cancer: US Preventive Services Task Force recommendation statement. Ann Intern Med 2014; 160:330–338.
  4. Yi CA, Lee KS, Kim EA, et al. Solitary pulmonary nodules: dynamic enhanced multi-detector row CT study and comparison with vascular endothelial growth factor and microvessel density. Radiology 2004; 233:191–199.
  5. Bolen MA, Renapurkar RD, Popovic ZB, et al. High-pitch ECG-synchronized pulmonary CT angiography versus standard CT pulmonary angiography: a prospective randomized study. AJR Am J Roentgenol 2013; 201:971–976.
  6. Kraus GJ. The split pleura sign. Radiology 2007; 243:297–298.
  7. Bae KT. Intravenous contrast medium administration and scan timing at CT: considerations and approaches. Radiology 2010; 256:32–61.
  8. Capps EF, Kinsella JJ, Gupta M, Bhatki AM, Opatowsky MJ. Emergency imaging assessment of acute, nontraumatic conditions of the head and neck. Radiographics 2010; 30:1335–1352.
  9. Singh J, Daftary A. Iodinated contrast media and their adverse reactions. J Nucl Med Technol 2008; 36:69–74.
  10. ACR Committee on Drugs and Contrast Media. ACR Manual on Contrast Media. Version 10.1. 2015. www.acr.org/~/media/37D84428BF1D4E1B9A3A2918DA9E27A3.pdf. Accessed March 29, 2016.
  11. Barrett BJ. Contrast nephrotoxicity. J Am Soc Nephrol 1994; 5:125–137.
  12. McDonald RJ, McDonald JS, Carter RE, et al. Intravenous contrast material exposure is not an independent risk factor for dialysis or mortality. Radiology 2014; 273:714–725.
  13. McCarthy CS, Becker JA. Multiple myeloma and contrast media. Radiology 1992; 183:519–521.
Author and Disclosure Information

Camila Piza Purysko, MD
Imaging Institute, Cleveland Clinic

Rahul Renapurkar, MD
Imaging Institute, Cleveland Clinic

Michael A. Bolen, MD
Imaging Institute and Heart and Vascular Institute, Cleveland Clinic; Clinical Assistant Professor, Cleveland Clinic Lerner College of Medicine of Case Western Reserve University, Cleveland, OH

Address: Michael A. Bolen, MD, Imaging Institute, J1-4, Cleveland Clinic, 9500 Euclid Avenue, Cleveland, OH 44195; [email protected]

Should I suspect obstructive sleep apnea if a patient has hard-to-control hypertension?

Yes. Obstructive sleep apnea is common and is associated with hypertension and resistant hypertension. Physicians taking care of patients who have hard-to-control hypertension should be aware of the possible diagnosis of obstructive sleep apnea and screen them for it. In-laboratory polysomnography or home sleep testing should be offered if appropriate, and if obstructive sleep apnea is detected, it should be treated, as this treatment may help to control blood pressure more effectively.

OBSTRUCTIVE SLEEP APNEA IS COMMON

Obstructive sleep apnea is characterized by recurrent episodes of partial or complete collapse of the upper airway during sleep, with partial collapse leading to hypopnea and complete collapse leading to apnea. These episodes result in intermittent hypoxemia, microarousals, sleep fragmentation, daytime sleepiness, and impairment in quality of life.

In tandem with the worsening obesity epidemic, the prevalence of moderate to severe obstructive sleep apnea has risen to 17% in men and 9% in women 50 to 70 years old.1

LINKED TO HYPERTENSION

The respiratory events that occur in obstructive sleep apnea are associated with blood pressure surges during sleep that can lead to persistently elevated blood pressure during wakefulness. Obstructive sleep apnea has been independently associated with incident hypertension in large epidemiologic studies, even after correction for confounding factors such as obesity and its surrogate markers.

Moreover, the more severe the obstructive sleep apnea, the greater the risk of incident hypertension.2 And large, long-term observational studies have shown higher incidence rates of hypertension in people with untreated obstructive sleep apnea than in those who underwent treatment for it with continuous positive airway pressure (CPAP).3

Obstructive sleep apnea is also associated with nocturnal nondipping of blood pressure (defined as failure of blood pressure to decline by at least 10% during sleep), which is an independent marker for worse cardiovascular outcomes and hypertension-induced target organ damage.
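
As a minimal sketch of the nondipping definition above (the function names are ours, and the 10% cutoff simply restates the definition in the text; this is an arithmetic illustration, not a validated clinical tool):

def nocturnal_dip_percent(mean_awake_sbp: float, mean_asleep_sbp: float) -> float:
    """Percentage decline in systolic blood pressure from wakefulness to sleep."""
    return 100.0 * (mean_awake_sbp - mean_asleep_sbp) / mean_awake_sbp


def is_nondipper(mean_awake_sbp: float, mean_asleep_sbp: float) -> bool:
    """Nondipping: blood pressure fails to decline by at least 10% during sleep."""
    return nocturnal_dip_percent(mean_awake_sbp, mean_asleep_sbp) < 10.0


# Example: mean awake SBP 150 mm Hg, mean asleep SBP 144 mm Hg -> 4% dip -> nondipper.
print(is_nondipper(150, 144))  # True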

Obstructive sleep apnea is particularly common in those with drug-resistant hypertension,4 defined as suboptimal control of blood pressure despite the use of multiple antihypertensive medications of different classes, a condition associated with significant cardiovascular morbidity and mortality. Even in patients at high risk of cardiovascular disease, we found that those with severe obstruction of the upper airway during sleep had fourfold higher odds of having resistant elevated blood pressure.5

The seventh report of the Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure recognized obstructive sleep apnea as a cause of secondary hypertension.6 The 2013 European Society of Hypertension/European Society of Cardiology guidelines7 suggest evaluating for symptoms of obstructive sleep apnea in the management of hypertension.

MECHANISMS LINKING OBSTRUCTIVE SLEEP APNEA AND HYPERTENSION

Pathophysiologic mechanisms that may explain the association between obstructive sleep apnea and hypertension include stimulation of sympathetic activity,8 increased arterial stiffness, and endothelial dysfunction driven by apnea-related intermittent hypoxemia.9 Increased systemic inflammation and oxidative stress caused by obstructive sleep apnea are other proposed mechanisms.

Conversely, resistant hypertension may worsen obstructive sleep apnea. Some propose that activation of the renin-angiotensin-aldosterone system can cause parapharyngeal edema and rostral fluid shifts during sleep and thereby increase upper airway obstruction and worsen the severity of obstructive sleep apnea.10

CONSIDER SCREENING

Patients with resistant hypertension and risk factors for obstructive sleep apnea should be screened for it, as it is very common in this population.

A simple screening tool that can be used to detect sleep apnea is the STOP-BANG questionnaire11:

  • Snore: Have you been told that you snore loudly?
  • Tired: Are you often tired during the day?
  • Observed apnea: Do you know if you stop breathing, or has anyone witnessed you stop breathing while sleeping?
  • Pressure: Do you have or are you being treated for high blood pressure?
  • Body mass index: Is your body mass index greater than 35 kg/m2?
  • Age: older than 50?
  • Neck circumference: greater than 40 cm?
  • Gender: Male?

A score of 3 or more indicates a high risk of obstructive sleep apnea, and further workup for it is appropriate. Some of the other symptoms and signs are listed in Table 1.
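
For illustration, the eight yes/no items above can be tallied as in the following sketch (the function and parameter names are ours; the thresholds of body mass index greater than 35 kg/m2, age older than 50, neck circumference greater than 40 cm, and a score of 3 or more indicating high risk are those given in the questionnaire above):

def stop_bang_score(snoring: bool, tiredness: bool, observed_apnea: bool,
                    high_blood_pressure: bool, bmi_kg_m2: float,
                    age_years: int, neck_circumference_cm: float,
                    male: bool) -> int:
    """Tally the eight STOP-BANG items, 1 point each."""
    items = [
        snoring,
        tiredness,
        observed_apnea,
        high_blood_pressure,
        bmi_kg_m2 > 35,
        age_years > 50,
        neck_circumference_cm > 40,
        male,
    ]
    return sum(items)


def high_risk_of_osa(score: int) -> bool:
    """A score of 3 or more indicates a high risk of obstructive sleep apnea."""
    return score >= 3


# Example: a 62-year-old man with loud snoring and treated hypertension scores 4.
score = stop_bang_score(snoring=True, tiredness=False, observed_apnea=False,
                        high_blood_pressure=True, bmi_kg_m2=31,
                        age_years=62, neck_circumference_cm=39, male=True)
print(score, high_risk_of_osa(score))  # 4 True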

SLEEP STUDIES: IN THE LABORATORY OR AT HOME

In-laboratory polysomnography entails electro-oculography, electromyography, electroencephalography, electrocardiography, pulse oximetry, and measurement of oronasal flow and thoracoabdominal movement (using sensors and belts). It should be performed in patients who have significant comorbid conditions.

A home sleep study, which is more limited than polysomnography, is appropriate in those who have a high probability of obstructive sleep apnea and who do not have other sleep disorders or significant cardiovascular, neurologic, or respiratory disorders.

Subsequently, if obstructive sleep apnea is found, a positive airway pressure titration study is performed to determine the optimal pressure requirements.

CPAP IS THE GOLD STANDARD TREATMENT

Behavioral changes are recommended to correct factors that predispose to obstructive sleep apnea or aggravate it. These changes include avoiding alcohol, sleeping on one’s side rather than supine, weight reduction in overweight individuals, and treating nasal congestion. In some situations, oral appliances or surgical options can be considered. However, CPAP is the gold standard therapy and the one most commonly used.

CPAP LOWERS BLOOD PRESSURE

Effective treatment of obstructive sleep apnea, added to an antihypertensive regimen, can lower blood pressure more than the antihypertensive regimen alone.

Several meta-analyses have shown modest improvements in blood pressure with CPAP in hypertensive patients. CPAP’s effect on blood pressure seems to be more pronounced in those with resistant hypertension, in whom a meta-analysis of randomized controlled trials demonstrated a mean reduction in systolic blood pressure of 6.74 mm Hg and a mean reduction in diastolic blood pressure of 5.94 mm Hg.12 A recent clinic-based (“real-world”) study revealed lowering of blood pressure in patients with resistant and nonresistant hypertension—approximately 2 to 3 mm Hg after CPAP therapy.13

Furthermore, a randomized controlled trial in Spain showed that the nocturnal nondipping pattern observed in patients with resistant hypertension was reversed with the use of CPAP.14

References
  1. Peppard PE, Young T, Barnet JH, Palta M, Hagen EW, Hla KM. Increased prevalence of sleep-disordered breathing in adults. Am J Epidemiol 2013; 177:1006–1014.
  2. Peppard PE, Young T, Palta M, et al. Prospective study of the association between sleep-disordered breathing and hypertension. N Engl J Med 2000; 342:1378–1384.
  3. Marin JM, Agusti A, Villar I, et al. Association between treated and untreated obstructive sleep apnea and risk of hypertension. JAMA 2012; 307:2169–2176.
  4. Logan AG, Perlikowski SM, Mente A, et al. High prevalence of unrecognized sleep apnoea in drug-resistant hypertension. J Hypertens 2001; 19:2271–2277.
  5. Walia HK, Li H, Rueschman M, et al. Association of severe obstructive sleep apnea and elevated blood pressure despite antihypertensive medication use. J Clin Sleep Med 2014; 10:835–843.
  6. Chobanian AV, Bakris GL, Black HR, et al; Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure; National Heart, Lung, and Blood Institute; National High Blood Pressure Education Program Coordinating Committee. Seventh report of the Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure. Hypertension 2003; 42:1206–1252.
  7. Mancia G, Fagard R, Narkiewicz K, et al; Task Force Members. 2013 ESH/ESC guidelines for the management of arterial hypertension: the task force for the management of arterial hypertension of the European Society of Hypertension (ESH) and of the European Society of Cardiology (ESC). J Hypertens 2013; 31:1281–1357.
  8. Somers VK, Dyken ME, Clary MP, Abboud FM. Sympathetic neural mechanisms in obstructive sleep apnea. J Clin Invest 1995; 96:1897–1904.
  9. Jelic S, Bartels MN, Mateika JH, Ngai P, DeMeersman RE, Basner RC. Arterial stiffness increases during obstructive sleep apneas. Sleep 2002; 25:850–855.
  10. Dudenbostel T, Calhoun DA. Resistant hypertension, obstructive sleep apnoea and aldosterone. J Hum Hypertens 2012; 26:281–287.
  11. Chung F, Yegneswaran B, Liao P, et al. STOP questionnaire: a tool to screen patients for obstructive sleep apnea. Anesthesiology 2008; 108:812–821.
  12. Iftikhar IH, Valentine CW, Bittencourt LR, et al. Effects of continuous positive airway pressure on blood pressure in patients with resistant hypertension and obstructive sleep apnea: a meta-analysis. J Hypertens 2014; 32:2341–2350.
  13. Walia HK, Griffith SD, Foldvary-Schaefer N, et al. Longitudinal effect of CPAP on BP in resistant and nonresistant hypertension in a large clinic-based cohort. Chest 2016; 149:747–755.
  14. Martinez-Garcia MA, Capote F, Campos-Rodriguez F, et al; Spanish Sleep Network. Effect of CPAP on blood pressure in patients with obstructive sleep apnea and resistant hypertension: the HIPARCO randomized clinical trial. JAMA 2013; 310:2407–2415.
Author and Disclosure Information

Harneet K. Walia, MD
Center for Sleep Disorders, Cleveland Clinic; Assistant Professor, Cleveland Clinic Lerner College of Medicine of Case Western Reserve University, Cleveland, OH; Neurological Institute Center for Outcomes Research and Evaluation Scholar, 2016

Address: Harneet K. Walia, MD, Center for Sleep Disorders, Neurological Institute, FA20, Cleveland Clinic, 9500 Euclid Avenue, Cleveland, OH; [email protected]

Issue
Cleveland Clinic Journal of Medicine - 83(6)
Publications
Topics
Page Number
419-421
Legacy Keywords
obstructive sleep apnea, hypertension, high blood pressure, resistant hypertension, Harneet Walia
Sections
Author and Disclosure Information

Harneet K. Walia, MD
Center for Sleep Disorders, Cleveland Clinic; Assistant Professor, Cleveland Clinic Lerner College of Medicine of Case Western Reserve University, Cleveland, OH; Neurological Institute Center for Outcomes Research and Evaluation Scholar, 2016

Address: Harneet K. Walia, MD, Center for Sleep Disorders, Neurological Institute, FA20, Cleveland Clinic, 9500 Euclid Avenue, Cleveland, OH; [email protected]

Author and Disclosure Information

Harneet K. Walia, MD
Center for Sleep Disorders, Cleveland Clinic; Assistant Professor, Cleveland Clinic Lerner College of Medicine of Case Western Reserve University, Cleveland, OH; Neurological Institute Center for Outcomes Research and Evaluation Scholar, 2016

Address: Harneet K. Walia, MD, Center for Sleep Disorders, Neurological Institute, FA20, Cleveland Clinic, 9500 Euclid Avenue, Cleveland, OH; [email protected]

Article PDF
Article PDF
Related Articles

Yes. Obstructive sleep apnea is common and is associated with hypertension and resistant hypertension. Physicians taking care of patients who have hard-to-control hypertension should be aware of the possible diagnosis of obstructive sleep apnea and screen them for it. In-laboratory polysomnography or home sleep testing should be offered if appropriate, and if obstructive sleep apnea is detected, it should be treated, as this treatment may help to control blood pressure more effectively.

OBSTRUCTIVE SLEEP APNEA IS COMMON

Obstructive sleep apnea is characterized by recurrent episodes of partial or complete collapse of the upper airway during sleep, with partial collapse leading to hypopnea and complete collapse leading to apnea. These episodes result in intermittent hypoxemia, microarousals, sleep fragmentation, daytime sleepiness, and impairment in quality of life.

In tandem with the increasing obesity epidemic, the prevalence of moderate to severe obstructive sleep apnea is 17% in men and 9% in women 50 to 70 years old.1

LINKED TO HYPERTENSION

The respiratory events that occur in obstructive sleep apnea are associated with blood pressure surges during sleep that can cause persistent elevated blood pressure while awake. Obstructive sleep apnea has been independently associated with incident hypertension in large epidemiologic studies, even after correction for confounding factors such as obesity and its surrogate markers.

Moreover, the more severe the obstructive sleep apnea, the greater the risk of incident hypertension.2 And large, long-term observational studies have shown higher incidence rates of hypertension in people with untreated obstructive sleep apnea than in those who underwent treatment for it with continuous positive airway pressure (CPAP).3

Obstructive sleep apnea is also associated with nocturnal nondipping of blood pressure (defined as failure of blood pressure to decline by at least 10% during sleep), which is an independent marker for worse cardiovascular outcomes and hypertension-induced target organ damage.

Obstructive sleep apnea is particularly common in those with drug-resistant hypertension,4 which is defined as a suboptimal control of blood pressure despite the use of multiple antihypertensive medications of different classes, a condition associated with significant rates of cardiovascular morbidity and mortality. Even in patients at high risk of cardiovascular disease, we found that those with severe obstruction of the upper airway during sleep had fourfold higher odds of having resistant elevated blood pressure.5

The seventh Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure recognized obstructive sleep apnea as one of the causes of secondary hypertension.6 The 2013 European Society of Hypertension/European Society of Cardiology guidelines7 suggested an evaluation of obstructive sleep apnea symptoms for the management of hypertension.

MECHANISMS LINKING OBSTRUCTIVE SLEEP APNEA AND HYPERTENSION

Pathophysiologic mechanisms that may explain the association between obstructive sleep apnea and hypertension include stimulation of sympathetic activity,8 increased arterial stiffness, and endothelial dysfunction driven by apnea-related intermittent hypoxemia.9 Increased systemic inflammation and oxidative stress caused by obstructive sleep apnea are other proposed mechanisms.

Conversely, resistant hypertension may worsen obstructive sleep apnea. Some propose that activation of the renin-angiotensin-aldosterone system can cause parapharyngeal edema and rostral fluid shifts during sleep and thereby increase upper airway obstruction and worsen the severity of obstructive sleep apnea.10

CONSIDER SCREENING

Patients with resistant hypertension and risk factors for obstructive sleep apnea should be screened for it, as it is very common in this population.

A simple screening tool that can be used to detect sleep apnea is the STOP-BANG questionnaire11:

  • Snore: Have you been told that you snore loudly?
  • Tired: Are you often tired during the day?
  • Observed apnea: Do you know if you stop breathing, or has anyone witnessed you stop breathing while sleeping?
  • Pressure: Do you have or are you being treated for high blood pressure?
  • Body mass index: Is your body mass index greater than 35 kg/m2?
  • Age: older than 50?
  • Neck circumference: greater than 40 cm?
  • Gender: Male?

A score of 3 or more indicates a high risk of obstructive sleep apnea, and further workup for it is appropriate. Some of the other symptoms and signs are listed in Table 1.

SLEEP STUDIES: IN THE LABORATORY OR AT HOME

In-laboratory polysomnography entails electro-oculography, electromyography, electroencephalography, electrocardiography, pulse oximetry, and measurement of oronasal flow and thoracoabdominal movement (using sensors and belts). It should be performed in patients who have significant comorbid conditions.

A home sleep study, which is more limited than polysomnography, is appropriate in those who have a high probability of obstructive sleep apnea and who do not have other sleep disorders or significant cardiovascular, neurologic, or respiratory disorders.

Subsequently, if obstructive sleep apnea is found, a positive airway pressure titration study is performed to determine the optimal pressure requirements.

CPAP IS THE GOLD STANDARD TREATMENT

Behavioral changes are recommended to correct factors that predispose to obstructive sleep apnea or aggravate it. These changes include avoiding alcohol, sleeping on one’s side rather than supine, weight reduction in overweight individuals, and treating nasal congestion. In some situations, oral appliances or surgical options can be considered. However, CPAP is the gold standard therapy and the one most commonly used.

CPAP LOWERS BLOOD PRESSURE

Effective treatment of obstructive sleep apnea, added to an antihypertensive regimen, can further lower the blood pressure more than the antihypertensive medication regimen by itself.

Several meta-analyses have shown modest improvements in blood pressure with CPAP in hypertensive patients. CPAP’s effect on blood pressure seems to be more pronounced in those with resistant hypertension, in whom a meta-analysis of randomized controlled trials demonstrated a mean reduction in systolic blood pressure of 6.74 mm Hg and a mean reduction in diastolic blood pressure of 5.94 mm Hg.12 A recent clinic-based (“real-world”) study revealed lowering of blood pressure in patients with resistant and nonresistant hypertension—approximately 2 to 3 mm Hg after CPAP therapy.13

Furthermore, a randomized controlled trial in Spain showed that the nocturnal nondipping pattern observed in patients with resistant hypertension was reversed with the use of CPAP.14

References
  1. Peppard PE, Young T, Barnet JH, Palta M, Hagen EW, Hla KM. Increased prevalence of sleep-disordered breathing in adults. Am J Epidemiol 2013; 177:1006–1014.
  2. Peppard PE, Young T, Palta M, et al. Prospective study of the association between sleep-disordered breathing and hypertension. N Engl J Med 2000; 342:1378–1384.
  3. Marin JM, Agusti A, Villar I, et al. Association between treated and untreated obstructive sleep apnea and risk of hypertension. JAMA 2012; 307:2169–2176.
  4. Logan AG, Perlikowski SM, Mente A, et al. High prevalence of unrecognized sleep apnoea in drug-resistant hypertension. J Hypertens 2001; 19:2271–2277.
  5. Walia HK, Li H, Rueschman M, et al. Association of severe obstructive sleep apnea and elevated blood pressure despite antihypertensive medication use. J Clin Sleep Med 2014; 10:835–843.
  6. Chobanian AV, Bakris GL, Black HR, et al; Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure; National Heart, Lung, and Blood Institute; National High Blood Pressure Education Program Coordinating Committee. Seventh report of the Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure. Hypertension 2003; 42:1206–1252.
  7. Mancia G, Fagard R, Narkiewicz K, et al; Task Force Members. 2013 ESH/ESC guidelines for the management of arterial hypertension: the task force for the management of arterial hypertension of the European Society of Hypertension (ESH) and of the European Society of Cardiology (ESC). J Hypertens 2013; 31:1281–1357.
  8. Somers VK, Dyken ME, Clary MP, Abboud FM. Sympathetic neural mechanisms in obstructive sleep apnea. J Clin Invest 1995; 96:1897–1904.
  9. Jelic S, Bartels MN, Mateika JH, Ngai P, DeMeersman RE, Basner RC. Arterial stiffness increases during obstructive sleep apneas. Sleep 2002; 25:850–855.
  10. Dudenbostel T, Calhoun DA. Resistant hypertension, obstructive sleep apnoea and aldosterone. J Hum Hypertens 2012; 26:281–287.
  11. Chung F, Yegneswaran B, Liao P, et al. STOP questionnaire: a tool to screen patients for obstructive sleep apnea. Anesthesiology 2008; 108:812–821.
  12. Iftikhar IH, Valentine CW, Bittencourt LR, et al. Effects of continuous positive airway pressure on blood pressure in patients with resistant hypertension and obstructive sleep apnea: a meta-analysis. J Hypertens 2014; 32:2341–2350.
  13. Walia HK, Griffith SD, Foldvary-Schaefer N, et al. Longitudinal effect of CPAP on BP in resistant and nonresistant hypertension in a large clinic-based cohort. Chest 2016; 149:747–755.
  14. Martinez-Garcia MA, Capote F, Campos-Rodriguez F, et al; Spanish Sleep Network. Effect of CPAP on blood pressure in patients with obstructive sleep apnea and resistant hypertension: the HIPARCO randomized clinical trial. JAMA 2013; 310:2407–2415.

Whiplash-shaped acute rash

Article Type
Changed
Wed, 08/16/2017 - 13:47
Display Headline
Whiplash-shaped acute rash

A previously healthy 32-year-old man presented to the emergency room with a persistent, nonpruritic rash on his trunk, which had suddenly appeared 2 days after he ate Chinese food.

Figure 1. A widespread streaked rash with a scratch-like appearance over the patient's back.

Physical examination revealed multiple crosslinked linear plaques that appeared like scratches over his chest, back, and shoulders (Figures 1 and 2). He had no dermatographism, and his scalp, nails, palms, and soles were not affected. He had no signs of lymphadenopathy or systemic involvement.

Figure 2. Closer inspection of the lesions showed intensely erythematous linear plaques with a pseudovesicular surface.

Basic blood and urinary laboratory testing, blood cultures, and serologic studies showed normal or negative results.

Given the presentation and results of initial testing, his rash was diagnosed as flagellate erythema, likely due to shiitake mushroom intake. The diagnosis does not require histopathologic confirmation.

The rash resolved spontaneously over the next 2 weeks with use of a topical emollient and without scarring or residual hyperpigmentation.

FLAGELLATE ERYTHEMA

Flagellate erythema is a peculiar cutaneous eruption characterized by the progressive or sudden onset of parallel linear or curvilinear plaques, most commonly on the trunk. The plaques are typically arranged in a scratch pattern resembling marks left by the lashes of a whip.1 In contrast to other itchy dermatoses and neurotic excoriations that may present with self-induced linear marks, flagellate erythema appears spontaneously.

Drug-related causes, disease associations

Originally described in association with bleomycin treatment, flagellate erythema is currently considered a distinct feature of several dermatologic and systemic disorders, and therefore the ability to recognize it is valuable in daily practice.2 In addition to bleomycin analogues and anticancer agents such as peplomycin,1 bendamustine,3 and docetaxel,4 physicians should consider shiitake dermatitis5 and other less commonly reported associations such as dermatomyositis,6 lupus,7 Still disease,8 and parvovirus infection.9

Diagnostic features

The diagnosis of flagellate erythema is mainly based on the morphologic features of the clinical lesions.1 Shiitake dermatitis and flagellate erythema related to rheumatologic disease usually present with more inflammatory and erythematous plaques. Chemotherapy-induced flagellate rash typically has a violaceous or purpuric coloration, which tends to leave noticeable hyperpigmentation for several months.2

Skin biopsy may be necessary to distinguish it from similar-looking dermatoses with different histologic findings, such as dermatographism, phytophotodermatitis, erythema gyratum repens, and factitious dermatoses, which may require specific treatments or be related to important underlying pathology.1,2

Treatment

Treatment includes both specific treatment of the underlying cause and symptomatic care of the skin with topical emollients and, in cases of associated pruritus, oral antihistamines. The patient should also be reassured that the rash of shiitake dermatitis is self-limited.5

References
  1. Yamamoto T, Nishioka K. Flagellate erythema. Int J Dermatol 2006; 45:627–631.
  2. Bhushan P, Manjul P, Baliyan V. Flagellate dermatoses. Indian J Dermatol Venereol Leprol 2014; 80:149–152.
  3. Mahmoud BH, Eide MJ. Bendamustine-induced “flagellate dermatitis.” Dermatol Online J 2012; 18:12.
  4. Tallon B, Lamb S. Flagellate erythema induced by docetaxel. Clin Exp Dermatol 2008; 33:276–277.
  5. Adler MJ, Larsen WG. Clinical variability of shiitake dermatitis. J Am Acad Dermatol 2012; 67:140–141.
  6. Jara M, Amérigo J, Duce S, Borbujo J. Dermatomyositis and flagellate erythema. Clin Exp Dermatol 1996; 21:440–441.
  7. Niiyama S, Katsuoka K. Systemic lupus erythematosus with flagellate erythema. Eur J Dermatol 2012; 22:808–809.
  8. Ciliberto H, Kumar MG, Musiek A. Flagellate erythema in a patient with fever. JAMA Dermatol 2013; 149:1425–1426.
  9. Miguélez A, Dueñas J, Hervás D, Hervás JA, Salva F, Martín-Santiago A. Flagellate erythema in parvovirus B19 infection. Int J Dermatol 2014; 53:e583–e585.
Author and Disclosure Information

Lidia Maroñas-Jiménez, MD
Department of Dermatology, Hospital 12 de Octubre, and i+12 Research Institute, Universidad Complutense, Madrid, Spain

Alejandro Lobato-Berezo, MD
Department of Dermatology, Hospital Severo Ochoa, Leganés, Madrid, Spain

Ramon Pigem, MD
Department of Dermatology, Hospital Clínic, Barcelona, Spain

Diana Menis, MD
Department of Dermatology, Hospital 12 de Octubre, Madrid, Spain

Sara Palencia-Pérez, MD, PhD
Department of Dermatology, Hospital 12 de Octubre, Madrid, Spain

Address: Lidia Maroñas-Jiménez, MD, Department of Dermatology, Hospital 12 de Octubre, Avenida de Córdoba s/n, 28041 Madrid, Spain; [email protected]


A guide to managing acute liver failure

Article Type
Changed
Wed, 08/16/2017 - 13:38
Display Headline
A guide to managing acute liver failure

When the liver fails, it usually fails gradually. The sudden (acute) onset of liver failure, while less common, demands prompt management, with transfer to an intensive care unit, specific treatment depending on the cause, and consideration of liver transplant, without which the mortality rate is high.

This article reviews the definition, epidemiology, etiology, and management of acute liver failure.

DEFINITIONS

Acute liver failure is defined as a syndrome of acute hepatitis with evidence of abnormal coagulation (eg, an international normalized ratio > 1.5) complicated by the development of mental alteration (encephalopathy) within 26 weeks of the onset of illness in a patient without a history of liver disease.1 In general, patients have no evidence of underlying chronic liver disease, but there are exceptions; patients with Wilson disease, vertically acquired hepatitis B virus infection, or autoimmune hepatitis can present with acute liver failure superimposed on chronic liver disease or even cirrhosis.

The term acute liver failure has replaced older terms such as fulminant hepatic failure, hyperacute liver failure, and subacute liver failure, which were used for prognostic purposes. Patients with hyperacute liver failure (defined as development of encephalopathy within 7 days of onset of illness) generally have a good prognosis with medical management, whereas those with subacute liver failure (defined as development of encephalopathy within 5 to 26 weeks of onset of illness) have a poor prognosis without liver transplant.2,3

NEARLY 2,000 CASES A YEAR

There are nearly 2,000 cases of acute liver failure each year in the United States, and it accounts for 6% of all deaths due to liver disease.4 It is more common in women than in men, and more common in white people than in other races. The peak incidence is at a fairly young age, ie, 35 to 45 years.

CAUSES

The most common cause of acute liver failure in the United States and other Western countries is acetaminophen toxicity, followed by viral hepatitis. In contrast, viral hepatitis is the most common cause in developing countries.5

Acetaminophen toxicity

Patients with acetaminophen-induced liver failure tend to be younger than other patients with acute liver failure.1 Nearly half of them present after intentionally taking a single large dose, while the rest present with unintentional toxicity while taking acetaminophen for pain relief on a long-term basis and ingesting more than the recommended dose.6

After ingestion, 52% to 57% of acetaminophen is converted to glucuronide conjugates, and 30% to 44% is converted to sulfate conjugates. These compounds are nontoxic, water-soluble, and rapidly excreted in the urine.

However, about 5% to 10% of ingested acetaminophen is shunted to the cytochrome P450 system. P450 2E1 is the main isoenzyme involved in acetaminophen metabolism, but 1A2, 3A4, and 2A6 also contribute.7,8 P450 2E1 is the same isoenzyme responsible for ethanol metabolism and is inducible. Thus, regular alcohol consumption can increase P450 2E1 activity, setting the stage under certain circumstances for increased acetaminophen metabolism through this pathway.

Figure 1. Acetaminophen metabolism. (Reprinted from Schilling A, Corey R, Leonard M, Eghtesad B. Acetaminophen: old drug, new warnings. Cleve Clin J Med 2010; 77:19–27.)

Metabolism of acetaminophen through the cytochrome P450 pathway results in production of N-acetyl-p-benzoquinone imine (NAPQI), the compound that damages the liver. NAPQI is rendered nontoxic by binding to glutathione, forming NAPQI-glutathione adducts. Glutathione capacity is limited, however. With too much acetaminophen, glutathione becomes depleted and NAPQI accumulates, binds with proteins to form adducts, and leads to necrosis of hepatocytes (Figure 1).9,10

Acetylcysteine, used in treating acetaminophen toxicity, is a substrate for glutathione synthesis and ultimately increases the amount of glutathione available to bind NAPQI and prevent damage to hepatocytes.11

Acetaminophen is a dose-related toxin. Most ingestions leading to acute liver failure exceed 10 g/day (> 150 mg/kg/day). Moderate chronic ingestion, eg, 4 g/day, usually leads to transient mild elevation of liver enzymes in healthy individuals12 but can in rare cases cause acute liver failure.13

Whitcomb and Block14 retrospectively identified 49 patients who presented with acetaminophen-induced hepatotoxicity in 1987 through 1993; 21 (43%) had been taking acetaminophen for therapeutic purposes. All 49 patients took more than the recommended limit of 4 g/day, many of them while fasting and some while using alcohol. Acute liver failure was seen with ingestion of more than 12 g/day—or more than 10 g/day in alcohol users. The authors attributed the increased risk to activation of cytochrome P450 2E1 by alcohol and depletion of glutathione stores by starvation or alcohol abuse. 

Advice to patients taking acetaminophen is given in Table 1.

Other drugs and supplements

A number of other drugs and herbal supplements can also cause acute liver failure (Table 2), the most common being antimicrobial and antiepileptic drugs.15 Of the antimicrobials, antitubercular drugs (especially isoniazid) are believed to be the most common causes, followed by trimethoprim-sulfamethoxazole. Phenytoin is the antiepileptic drug most often implicated in acute liver failure.

Statins can also cause acute liver failure, especially when combined with other hepatotoxic agents.16

The herbal supplements and weight-loss agents Hydroxycut and Herbalife have both been reported to cause acute liver failure, with patients presenting with either the hepatocellular or the cholestatic pattern of liver injury.17 The exact chemical in these supplements that causes liver injury has not yet been determined.

The National Institutes of Health maintains a database of cases of liver failure due to medications and supplements at livertox.nih.gov. The database includes the pattern of hepatic injury, mechanism of injury, management, and outcomes.

Viral hepatitis

Hepatitis B virus is the most common viral cause of acute liver failure and is responsible for about 8% of cases.18

Patients with chronic hepatitis B virus infection—as evidenced by positive hepatitis B surface antigen—can develop acute liver failure if the infection is reactivated by the use of immunosuppressive drugs for solid-organ or bone-marrow transplant or medications such as anti-tumor necrosis factor agents, rituximab, or chemotherapy. These patients should be treated prophylactically with a nucleoside analogue, which should be continued for 6 months after immunosuppressive therapy is completed.

Hepatitis A virus is responsible for about 4% of cases.18

Hepatitis C virus rarely causes acute liver failure, especially in the absence of hepatitis A and hepatitis B.3,19

Hepatitis E virus, which is endemic in areas of Asia and Africa, can cause liver disease in pregnant women and in young adults who have concomitant liver disease from another cause. It tends to cause acute liver failure more frequently in pregnant women than in the rest of the population and carries a mortality rate of more than 20% in this subgroup.

TT (transfusion-transmitted) virus was reported in the 1990s to cause acute liver failure in about 27% of patients in whom no other cause could be found.20

Other rare viral causes of acute liver failure include Epstein-Barr virus, cytomegalovirus, herpes simplex virus types 1 and 2, and human herpesvirus 6.

Other causes

Other causes of acute liver failure include ischemic hepatitis, autoimmune hepatitis, Wilson disease, Budd-Chiari syndrome, and HELLP (hemolysis, elevated liver enzymes and low platelets) syndrome.

MANY PATIENTS NEED LIVER TRANSPLANT

Many patients with acute liver failure ultimately require orthotopic liver transplant,21 especially if they present with severe encephalopathy. Other aspects of treatment vary according to the cause of liver failure (Table 3).

SPECIFIC MANAGEMENT

Management of acetaminophen toxicity

If the time of ingestion is known, checking the acetaminophen level can help determine the cause of acute liver failure and also predict the risk of hepatotoxicity, based on the work of Rumack and Matthew.22 Calculators are available, eg, http://reference.medscape.com/calculator/acetaminophen-toxicity.
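
As a rough illustration of what such calculators compute, the sketch below encodes the commonly cited Rumack-Matthew treatment line, usually described as starting at 150 μg/mL at 4 hours after a single acute ingestion and halving every 4 hours out to 24 hours. The exact line is not reproduced in this article, so treat this as an educational approximation rather than a substitute for the nomogram or poison-control guidance.

```python
def treatment_line_ug_per_ml(hours_since_ingestion):
    """Approximate Rumack-Matthew treatment line: 150 ug/mL at 4 h,
    halving every 4 h; applies only 4 to 24 h after a single acute ingestion."""
    t = hours_since_ingestion
    if not 4 <= t <= 24:
        raise ValueError("The nomogram applies only 4 to 24 hours after ingestion")
    return 150.0 * 0.5 ** ((t - 4) / 4.0)

def above_treatment_line(level_ug_per_ml, hours_since_ingestion):
    """True if the measured acetaminophen level falls on or above the line."""
    return level_ug_per_ml >= treatment_line_ug_per_ml(hours_since_ingestion)

# Example: a level of 90 ug/mL drawn 8 hours after ingestion
print(treatment_line_ug_per_ml(8))   # 75.0
print(above_treatment_line(90, 8))   # True -> acetylcysteine is indicated
```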

If a patient presents with acute liver failure several days after ingesting acetaminophen, the level can be in the nontoxic range, however. In this scenario, measuring acetaminophen-protein adducts can help establish acetaminophen toxicity as the cause, as the adducts last longer in the serum and provide 100% sensitivity and specificity.23 While most laboratories can rapidly measure acetaminophen levels, only a few can measure acetaminophen-protein adducts, and thus this test is not used clinically.

Acetylcysteine is the main drug used for acetaminophen toxicity. Ideally, it should be given within 8 hours of acetaminophen ingestion, but giving it later is also useful.1

Acetylcysteine is available in oral and intravenous forms, the latter for patients who have encephalopathy or cannot tolerate oral intake due to repeated episodes of vomiting.24,25 The oral form is much less costly and is thus preferred over intravenous acetylcysteine in patients who can tolerate oral intake. Intravenous acetylcysteine should be given in a loading dose of 150 mg/kg in 5% dextrose over 15 minutes, followed by a maintenance dose of 50 mg/kg over 4 hours and then 100 mg/kg given over 16 hours.1 No dose adjustment is needed in patients who have renal toxicity (acetaminophen can also be toxic to the kidneys).
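
To make the weight-based arithmetic of the intravenous regimen explicit, here is a small sketch that computes the three doses quoted above for a given body weight. The function and label names are illustrative, and actual dosing should follow institutional protocols and pharmacy guidance.

```python
def iv_acetylcysteine_doses(weight_kg):
    """Doses for the three-bag IV regimen quoted in the text:
    150 mg/kg over 15 min, then 50 mg/kg over 4 h, then 100 mg/kg over 16 h."""
    return {
        "loading dose over 15 min (mg)": 150 * weight_kg,
        "second infusion over 4 h (mg)": 50 * weight_kg,
        "third infusion over 16 h (mg)": 100 * weight_kg,
        "total over about 20 h (mg)": 300 * weight_kg,
    }

# Example: a 70-kg patient
for label, dose in iv_acetylcysteine_doses(70).items():
    print(f"{label}: {dose:,.0f}")
# loading 10,500 mg; second 3,500 mg; third 7,000 mg; total 21,000 mg
```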

Most patients with acetaminophen-induced liver failure survive with medical management alone and do not need a liver transplant.3,26 Cirrhosis does not occur in these patients.

Management of viral acute liver failure

When patients present with acute liver failure, it is necessary to look for a viral cause by serologic testing, including hepatitis A virus IgM antibody, hepatitis B surface antigen, and hepatitis B core IgM antibody.

Hepatitis B can become reactivated in immunocompromised patients, and therefore the hepatitis B virus DNA level should be checked. Detection of hepatitis B virus DNA in a patient previously known to have undetectable hepatitis B virus DNA confirms hepatitis B reactivation.

Patients with hepatitis B-induced acute liver failure should be treated with entecavir or tenofovir. Although this treatment may not change the course of acute liver failure or accelerate the recovery, it can prevent reinfection in the transplanted liver if liver transplant becomes indicated.27–29

Herpes simplex virus should be suspected in patients presenting with anicteric hepatitis with fever. Polymerase chain reaction testing for herpes simplex virus should be done,30 and if positive, patients should be given intravenous acyclovir.31 Despite treatment, herpes simplex virus disease is associated with a very poor prognosis without liver transplant.

Autoimmune hepatitis

Patients should be tested for the autoantibodies usually seen in autoimmune hepatitis: antinuclear antibody, anti-smooth muscle antibody, and anti-liver-kidney microsomal antibody.

The diagnosis of autoimmune hepatitis can be challenging, as these autoimmune markers can be negative in 5% of patients. Liver biopsy becomes essential to establish the diagnosis in that setting.32

Guidelines advise starting prednisone 40 to 60 mg/day and placing the patient on the liver transplant list.1

Wilson disease

Although it is an uncommon cause of liver failure, Wilson disease needs special attention because it has a poor prognosis. The mortality rate in acute liver failure from Wilson disease reaches 100% without liver transplant.

Wilson disease is caused by a genetic defect that allows copper to accumulate in the liver and other organs. However, diagnosing Wilson disease as the cause of acute liver failure can be challenging because elevated serum and urine copper levels are not specific to Wilson disease and can be seen in patients with acute liver failure from any cause. In addition, the ceruloplasmin level is usually normal or high because it is an acute-phase reactant. Accumulation of copper in the liver parenchyma is usually patchy; therefore, qualitative copper staining on random liver biopsy samples provides low diagnostic yield. Quantitative copper measurement on liver biopsy is the gold standard test to establish the diagnosis, but it is time-consuming. Kayser-Fleischer rings around the iris are considered pathognomonic for Wilson disease when seen with acute liver failure, but they are seen in only about 50% of patients.33

A unique feature of acute Wilson disease is that most patients have very high bilirubin levels and low alkaline phosphatase levels. An alkaline phosphatase-to-bilirubin ratio less than 2 in patients with acute liver failure is highly suggestive of Wilson disease.34

Another clue to the diagnosis is that patients with Wilson disease tend to develop Coombs-negative hemolytic anemia, which leads to a disproportionate elevation in aminotransferase levels, with aspartate aminotransferase being higher than alanine aminotransferase.
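
The two biochemical clues above can be combined into a simple bedside check. The sketch below flags a presentation suggestive of fulminant Wilson disease when the alkaline phosphatase-to-total bilirubin ratio is below 2 and aspartate aminotransferase exceeds alanine aminotransferase, mirroring the thresholds discussed in the text; the cutoffs and names are illustrative, and the diagnosis still rests on the full workup described above.

```python
def suggests_fulminant_wilson_disease(alk_phos_u_per_l, total_bilirubin_mg_per_dl,
                                      ast_u_per_l, alt_u_per_l):
    """Flag acute liver failure presentations suggestive of Wilson disease using
    the clues in the text: ALP-to-bilirubin ratio < 2 and AST greater than ALT."""
    if total_bilirubin_mg_per_dl <= 0 or alt_u_per_l <= 0:
        raise ValueError("Bilirubin and ALT must be positive")
    low_alp_to_bilirubin = (alk_phos_u_per_l / total_bilirubin_mg_per_dl) < 2
    ast_predominance = ast_u_per_l > alt_u_per_l
    return low_alp_to_bilirubin and ast_predominance

# Example: ALP 40 U/L, total bilirubin 32 mg/dL, AST 1,800 U/L, ALT 700 U/L
print(suggests_fulminant_wilson_disease(40, 32, 1800, 700))  # True
```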

Once Wilson disease is suspected, the patient should be listed for liver transplant because death is almost certain without it. For patients awaiting liver transplant, the American Association for the Study of Liver Diseases guidelines recommend certain measures to lower the serum copper level such as albumin dialysis, continuous hemofiltration, plasmapheresis, and plasma exchange,1 but the evidence supporting their use is limited.

NONSPECIFIC MANAGEMENT

Acute liver failure can affect a number of organs and systems in addition to the liver (Figure 2).

General considerations

Because their condition can rapidly deteriorate, patients with acute liver failure are best managed in intensive care.

Patients who present to a center that does not have the facilities for liver transplant should be transferred to a transplant center as soon as possible, preferably by air. If the patient may not be able to protect the airway, endotracheal intubation should be performed before transfer.

The major causes of death in patients with acute liver failure are cerebral edema and infection. Gastrointestinal bleeding was a major cause of death in the past, but with prophylactic use of histamine H2 receptor blockers and proton pump inhibitors, the incidence of gastrointestinal bleeding has been significantly reduced.

Although initially used only in patients with acetaminophen-induced liver failure, acetylcysteine has also shown benefit in patients with acute liver failure from other causes. In patients with grade 1 or 2 encephalopathy on a scale of 0 (minimal) to 4 (comatose), the transplant-free survival rate is higher when acetylcysteine is given compared with placebo, but this benefit does not extend to patients with a higher grade of encephalopathy.35

Cerebral edema and intracranial hypertension

Cerebral edema is the leading cause of death in patients with acute liver failure, and it develops in nearly 40% of patients.36

The mechanism by which cerebral edema develops is not well understood. Some have proposed that ammonia is converted to glutamine, which causes cerebral edema either directly by its osmotic effect37,38 or indirectly by decreasing other osmolytes, thereby promoting water retention.39

Cerebral edema leads to intracranial hypertension, which can ultimately cause cerebral herniation and death. Because of the high mortality rate associated with cerebral edema, invasive devices were extensively used in the past to monitor intracranial pressure. However, in light of known complications of these devices, including bleeding,40 and lack of evidence of long-term benefit in terms of mortality rates, their use has come under debate.

Treatments. Many treatments are available for cerebral edema and intracranial hypertension. The first step is to elevate the head of the bed about 30 degrees. In addition, hyponatremia should be corrected, as it can worsen cerebral edema.41 If patients are intubated, maintaining a mildly hypocapnic state is advisable to decrease the intracranial pressure.

Of the two pharmacologic options, mannitol is more often used.42 It is given as a bolus dose of 0.5 to 1 g/kg intravenously if the serum osmolality is less than 320 mOsm/L.1 Given the risk of fluid overload with mannitol, caution must be exercised in patients with renal dysfunction. The other pharmacologic option is 3% hypertonic saline.

Therapeutic hypothermia is a newer treatment for cerebral edema. Lowering the body temperature to 32 to 33°C (89.6 to 91.4°F) using cooling blankets decreases intracranial pressure and cerebral blood flow and improves the cerebral perfusion pressure.43 With this treatment, patients should be closely monitored for side effects such as infection, coagulopathy, and cardiac arrhythmias.1

l-Ornithine l-aspartate was used successfully to prevent brain edema in rats, but in humans it showed no benefit compared with placebo.44,45 The rationale for this experimental treatment is that supplemental ornithine and aspartate should increase glutamate synthesis and thereby the activity of the enzyme glutamine synthetase in skeletal muscle; with greater enzyme activity, more ammonia should be converted to glutamine, lowering circulating ammonia and thus cerebral edema.

Patients with cerebral edema have a high incidence of seizures, but prophylactic antiseizure medications such as phenytoin have not been proven to be beneficial.46

Infection

Nearly 80% of patients with acute liver failure develop an infectious complication, which can be attributed to a state of immunodeficiency.47

The respiratory and urinary tracts are the most common sources of infection.48 In patients with bacteremia, Enterococcus species and coagulase-negative Staphylococcus species are the most commonly isolated organisms.49 Fungal infections account for 30% of all infections in these patients.50

Infected patients often develop worsening of their encephalopathy51 without fever or elevated white blood cell count.49,52 Thus, in any patient in whom encephalopathy is worsening, an evaluation must be done to rule out infection. In these patients, systemic inflammatory response syndrome is an independent risk factor for death.53

Despite the high mortality rate associated with infection, it remains controversial whether prophylactic antibiotics are beneficial in acute liver failure.54,55

Gastrointestinal bleeding

The current prevalence of upper gastrointestinal bleeding in acute liver failure patients is about 1.5%.56 Coagulopathy and endotracheal intubation are the main risk factors for upper gastrointestinal bleeding in these patients.57 The most common source of bleeding is stress ulcers in the stomach. The ulcers develop from a combination of factors, including decreased blood flow to the mucosa causing ischemia and hypoperfusion-reperfusion injury.

Pharmacologic inhibition of gastric acid secretion has been shown to reduce upper gastrointestinal bleeding in acute liver failure. A histamine H2 receptor blocker or proton pump inhibitor should be given to prevent gastrointestinal bleeding in patients with acute liver failure.1,58

EXPERIMENTAL TREATMENTS

Artificial liver support systems

Membranes and dialysate solutions have been developed to remove toxic substances that are normally metabolized by the liver. Two of these—the molecular adsorbent recycling system (MARS) and the extracorporeal liver assist device (ELAD)—were developed in the late 1990s. MARS consisted of a highly permeable hollow fiber membrane mixed with albumin, and ELAD consisted of porcine hepatocytes attached to microcarriers in the extracapillary space of the hollow fiber membrane. Both systems allowed for transfer of water-soluble and protein-bound toxins in the blood across the membrane and into the dialysate.59 The clinical benefit offered by these devices is controversial,60–62 thus limiting their use to experimental purposes only.

Hepatocyte transplant

Use of hepatocyte transplant as a bridge to liver transplant was tested in the 1970s, first in rats and later in humans.63 By reducing the blood ammonia level and improving cerebral perfusion pressure and cardiac function, replacement of 1% to 2% of the total liver cell mass by transplanted hepatocytes acts as a bridge to orthotopic liver transplant.64,65

PROGNOSIS

Different criteria have been used to identify patients with poor prognosis who may eventually need to undergo liver transplant.

The King’s College criteria (Table 4) are the most commonly used prognostic system.37,66–69 Their main drawback is that they apply only to patients with encephalopathy, and by the time patients reach this stage, their condition often deteriorates rapidly, and they may die while awaiting liver transplant.37,66,67

The Model for End-Stage Liver Disease (MELD) score is an alternative to the King’s College criteria. A high MELD score on admission signifies advanced disease, and patients with a high MELD score tend to have a worse prognosis than those with a low score.68
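
For reference, the sketch below implements the classic (pre-sodium) MELD formula with its usual conventions: laboratory values below 1.0 are set to 1.0, creatinine is capped at 4.0 mg/dL (or set to 4.0 for patients on dialysis), and the result is rounded and capped at 40. The formula itself is not given in this article, so this is a commonly published version offered for orientation only.

```python
import math

def meld_score(creatinine_mg_dl, bilirubin_mg_dl, inr, on_dialysis=False):
    """Classic MELD score:
    10 * (0.957*ln(creatinine) + 0.378*ln(bilirubin) + 1.120*ln(INR) + 0.643),
    with a lower bound of 1.0 on inputs and an upper cap of 40 on the score."""
    creatinine = 4.0 if on_dialysis else min(max(creatinine_mg_dl, 1.0), 4.0)
    bilirubin = max(bilirubin_mg_dl, 1.0)
    inr = max(inr, 1.0)
    raw = 10 * (0.957 * math.log(creatinine)
                + 0.378 * math.log(bilirubin)
                + 1.120 * math.log(inr)
                + 0.643)
    return min(round(raw), 40)

# Example: creatinine 1.8 mg/dL, bilirubin 12 mg/dL, INR 3.2
print(meld_score(1.8, 12.0, 3.2))  # 34
```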

The Acute Physiology and Chronic Health Evaluation (APACHE) II score can also be used, as it is more sensitive than the King’s College criteria.6

The Clichy criteria66,69 can also be used.

Liver biopsy. In addition to helping establish the cause of acute liver failure, liver biopsy can also be used as a prognostic tool. Hepatocellular necrosis greater than 70% on the biopsy predicts death with a specificity of 90% and a sensitivity of 56%.70

Hypophosphatemia has been reported to indicate recovering liver function in patients with acute liver failure.71 As the liver regenerates, its energy requirement increases. To supply the energy, adenosine triphosphate production increases, and phosphorus shifts from the extracellular to the intracellular compartment to meet the need for extra phosphorus during this process. A serum phosphorus level of 2.9 mg/dL or higher appears to indicate a poor prognosis in patients with acute liver failure, as it signifies that adequate hepatocyte regeneration is not occurring.

References
  1. Polson J, Lee WM; American Association for the Study of Liver Disease. AASLD position paper: the management of acute liver failure. Hepatology 2005; 41:1179–1197.
  2. O’Grady JG, Schalm SW, Williams R. Acute liver failure: redefining the syndromes. Lancet 1993; 342:273–275.
  3. Ostapowicz G, Fontana RJ, Schiodt FV, et al; US Acute Liver Failure Study Group. Results of a prospective study of acute liver failure at 17 tertiary care centers in the United States. Ann Intern Med 2002; 137:947–954.
  4. Lee WM, Squires RH Jr, Nyberg SL, Doo E, Hoofnagle JH. Acute liver failure: summary of a workshop. Hepatology 2008; 47:1401–1415.
  5. Acharya SK, Panda SK, Saxena A, Gupta SD. Acute hepatic failure in India: a perspective from the East. J Gastroenterol Hepatol 2000; 15:473–479.
  6. Larson AM, Polson J, Fontana RJ, et al; Acute Liver Failure Study Group. Acetaminophen-induced acute liver failure: results of a United States multicenter, prospective study. Hepatology 2005; 42:1364–1372.
  7. Patten CJ, Thomas PE, Guy RL, et al. Cytochrome P450 enzymes involved in acetaminophen activation by rat and human liver microsomes and their kinetics. Chem Res Toxicol 1993; 6:511–518.
  8. Chen W, Koenigs LL, Thompson SJ, et al. Oxidation of acetaminophen to its toxic quinone imine and nontoxic catechol metabolites by baculovirus-expressed and purified human cytochromes P450 2E1 and 2A6. Chem Res Toxicol 1998; 11:295-301.
  9. Mitchell JR, Jollow DJ, Potter WZ, Gillette JR, Brodie BB. Acetaminophen-induced hepatic necrosis. IV. Protective role of glutathione. J Pharmacol Exp Ther 1973; 187:211–217.
  10. Schilling A, Corey R, Leonard M, Eghtesad B. Acetaminophen: old drug, new warnings. Cleve Clin J Med 2010; 77:19–27.
  11. Lauterburg BH, Corcoran GB, Mitchell JR. Mechanism of action of N-acetylcysteine in the protection against the hepatotoxicity of acetaminophen in rats in vivo. J Clin Invest 1983; 71:980–991.
  12. Watkins PB, Kaplowitz N, Slattery JT, et al. Aminotransferase elevations in healthy adults receiving 4 grams of acetaminophen daily: a randomized controlled trial. JAMA 2006; 296:87–93.
  13. Schiødt FV, Rochling FA, Casey DL, Lee WM. Acetaminophen toxicity in an urban county hospital. N Engl J Med 1997; 337:1112–1117.
  14. Whitcomb DC, Block GD. Association of acetaminophen hepatotoxicity with fasting and ethanol use. JAMA 1994; 272:1845–1850.
  15. Chalasani N, Fontana RJ, Bonkovsky HL, et al; Drug Induced Liver Injury Network (DILIN). Causes, clinical features, and outcomes from a prospective study of drug-induced liver injury in the United States. Gastroenterology 2008; 135:1924–1934.e1–e4.
  16. Reuben A, Koch DG, Lee WM; Acute Liver Failure Study Group. Drug-induced acute liver failure: results of a US multicenter, prospective study. Hepatology 2010; 52:2065–2076.
  17. Stevens T, Qadri A, Zein NN. Two patients with acute liver injury associated with use of the herbal weight-loss supplement hydroxycut. Ann Intern Med 2005; 142:477–478.
  18. Bernal W, Lee WM, Wendon J, Larsen FS, Williams R. Acute liver failure: a curable disease by 2024? J Hepatol 2015; 62(suppl 1):S112–S120.
  19. Schiodt FV, Davern TJ, Shakil AO, McGuire B, Samuel G, Lee WM. Viral hepatitis-related acute liver failure. Am J Gastroenterol 2003; 98:448–453.
  20. Charlton M, Adjei P, Poterucha J, et al. TT-virus infection in North American blood donors, patients with fulminant hepatic failure, and cryptogenic cirrhosis. Hepatology 1998; 28:839–842.
  21. Bismuth H, Samuel D, Gugenheim J, et al. Emergency liver transplantation for fulminant hepatitis. Ann Intern Med 1987; 107:337–341.
  22. Rumack BH, Matthew H. Acetaminophen poisoning and toxicity. Pediatrics 1975; 55:871–876.
  23. Davern TJ 2nd, James LP, Hinson JA, et al; Acute Liver Failure Study Group. Measurement of serum acetaminophen-protein adducts in patients with acute liver failure. Gastroenterology 2006; 130:687–694.
  24. Perry HE, Shannon MW. Efficacy of oral versus intravenous N-acetylcysteine in acetaminophen overdose: results of an open-label, clinical trial. J Pediatr 1998; 132:149–152.
  25. Smilkstein MJ, Knapp GL, Kulig KW, Rumack BH. Efficacy of oral N-acetylcysteine in the treatment of acetaminophen overdose. Analysis of the national multicenter study (1976 to 1985). N Engl J Med 1988; 319:1557–1562.
  26. Makin AJ, Wendon J, Williams R. A 7-year experience of severe acetaminophen-induced hepatotoxicity (1987-1993). Gastroenterology 1995; 109:1907–1916.
  27. Tsang SW, Chan HL, Leung NW, et al. Lamivudine treatment for fulminant hepatic failure due to acute exacerbation of chronic hepatitis B infection. Aliment Pharmacol Ther 2001; 15:1737–1744.
  28. Yu JW, Sun LJ, Yan BZ, Kang P, Zhao YH. Lamivudine treatment is associated with improved survival in fulminant hepatitis B. Liver Int 2011; 31:499–506.
  29. Garg H, Sarin SK, Kumar M, Garg V, Sharma BC, Kumar A. Tenofovir improves the outcome in patients with spontaneous reactivation of hepatitis B presenting as acute-on-chronic liver failure. Hepatology 2011; 53:774–780.
  30. Pinna AD, Rakela J, Demetris AJ, Fung JJ. Five cases of fulminant hepatitis due to herpes simplex virus in adults. Dig Dis Sci 2002; 47:750–754.
  31. Farr RW, Short S, Weissman D. Fulminant hepatitis during herpes simplex virus infection in apparently immunocompetent adults: report of two cases and review of the literature. Clin Infect Dis 1997; 24:1191–1194.
  32. Czaja AJ, Freese DK; American Association for the Study of Liver Disease. Diagnosis and treatment of autoimmune hepatitis. Hepatology 2002; 36:479–497.
  33. Roberts EA, Schilsky ML. A practice guideline on Wilson disease. Hepatology 2003; 37:1475–1492.
  34. Berman DH, Leventhal RI, Gavaler JS, Cadoff EM, Van Thiel DH. Clinical differentiation of fulminant Wilsonian hepatitis from other causes of hepatic failure. Gastroenterology 1991; 100:1129–1134.
  35. Lee WM, Hynan LS, Rossaro L, et al. Intravenous N-acetylcysteine improves transplant-free survival in early stage non-acetaminophen acute liver failure. Gastroenterology 2009; 137:856–864.
  36. O’Grady JG, Alexander GJ, Hayllar KM, Williams R. Early indicators of prognosis in fulminant hepatic failure. Gastroenterology 1989; 97:439–445.
  37. Clemmesen JO, Larsen FS, Kondrup J, Hansen BA, Ott P. Cerebral herniation in patients with acute liver failure is correlated with arterial ammonia concentration. Hepatology 1999; 29:648–653.
  38. Swain M, Butterworth RF, Blei AT. Ammonia and related amino acids in the pathogenesis of brain edema in acute ischemic liver failure in rats. Hepatology 1992; 15:449–453.
  39. Haussinger D, Laubenberger J, vom Dahl S, et al. Proton magnetic resonance spectroscopy studies on human brain myo-inositol in hypo-osmolarity and hepatic encephalopathy. Gastroenterology 1994; 107:1475–1480.
  40. Blei AT, Olafsson S, Webster S, Levy R. Complications of intracranial pressure monitoring in fulminant hepatic failure. Lancet 1993; 341:157–158.
  41. Cordoba J, Gottstein J, Blei AT. Chronic hyponatremia exacerbates ammonia-induced brain edema in rats after portacaval anastomosis. J Hepatol 1998; 29:589–594.
  42. Canalese J, Gimson AE, Davis C, Mellon PJ, Davis M, Williams R. Controlled trial of dexamethasone and mannitol for the cerebral oedema of fulminant hepatic failure. Gut 1982; 23:625–629.
  43. Jalan R, Olde Damink SW, Deutz NE, Lee A, Hayes PC. Moderate hypothermia for uncontrolled intracranial hypertension in acute liver failure. Lancet 1999; 354:1164–1168.
  44. Rose C, Michalak A, Rao KV, Quack G, Kircheis G, Butterworth RF. L-ornithine-L-aspartate lowers plasma and cerebrospinal fluid ammonia and prevents brain edema in rats with acute liver failure. Hepatology 1999; 30:636–640.
  45. Acharya SK, Bhatia V, Sreenivas V, Khanal S, Panda SK. Efficacy of L-ornithine L-aspartate in acute liver failure: a double-blind, randomized, placebo-controlled study. Gastroenterology 2009; 136:2159–2168.
  46. Bhatia V, Batra Y, Acharya SK. Prophylactic phenytoin does not improve cerebral edema or survival in acute liver failure—a controlled clinical trial. J Hepatol 2004; 41:89–96.
  47. Canalese J, Gove CD, Gimson AE, Wilkinson SP, Wardle EN, Williams R. Reticuloendothelial system and hepatocytic function in fulminant hepatic failure. Gut 1982; 23:265–269.
  48. Rolando N, Harvey F, Brahm J, et al. Prospective study of bacterial infection in acute liver failure: an analysis of fifty patients. Hepatology 1990; 11:49–53.
  49. Rolando N, Wade JJ, Stangou A, et al. Prospective study comparing the efficacy of prophylactic parenteral antimicrobials, with or without enteral decontamination, in patients with acute liver failure. Liver Transpl Surg 1996; 2:8–13.
  50. Rolando N, Harvey F, Brahm J, et al. Fungal infection: a common, unrecognised complication of acute liver failure. J Hepatol 1991; 12:1–9.
  51. Vaquero J, Polson J, Chung C, et al. Infection and the progression of hepatic encephalopathy in acute liver failure. Gastroenterology 2003; 125:755–764.
  52. Rolando N, Philpott-Howard J, Williams R. Bacterial and fungal infection in acute liver failure. Semin Liver Dis 1996; 16:389–402.
  53. Rolando N, Wade J, Davalos M, Wendon J, Philpott-Howard J, Williams R. The systemic inflammatory response syndrome in acute liver failure. Hepatology 2000; 32:734–739.
  54. Rolando N, Gimson A, Wade J, Philpott-Howard J, Casewell M, Williams R. Prospective controlled trial of selective parenteral and enteral antimicrobial regimen in fulminant liver failure. Hepatology 1993; 17:196–201.
  55. Karvellas CJ, Cavazos J, Battenhouse H, et al; US Acute Liver Failure Study Group. Effects of antimicrobial prophylaxis and blood stream infections in patients with acute liver failure: a retrospective cohort study. Clin Gastroenterol Hepatol 2014; 12:1942–1949.
  56. Acharya SK, Dasarathy S, Kumer TL, et al. Fulminant hepatitis in a tropical population: clinical course, cause, and early predictors of outcome. Hepatology 1996; 23:1148–1155.
  57. Cook DJ, Fuller HD, Guyatt GH, et al. Risk factors for gastrointestinal bleeding in critically ill patients. Canadian Critical Care Trials Group. N Engl J Med 1994; 330:377–381.
  58. MacDougall BR, Williams R. H2-receptor antagonist in the prevention of acute gastrointestinal hemorrhage in fulminant hepatic failure: a controlled trial. Gastroenterology 1978; 74:464–465.
  59. Stange J, Mitzner SR, Risler T, et al. Molecular adsorbent recycling system (MARS): clinical results of a new membrane-based blood purification system for bioartificial liver support. Artif Organs 1999; 23:319–330.
  60. Vaid A, Chewich H, Balk EM, Jaber BL. Molecular adsorbent recirculating system as artificial support therapy for liver failure: a meta-analysis. ASAIO J 2012; 58:51–59.
  61. Khuroo MS, Khuroo MS, Farahat KL. Molecular adsorbent recirculating system for acute and acute-on-chronic liver failure: a meta-analysis. Liver Transpl 2004; 10:1099–1106.
  62. Kjaergard LL, Liu J, Als-Nielsen B, Gluud C. Artificial and bioartificial support systems for acute and acute-on-chronic liver failure: a systematic review. JAMA 2003; 289:217–222.
  63. Sommer BG, Sutherland DE, Matas AJ, Simmons RL, Najarian JS. Hepatocellular transplantation for treatment of D-galactosamine-induced acute liver failure in rats. Transplant Proc 1979; 11:578–584.
  64. Demetriou AA, Reisner A, Sanchez J, Levenson SM, Moscioni AD, Chowdhury JR. Transplantation of microcarrier-attached hepatocytes into 90% partially hepatectomized rats. Hepatology 1988; 8:1006–1009.
  65. Strom SC, Fisher RA, Thompson MT, et al. Hepatocyte transplantation as a bridge to orthotopic liver transplantation in terminal liver failure. Transplantation 1997; 63:559–569.
  66. Pauwels A, Mostefa-Kara N, Florent C, Levy VG. Emergency liver transplantation for acute liver failure. Evaluation of London and Clichy criteria. J Hepatol 1993; 17:124–127.
  67. Anand AC, Nightingale P, Neuberger JM. Early indicators of prognosis in fulminant hepatic failure: an assessment of the King's criteria. J Hepatol 1997; 26:62–68.
  68. Schmidt LE, Larsen FS. MELD score as a predictor of liver failure and death in patients with acetaminophen-induced liver injury. Hepatology 2007; 45:789–796.
  69. Bernuau J, Goudeau A, Poynard T, et al. Multivariate analysis of prognostic factors in fulminant hepatitis B. Hepatology 1986; 6:648–651.
  70. Donaldson BW, Gopinath R, Wanless IR, et al. The role of transjugular liver biopsy in fulminant liver failure: relation to other prognostic indicators. Hepatology 1993; 18:1370–1376.
  71. Schmidt LE, Dalhoff K. Serum phosphate is an early predictor of outcome in severe acetaminophen-induced hepatotoxicity. Hepatology 2002; 36:659–665.
Author and Disclosure Information

Tavankit Singh, MD
Department of Internal Medicine, Medicine Institute, Cleveland Clinic

Nancy Gupta, MD
Department of Internal Medicine, Westchester Medical Center, New York Medical College, Valhalla, NY

Naim Alkhouri, MD
Department of Gastroenterology and Hepatology, and Department of Pediatric Gastroenterology, Digestive Disease Institute, Cleveland Clinic; Assistant Professor, Cleveland Clinic Lerner College of Medicine of Case Western Reserve University, Cleveland, OH

William D. Carey, MD
Department of Gastroenterology and Hepatology, Digestive Disease Institute, Cleveland Clinic; Professor, Cleveland Clinic Lerner College of Medicine of Case Western Reserve University, Cleveland, OH

Ibrahim A. Hanouneh, MD
Minnesota Gastroenterology, P.A., Minneapolis, MN

Address: Ibrahim A. Hanouneh, MD, Minnesota Gastroenterology, P.A., P.O. Box 14909, Minneapolis, MN 55414; [email protected]

Dr. Alkhouri has disclosed membership on advisory committees or review panels for Bristol-Myers Squibb, Gilead Sciences, and Intercept. Dr. Carey has disclosed ownership interest in Gilead Sciences and Pfizer.


When the liver fails, it usually fails gradually. The sudden (acute) onset of liver failure, while less common, demands prompt management, with transfer to an intensive care unit, specific treatment depending on the cause, and consideration of liver transplant, without which the mortality rate is high.

This article reviews the definition, epidemiology, etiology, and management of acute liver failure.

DEFINITIONS

Acute liver failure is defined as a syndrome of acute hepatitis with evidence of abnormal coagulation (eg, an international normalized ratio > 1.5) complicated by the development of mental alteration (encephalopathy) within 26 weeks of the onset of illness in a patient without a history of liver disease.1 In general, patients have no evidence of underlying chronic liver disease, but there are exceptions; patients with Wilson disease, vertically acquired hepatitis B virus infection, or autoimmune hepatitis can present with acute liver failure superimposed on chronic liver disease or even cirrhosis.

The term acute liver failure has replaced older terms such as fulminant hepatic failure, hyperacute liver failure, and subacute liver failure, which were used for prognostic purposes. Patients with hyperacute liver failure (defined as development of encephalopathy within 7 days of onset of illness) generally have a good prognosis with medical management, whereas those with subacute liver failure (defined as development of encephalopathy within 5 to 26 weeks of onset of illness) have a poor prognosis without liver transplant.2,3

NEARLY 2,000 CASES A YEAR

There are nearly 2,000 cases of acute liver failure each year in the United States, and it accounts for 6% of all deaths due to liver disease.4 It is more common in women than in men, and more common in white people than in other races. The peak incidence is at a fairly young age, ie, 35 to 45 years.

CAUSES

The most common cause of acute liver failure in the United States and other Western countries is acetaminophen toxicity, followed by viral hepatitis. In contrast, viral hepatitis is the most common cause in developing countries.5

Acetaminophen toxicity

Patients with acetaminophen-induced liver failure tend to be younger than other patients with acute liver failure.1 Nearly half present after intentionally taking a single large dose; the rest develop unintentional toxicity while taking more than the recommended dose of acetaminophen for long-term pain relief.6

After ingestion, 52% to 57% of acetaminophen is converted to glucuronide conjugates, and 30% to 44% is converted to sulfate conjugates. These compounds are nontoxic, water-soluble, and rapidly excreted in the urine.

However, about 5% to 10% of ingested acetaminophen is shunted to the cytochrome P450 system. P450 2E1 is the main isoenzyme involved in acetaminophen metabolism, but 1A2, 3A4, and 2A6 also contribute.7,8 P450 2E1 is the same isoenzyme responsible for ethanol metabolism and is inducible. Thus, regular alcohol consumption can increase P450 2E1 activity, setting the stage under certain circumstances for increased acetaminophen metabolism through this pathway.

Figure 1. Reprinted from Schilling A, Corey R, Leonard M, Eghtesad B. Acetaminophen: old drug, new warnings. Cleve Clin J Med 2010; 77:19–27.

Metabolism of acetaminophen through the cytochrome P450 pathway results in production of N-acetyl-p-benzoquinone imine (NAPQI), the compound that damages the liver. NAPQI is rendered nontoxic by binding to glutathione, forming NAPQI-glutathione adducts. Glutathione capacity is limited, however. With too much acetaminophen, glutathione becomes depleted and NAPQI accumulates, binds with proteins to form adducts, and leads to necrosis of hepatocytes (Figure 1).9,10

Acetylcysteine, used in treating acetaminophen toxicity, is a substrate for glutathione synthesis and ultimately increases the amount of glutathione available to bind NAPQI and prevent damage to hepatocytes.11

Acetaminophen is a dose-related toxin. Most ingestions leading to acute liver failure exceed 10 g/day (> 150 mg/kg/day). Moderate chronic ingestion, eg, 4 g/day, usually leads to transient mild elevation of liver enzymes in healthy individuals12 but can in rare cases cause acute liver failure.13

Whitcomb and Block14 retrospectively identified 49 patients who presented with acetaminophen-induced hepatotoxicity in 1987 through 1993; 21 (43%) had been taking acetaminophen for therapeutic purposes. All 49 patients took more than the recommended limit of 4 g/day, many of them while fasting and some while using alcohol. Acute liver failure was seen with ingestion of more than 12 g/day—or more than 10 g/day in alcohol users. The authors attributed the increased risk to activation of cytochrome P450 2E1 by alcohol and depletion of glutathione stores by starvation or alcohol abuse. 

Advice to patients taking acetaminophen is given in Table 1.

Other drugs and supplements

A number of other drugs and herbal supplements can also cause acute liver failure (Table 2), the most common being antimicrobial and antiepileptic drugs.15 Of the antimicrobials, antitubercular drugs (especially isoniazid) are believed to be the most common causes, followed by trimethoprim-sulfamethoxazole. Phenytoin is the antiepileptic drug most often implicated in acute liver failure.

Statins can also cause acute liver failure, especially when combined with other hepatotoxic agents.16

The herbal supplements and weight-loss agents Hydroxycut and Herbalife have both been reported to cause acute liver failure, with patients presenting with either the hepatocellular or the cholestatic pattern of liver injury.17 The exact chemical in these supplements that causes liver injury has not yet been determined.

The National Institutes of Health maintains a database of cases of liver failure due to medications and supplements at livertox.nih.gov. The database includes the pattern of hepatic injury, mechanism of injury, management, and outcomes.

Viral hepatitis

Hepatitis B virus is the most common viral cause of acute liver failure and is responsible for about 8% of cases.18

Patients with chronic hepatitis B virus infection, as evidenced by a positive hepatitis B surface antigen, can develop acute liver failure if the infection is reactivated by immunosuppressive drugs used for solid-organ or bone-marrow transplant or by medications such as anti-tumor necrosis factor agents, rituximab, or chemotherapy. These patients should receive prophylaxis with a nucleoside analogue, continued for 6 months after immunosuppressive therapy is completed.

Hepatitis A virus is responsible for about 4% of cases.18

Hepatitis C virus rarely causes acute liver failure, especially in the absence of hepatitis A and hepatitis B.3,19

Hepatitis E virus, which is endemic in areas of Asia and Africa, can cause liver disease in pregnant women and in young adults who have concomitant liver disease from another cause. It tends to cause acute liver failure more frequently in pregnant women than in the rest of the population and carries a mortality rate of more than 20% in this subgroup.

TT (transfusion-transmitted) virus was reported in the 1990s to cause acute liver failure in about 27% of patients in whom no other cause could be found.20

Other rare viral causes of acute liver failure include Epstein-Barr virus, cytomegalovirus, herpes simplex virus types 1 and 2, and human herpesvirus 6.

Other causes

Other causes of acute liver failure include ischemic hepatitis, autoimmune hepatitis, Wilson disease, Budd-Chiari syndrome, and HELLP (hemolysis, elevated liver enzymes and low platelets) syndrome.

MANY PATIENTS NEED LIVER TRANSPLANT

Many patients with acute liver failure ultimately require orthotopic liver transplant,21 especially if they present with severe encephalopathy. Other aspects of treatment vary according to the cause of liver failure (Table 3).

SPECIFIC MANAGEMENT

Management of acetaminophen toxicity

If the time of ingestion is known, checking the acetaminophen level can help determine the cause of acute liver failure and also predict the risk of hepatotoxicity, based on the work of Rumack and Matthew.22 Calculators are available, eg, http://reference.medscape.com/calculator/acetaminophen-toxicity.
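
For a sense of the arithmetic behind such calculators, the following minimal Python sketch encodes one common assumption: a treatment threshold of 150 µg/mL at 4 hours after ingestion that falls with an approximately 4-hour half-life (about 4.7 µg/mL at 24 hours). The threshold, function name, and example values are illustrative assumptions; the published nomogram and poison-control guidance remain authoritative.

def above_treatment_line(level_ug_per_ml, hours_since_ingestion):
    """Return True if an acetaminophen level plots at or above an assumed
    treatment line of 150 ug/mL at 4 hours, halving every 4 hours.
    Applies only to a single acute ingestion with a known time, 4 to 24 hours out."""
    if not 4 <= hours_since_ingestion <= 24:
        raise ValueError("nomogram applies only 4 to 24 hours after ingestion")
    threshold = 150.0 * 0.5 ** ((hours_since_ingestion - 4) / 4.0)
    return level_ug_per_ml >= threshold

# Example: a level of 80 ug/mL drawn 8 hours after ingestion exceeds the
# 8-hour threshold of 75 ug/mL, so treatment would be indicated.
print(above_treatment_line(80, 8))  # True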

If a patient presents with acute liver failure several days after ingesting acetaminophen, however, the level can be in the nontoxic range. In this scenario, measuring acetaminophen-protein adducts can help establish acetaminophen toxicity as the cause, as the adducts last longer in the serum and provide 100% sensitivity and specificity.23 While most laboratories can rapidly measure acetaminophen levels, only a few can measure acetaminophen-protein adducts, so the test is not widely used clinically.

Acetylcysteine is the main drug used for acetaminophen toxicity. Ideally, it should be given within 8 hours of acetaminophen ingestion, but giving it later is also useful.1

Acetylcysteine is available in oral and intravenous forms, the latter for patients who have encephalopathy or cannot tolerate oral intake due to repeated episodes of vomiting.24,25 The oral form is much less costly and is thus preferred over intravenous acetylcysteine in patients who can tolerate oral intake. Intravenous acetylcysteine should be given in a loading dose of 150 mg/kg in 5% dextrose over 15 minutes, followed by a maintenance dose of 50 mg/kg over 4 hours and then 100 mg/kg given over 16 hours.1 No dose adjustment is needed in patients who have renal toxicity (acetaminophen can also be toxic to the kidneys).
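
To make the weight-based arithmetic concrete, here is a minimal Python sketch of the three infusions quoted above; the function name and the 70-kg example are illustrative, and dilution volumes, dose caps, and infusion details should follow institutional protocol.

def iv_acetylcysteine_doses_mg(weight_kg):
    """Doses for the intravenous regimen described in the text: 150 mg/kg over
    15 minutes, then 50 mg/kg over 4 hours, then 100 mg/kg over 16 hours."""
    return {
        "loading_over_15_min": 150 * weight_kg,
        "second_infusion_over_4_h": 50 * weight_kg,
        "third_infusion_over_16_h": 100 * weight_kg,
    }

# Example: a 70-kg patient receives 10,500 mg, then 3,500 mg, then 7,000 mg.
print(iv_acetylcysteine_doses_mg(70))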

Most patients with acetaminophen-induced liver failure survive with medical management alone and do not need a liver transplant.3,26 Cirrhosis does not occur in these patients.

Management of viral acute liver failure

When patients present with acute liver failure, it is necessary to look for a viral cause by serologic testing, including hepatitis A virus IgM antibody, hepatitis B surface antigen, and hepatitis B core IgM antibody.

Hepatitis B can become reactivated in immunocompromised patients, and therefore the hepatitis B virus DNA level should be checked. Detection of hepatitis B virus DNA in a patient previously known to have undetectable hepatitis B virus DNA confirms hepatitis B reactivation.

Patients with hepatitis B-induced acute liver failure should be treated with entecavir or tenofovir. Although this treatment may not change the course of acute liver failure or accelerate the recovery, it can prevent reinfection in the transplanted liver if liver transplant becomes indicated.27–29

Herpes simplex virus should be suspected in patients presenting with anicteric hepatitis with fever. Polymerase chain reaction testing for herpes simplex virus should be done,30 and if positive, patients should be given intravenous acyclovir.31 Despite treatment, herpes simplex virus disease is associated with a very poor prognosis without liver transplant.

Autoimmune hepatitis

The autoantibodies usually seen in autoimmune hepatitis are antinuclear antibody, antismooth muscle antibody, and anti-liver-kidney microsomal antibody, and patients need to be tested for them.

The diagnosis of autoimmune hepatitis can be challenging, as these autoimmune markers can be negative in 5% of patients. Liver biopsy becomes essential to establish the diagnosis in that setting.32

Guidelines advise starting prednisone 40 to 60 mg/day and placing the patient on the liver transplant list.1

Wilson disease

Although it is an uncommon cause of liver failure, Wilson disease needs special attention because it has a poor prognosis. The mortality rate in acute liver failure from Wilson disease reaches 100% without liver transplant.

Wilson disease is caused by a genetic defect that allows copper to accumulate in the liver and other organs. However, diagnosing Wilson disease as the cause of acute liver failure can be challenging because elevated serum and urine copper levels are not specific to Wilson disease and can be seen in patients with acute liver failure from any cause. In addition, the ceruloplasmin level is usually normal or high because it is an acute-phase reactant. Accumulation of copper in the liver parenchyma is usually patchy; therefore, qualitative copper staining on random liver biopsy samples provides low diagnostic yield. Quantitative copper measurement on liver biopsy is the gold standard test to establish the diagnosis, but the test is time-consuming. Kayser-Fleischer rings around the iris are considered pathognomonic for Wilson disease when seen with acute liver failure, but they are seen in only about 50% of patients.33

A unique feature of acute Wilson disease is that most patients have very high bilirubin levels and low alkaline phosphatase levels. An alkaline phosphatase-to-bilirubin ratio less than 2 in patients with acute liver failure is highly suggestive of Wilson disease.34

Another clue to the diagnosis is that patients with Wilson disease tend to develop Coombs-negative hemolytic anemia, which leads to a disproportionate elevation in aminotransferase levels, with aspartate aminotransferase being higher than alanine aminotransferase.
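
Combining these two laboratory clues, a minimal Python sketch of the pattern described above; it is a screening heuristic only, using the thresholds given in the text, and the function name and example values are illustrative.

def suggests_fulminant_wilson_disease(alk_phos_u_per_l, bilirubin_mg_per_dl,
                                      ast_u_per_l, alt_u_per_l):
    """Flag the pattern described in the text: alkaline phosphatase-to-bilirubin
    ratio below 2 together with AST exceeding ALT. Not a diagnostic rule."""
    ratio = alk_phos_u_per_l / bilirubin_mg_per_dl
    return ratio < 2 and ast_u_per_l > alt_u_per_l

# Example: ALP 40 U/L, bilirubin 30 mg/dL (ratio about 1.3), AST 1,200 U/L, ALT 400 U/L
print(suggests_fulminant_wilson_disease(40, 30, 1200, 400))  # True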

Once Wilson disease is suspected, the patient should be listed for liver transplant because death is almost certain without it. For patients awaiting liver transplant, the American Association for the Study of Liver Diseases guidelines recommend certain measures to lower the serum copper level such as albumin dialysis, continuous hemofiltration, plasmapheresis, and plasma exchange,1 but the evidence supporting their use is limited.

NONSPECIFIC MANAGEMENT

Figure 2.

Acute liver failure can affect a number of organs and systems in addition to the liver (Figure 2).

General considerations

Because their condition can rapidly deteriorate, patients with acute liver failure are best managed in intensive care.

Patients who present to a center that does not have the facilities for liver transplant should be transferred to a transplant center as soon as possible, preferably by air. If there is concern that the patient cannot protect the airway, endotracheal intubation should be performed before transfer.

The major causes of death in patients with acute liver failure are cerebral edema and infection. Gastrointestinal bleeding was a major cause of death in the past, but with prophylactic use of histamine H2 receptor blockers and proton pump inhibitors, the incidence of gastrointestinal bleeding has been significantly reduced.

Although initially used only in patients with acetaminophen-induced liver failure, acetylcysteine has also shown benefit in patients with acute liver failure from other causes. In patients with grade 1 or 2 encephalopathy on a scale of 0 (minimal) to 4 (comatose), the transplant-free survival rate is higher when acetylcysteine is given compared with placebo, but this benefit does not extend to patients with a higher grade of encephalopathy.35

Cerebral edema and intracranial hypertension

Cerebral edema is the leading cause of death in patients with acute liver failure, and it develops in nearly 40% of patients.36

The mechanism by which cerebral edema develops is not well understood. Some have proposed that ammonia is converted to glutamine, which causes cerebral edema either directly by its osmotic effect37,38 or indirectly by decreasing other osmolytes, thereby promoting water retention.39

Cerebral edema leads to intracranial hypertension, which can ultimately cause cerebral herniation and death. Because of the high mortality rate associated with cerebral edema, invasive devices were extensively used in the past to monitor intracranial pressure. However, in light of known complications of these devices, including bleeding,40 and lack of evidence of long-term benefit in terms of mortality rates, their use has come under debate.

Treatments. Many treatments are available for cerebral edema and intracranial hypertension. The first step is to elevate the head of the bed about 30 degrees. In addition, hyponatremia should be corrected, as it can worsen cerebral edema.41 If patients are intubated, hyperventilation to maintain a hypocapnic state can help lower the intracranial pressure.

Of the two pharmacologic options, mannitol is more often used.42 It is given as a bolus dose of 0.5 to 1 g/kg intravenously if the serum osmolality is less than 320 mOsm/L.1 Given the risk of fluid overload with mannitol, caution must be exercised in patients with renal dysfunction. The other pharmacologic option is 3% hypertonic saline.
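
For the dosing arithmetic, a minimal Python sketch using the bolus range and osmolality cutoff quoted above; the function name, the default dose of 0.75 g/kg, and the example are illustrative assumptions.

def mannitol_bolus_g(weight_kg, serum_osmolality_mosm_per_l, dose_g_per_kg=0.75):
    """Intravenous bolus from the text: 0.5 to 1 g/kg, given only if serum
    osmolality is below 320 mOsm/L. Returns None when the osmolality gate is not met."""
    if serum_osmolality_mosm_per_l >= 320:
        return None
    if not 0.5 <= dose_g_per_kg <= 1.0:
        raise ValueError("dose outside the 0.5 to 1 g/kg range quoted in the text")
    return dose_g_per_kg * weight_kg

# Example: a 70-kg patient with serum osmolality 300 mOsm/L receives 52.5 g at 0.75 g/kg.
print(mannitol_bolus_g(70, 300))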

Therapeutic hypothermia is a newer treatment for cerebral edema. Lowering the body temperature to 32 to 33°C (89.6 to 91.4°F) using cooling blankets decreases intracranial pressure and cerebral blood flow and improves the cerebral perfusion pressure.43 With this treatment, patients should be closely monitored for side effects such as infection, coagulopathy, and cardiac arrhythmias.1

L-ornithine L-aspartate was successfully used to prevent brain edema in rats, but in humans, no benefit was seen compared with placebo.44,45 The rationale for this experimental treatment is that supplemental ornithine and aspartate should increase glutamate synthesis, which in turn should increase the activity of the enzyme glutamine synthetase in skeletal muscle. With the increase in enzyme activity, conversion of ammonia to glutamine should increase, thereby lowering circulating ammonia and thus decreasing cerebral edema.

Patients with cerebral edema have a high incidence of seizures, but prophylactic antiseizure medications such as phenytoin have not been proven to be beneficial.46

Infection

Nearly 80% of patients with acute liver failure develop an infectious complication, which can be attributed to a state of immunodeficiency.47

The respiratory and urinary tracts are the most common sources of infection.48 In patients with bacteremia, Enterococcus species and coagulase-negative Staphylococcus species49 are the commonly isolated organisms. Also, in patients with acute liver failure, fungal infections account for 30% of all infections.50

Infected patients often develop worsening of their encephalopathy51 without fever or elevated white blood cell count.49,52 Thus, in any patient in whom encephalopathy is worsening, an evaluation must be done to rule out infection. In these patients, systemic inflammatory response syndrome is an independent risk factor for death.53

Despite the high mortality rate associated with infection, the benefit of prophylactic antibiotics in acute liver failure remains controversial.54,55

Gastrointestinal bleeding

The current prevalence of upper gastrointestinal bleeding in acute liver failure patients is about 1.5%.56 Coagulopathy and endotracheal intubation are the main risk factors for upper gastrointestinal bleeding in these patients.57 The most common source of bleeding is stress ulcers in the stomach. The ulcers develop from a combination of factors, including decreased blood flow to the mucosa causing ischemia and hypoperfusion-reperfusion injury.

Pharmacologic inhibition of gastric acid secretion has been shown to reduce upper gastrointestinal bleeding in acute liver failure. A histamine H2 receptor blocker or proton pump inhibitor should be given to prevent gastrointestinal bleeding in patients with acute liver failure.1,58

EXPERIMENTAL TREATMENTS

Artificial liver support systems

Membranes and dialysate solutions have been developed to remove toxic substances that are normally metabolized by the liver. Two of these—the molecular adsorbent recycling system (MARS) and the extracorporeal liver assist device (ELAD)—were developed in the late 1990s. MARS consisted of a highly permeable hollow fiber membrane mixed with albumin, and ELAD consisted of porcine hepatocytes attached to microcarriers in the extracapillary space of the hollow fiber membrane. Both systems allowed for transfer of water-soluble and protein-bound toxins in the blood across the membrane and into the dialysate.59 The clinical benefit offered by these devices is controversial,60–62 thus limiting their use to experimental purposes only.

Hepatocyte transplant

Use of hepatocyte transplant as a bridge to liver transplant was tested in the 1970s, first in rats and later in humans.63 Replacing 1% to 2% of the total liver cell mass with transplanted hepatocytes reduces the blood ammonia level and improves cerebral perfusion pressure and cardiac function, thereby serving as a bridge to orthotopic liver transplant.64,65

PROGNOSIS

Different criteria have been used to identify patients with poor prognosis who may eventually need to undergo liver transplant.

The King’s College criteria (Table 4) are the most commonly used prognostic system.37,66–69 Their main drawback is that they apply only to patients with encephalopathy, and by the time patients reach this stage, their condition often deteriorates rapidly and they die while awaiting liver transplant.37,66,67
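
Table 4 gives the criteria in full. As an illustration only, the following Python sketch encodes the widely cited O'Grady thresholds (arterial pH, INR, creatinine, and encephalopathy grade for acetaminophen cases; INR, age, etiology, jaundice-to-encephalopathy interval, and bilirubin for other causes). The thresholds, parameter names, and layout here are assumptions drawn from the commonly published form of the criteria, not a transcription of Table 4.

def meets_kings_college_criteria(acetaminophen, arterial_ph=None, inr=0.0,
                                 creatinine_mg_per_dl=0.0, encephalopathy_grade=0,
                                 age_years=None, unfavorable_etiology=False,
                                 jaundice_to_encephalopathy_days=0.0,
                                 bilirubin_mg_per_dl=0.0):
    """Widely cited King's College thresholds, encoded for illustration."""
    if acetaminophen:
        # Arterial pH alone, or the combination of severe coagulopathy,
        # renal failure, and grade 3-4 encephalopathy.
        if arterial_ph is not None and arterial_ph < 7.30:
            return True
        return inr > 6.5 and creatinine_mg_per_dl > 3.4 and encephalopathy_grade >= 3
    # Non-acetaminophen causes: INR alone, or any 3 of 5 adverse features.
    if inr > 6.5:
        return True
    features = [
        age_years is not None and (age_years < 10 or age_years > 40),
        unfavorable_etiology,  # eg, indeterminate hepatitis or idiosyncratic drug reaction
        jaundice_to_encephalopathy_days > 7,
        inr > 3.5,
        bilirubin_mg_per_dl > 17.5,
    ]
    return sum(features) >= 3

# Example: non-acetaminophen failure in a 55-year-old with INR 4.0 and bilirubin 20 mg/dL
print(meets_kings_college_criteria(False, age_years=55, inr=4.0,
                                   bilirubin_mg_per_dl=20))  # True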

The Model for End-Stage Liver Disease (MELD) score is an alternative to the King’s College criteria. A high MELD score on admission signifies advanced disease, and patients with a high MELD score tend to have a worse prognosis than those with a low score.68
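
The article does not reproduce the MELD formula; for reference, the original (pre-sodium) calculation can be sketched as below. The coefficients and bounds are the standard published ones, and the example values are illustrative.

import math

def meld_score(bilirubin_mg_per_dl, inr, creatinine_mg_per_dl, on_dialysis=False):
    """Original MELD: 3.78*ln(bilirubin) + 11.2*ln(INR) + 9.57*ln(creatinine) + 6.43,
    with each value floored at 1.0 and creatinine capped at 4.0 (set to 4.0 with
    recent dialysis), rounded to the nearest integer."""
    creat = 4.0 if on_dialysis else min(max(creatinine_mg_per_dl, 1.0), 4.0)
    bili = max(bilirubin_mg_per_dl, 1.0)
    inr_floor = max(inr, 1.0)
    score = (3.78 * math.log(bili) + 11.2 * math.log(inr_floor)
             + 9.57 * math.log(creat) + 6.43)
    return round(score)

# Example: bilirubin 12 mg/dL, INR 3.0, creatinine 1.8 mg/dL gives a MELD of about 34.
print(meld_score(12, 3.0, 1.8))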

The Acute Physiology and Chronic Health Evaluation (APACHE) II score can also be used, as it is more sensitive than the King’s College criteria.6

The Clichy criteria66,69 can also be used.

Liver biopsy. In addition to helping establish the cause of acute liver failure, liver biopsy can also be used as a prognostic tool. Hepatocellular necrosis greater than 70% on the biopsy predicts death with a specificity of 90% and a sensitivity of 56%.70

Hypophosphatemia has been reported to indicate recovering liver function in patients with acute liver failure.71 As the liver regenerates, its energy requirement increases; adenosine triphosphate production rises, and phosphorus shifts from the extracellular to the intracellular compartment to meet this need. A serum phosphorus level of 2.9 mg/dL or higher therefore suggests a poor prognosis in patients with acute liver failure, as it signifies that adequate hepatocyte regeneration is not occurring.

References
  1. Polson J, Lee WM; American Association for the Study of Liver Disease. AASLD position paper: the management of acute liver failure. Hepatology 2005; 41:1179–1197.
  2. O’Grady JG, Schalm SW, Williams R. Acute liver failure: redefining the syndromes. Lancet 1993; 342:273–275.
  3. Ostapowicz G, Fontana RJ, Schiodt FV, et al; US Acute Liver Failure Study Group. Results of a prospective study of acute liver failure at 17 tertiary care centers in the United States. Ann Intern Med 2002; 137:947–954.
  4. Lee WM, Squires RH Jr, Nyberg SL, Doo E, Hoofnagle JH. Acute liver failure: summary of a workshop. Hepatology 2008; 47:1401–1415.
  5. Acharya SK, Panda SK, Saxena A, Gupta SD. Acute hepatic failure in India: a perspective from the East. J Gastroenterol Hepatol 2000; 15:473–479.
  6. Larson AM, Polson J, Fontana RJ, et al; Acute Liver Failure Study Group. Acetaminophen-induced acute liver failure: results of a United States multicenter, prospective study. Hepatology 2005; 42:1364–1372.
  7. Patten CJ, Thomas PE, Guy RL, et al. Cytochrome P450 enzymes involved in acetaminophen activation by rat and human liver microsomes and their kinetics. Chem Res Toxicol 1993; 6:511–518.
  8. Chen W, Koenigs LL, Thompson SJ, et al. Oxidation of acetaminophen to its toxic quinone imine and nontoxic catechol metabolites by baculovirus-expressed and purified human cytochromes P450 2E1 and 2A6. Chem Res Toxicol 1998; 11:295–301.
  9. Mitchell JR, Jollow DJ, Potter WZ, Gillette JR, Brodie BB. Acetaminophen-induced hepatic necrosis. IV. Protective role of glutathione. J Pharmacol Exp Ther 1973; 187:211–217.
  10. Schilling A, Corey R, Leonard M, Eghtesad B. Acetaminophen: old drug, new warnings. Cleve Clin J Med 2010; 77:19–27.
  11. Lauterburg BH, Corcoran GB, Mitchell JR. Mechanism of action of N-acetylcysteine in the protection against the hepatotoxicity of acetaminophen in rats in vivo. J Clin Invest 1983; 71:980–991.
  12. Watkins PB, Kaplowitz N, Slattery JT, et al. Aminotransferase elevations in healthy adults receiving 4 grams of acetaminophen daily: a randomized controlled trial. JAMA 2006; 296:87–93.
  13. Schiødt FV, Rochling FA, Casey DL, Lee WM. Acetaminophen toxicity in an urban county hospital. N Engl J Med 1997; 337:1112–1117.
  14. Whitcomb DC, Block GD. Association of acetaminophen hepatotoxicity with fasting and ethanol use. JAMA 1994; 272:1845–1850.
  15. Chalasani N, Fontana RJ, Bonkovsky HL, et al; Drug Induced Liver Injury Network (DILIN). Causes, clinical features, and outcomes from a prospective study of drug-induced liver injury in the United States. Gastroenterology 2008; 135:1924–1934.e1–4.
  16. Reuben A, Koch DG, Lee WM; Acute Liver Failure Study Group. Drug-induced acute liver failure: results of a US multicenter, prospective study. Hepatology 2010; 52:2065–2076.
  17. Stevens T, Qadri A, Zein NN. Two patients with acute liver injury associated with use of the herbal weight-loss supplement hydroxycut. Ann Intern Med 2005; 142:477–478.
  18. Bernal W, Lee WM, Wendon J, Larsen FS, Williams R. Acute liver failure: a curable disease by 2024? J Hepatol 2015; 62(suppl 1):S112–S120.
  19. Schiodt FV, Davern TJ, Shakil AO, McGuire B, Samuel G, Lee WM. Viral hepatitis-related acute liver failure. Am J Gastroenterol 2003; 98:448–453.
  20. Charlton M, Adjei P, Poterucha J, et al. TT-virus infection in North American blood donors, patients with fulminant hepatic failure, and cryptogenic cirrhosis. Hepatology 1998; 28:839–842.
  21. Bismuth H, Samuel D, Gugenheim J, et al. Emergency liver transplantation for fulminant hepatitis. Ann Intern Med 1987; 107:337–341.
  22. Rumack BH, Matthew H. Acetaminophen poisoning and toxicity. Pediatrics 1975; 55:871–876.
  23. Davern TJ 2nd, James LP, Hinson JA, et al; Acute Liver Failure Study Group. Measurement of serum acetaminophen-protein adducts in patients with acute liver failure. Gastroenterology 2006; 130:687–694.
  24. Perry HE, Shannon MW. Efficacy of oral versus intravenous N-acetylcysteine in acetaminophen overdose: results of an open-label, clinical trial. J Pediatr 1998; 132:149–152.
  25. Smilkstein MJ, Knapp GL, Kulig KW, Rumack BH. Efficacy of oral N-acetylcysteine in the treatment of acetaminophen overdose. Analysis of the national multicenter study (1976 to 1985). N Engl J Med 1988; 319:1557–1562.
  26. Makin AJ, Wendon J, Williams R. A 7-year experience of severe acetaminophen-induced hepatotoxicity (1987-1993). Gastroenterology 1995; 109:1907–1916.
  27. Tsang SW, Chan HL, Leung NW, et al. Lamivudine treatment for fulminant hepatic failure due to acute exacerbation of chronic hepatitis B infection. Aliment Pharmacol Ther 2001; 15:1737–1744.
  28. Yu JW, Sun LJ, Yan BZ, Kang P, Zhao YH. Lamivudine treatment is associated with improved survival in fulminant hepatitis B. Liver Int 2011; 31:499–506.
  29. Garg H, Sarin SK, Kumar M, Garg V, Sharma BC, Kumar A. Tenofovir improves the outcome in patients with spontaneous reactivation of hepatitis B presenting as acute-on-chronic liver failure. Hepatology 2011; 53:774–780.
  30. Pinna AD, Rakela J, Demetris AJ, Fung JJ. Five cases of fulminant hepatitis due to herpes simplex virus in adults. Dig Dis Sci 2002; 47:750–754.
  31. Farr RW, Short S, Weissman D. Fulminant hepatitis during herpes simplex virus infection in apparently immunocompetent adults: report of two cases and review of the literature. Clin Infect Dis 1997; 24:1191–1194.
  32. Czaja AJ, Freese DK; American Association for the Study of Liver Disease. Diagnosis and treatment of autoimmune hepatitis. Hepatology 2002; 36:479–497.
  33. Roberts EA, Schilsky ML. A practice guideline on Wilson disease. Hepatology 2003; 37:1475–1492.
  34. Berman DH, Leventhal RI, Gavaler JS, Cadoff EM, Van Thiel DH. Clinical differentiation of fulminant Wilsonian hepatitis from other causes of hepatic failure. Gastroenterology 1991; 100:1129–1134.
  35. Lee WM, Hynan LS, Rossaro L, et al. Intravenous N-acetylcysteine improves transplant-free survival in early stage non-acetaminophen acute liver failure. Gastroenterology 2009; 137:856–864.
  36. O’Grady JG, Alexander GJ, Hayllar KM, Williams R. Early indicators of prognosis in fulminant hepatic failure. Gastroenterology 1989; 97:439–445.
  37. Clemmesen JO, Larsen FS, Kondrup J, Hansen BA, Ott P. Cerebral herniation in patients with acute liver failure is correlated with arterial ammonia concentration. Hepatology 1999; 29:648–653.
  38. Swain M, Butterworth RF, Blei AT. Ammonia and related amino acids in the pathogenesis of brain edema in acute ischemic liver failure in rats. Hepatology 1992; 15:449–453.
  39. Haussinger D, Laubenberger J, vom Dahl S, et al. Proton magnetic resonance spectroscopy studies on human brain myo-inositol in hypo-osmolarity and hepatic encephalopathy. Gastroenterology 1994; 107:1475–1480.
  40. Blei AT, Olafsson S, Webster S, Levy R. Complications of intracranial pressure monitoring in fulminant hepatic failure. Lancet 1993; 341:157–158.
  41. Cordoba J, Gottstein J, Blei AT. Chronic hyponatremia exacerbates ammonia-induced brain edema in rats after portacaval anastomosis. J Hepatol 1998; 29:589–594.
  42. Canalese J, Gimson AE, Davis C, Mellon PJ, Davis M, Williams R. Controlled trial of dexamethasone and mannitol for the cerebral oedema of fulminant hepatic failure. Gut 1982; 23:625–629.
  43. Jalan R, Olde Damink SW, Deutz NE, Lee A, Hayes PC. Moderate hypothermia for uncontrolled intracranial hypertension in acute liver failure. Lancet 1999; 354:1164–1168.
  44. Rose C, Michalak A, Rao KV, Quack G, Kircheis G, Butterworth RF. L-ornithine-L-aspartate lowers plasma and cerebrospinal fluid ammonia and prevents brain edema in rats with acute liver failure. Hepatology 1999; 30:636–640.
  45. Acharya SK, Bhatia V, Sreenivas V, Khanal S, Panda SK. Efficacy of L-ornithine L-aspartate in acute liver failure: a double-blind, randomized, placebo-controlled study. Gastroenterology 2009; 136:2159–2168.
  46. Bhatia V, Batra Y, Acharya SK. Prophylactic phenytoin does not improve cerebral edema or survival in acute liver failure—a controlled clinical trial. J Hepatol 2004; 41:89–96.
  47. Canalese J, Gove CD, Gimson AE, Wilkinson SP, Wardle EN, Williams R. Reticuloendothelial system and hepatocytic function in fulminant hepatic failure. Gut 1982; 23:265–269.
  48. Rolando N, Harvey F, Brahm J, et al. Prospective study of bacterial infection in acute liver failure: an analysis of fifty patients. Hepatology 1990; 11:49–53.
  49. Rolando N, Wade JJ, Stangou A, et al. Prospective study comparing the efficacy of prophylactic parenteral antimicrobials, with or without enteral decontamination, in patients with acute liver failure. Liver Transpl Surg 1996; 2:8–13.
  50. Rolando N, Harvey F, Brahm J, et al. Fungal infection: a common, unrecognised complication of acute liver failure. J Hepatol 1991; 12:1–9.
  51. Vaquero J, Polson J, Chung C, et al. Infection and the progression of hepatic encephalopathy in acute liver failure. Gastroenterology 2003; 125:755–764.
  52. Rolando N, Philpott-Howard J, Williams R. Bacterial and fungal infection in acute liver failure. Semin Liver Dis 1996; 16:389–402.
  53. Rolando N, Wade J, Davalos M, Wendon J, Philpott-Howard J, Williams R. The systemic inflammatory response syndrome in acute liver failure. Hepatology 2000; 32:734–739.
  54. Rolando N, Gimson A, Wade J, Philpott-Howard J, Casewell M, Williams R. Prospective controlled trial of selective parenteral and enteral antimicrobial regimen in fulminant liver failure. Hepatology 1993; 17:196–201.
  55. Karvellas CJ, Cavazos J, Battenhouse H, et al; US Acute Liver Failure Study Group. Effects of antimicrobial prophylaxis and blood stream infections in patients with acute liver failure: a retrospective cohort study. Clin Gastroenterol Hepatol 2014; 12:1942–1949.
  56. Acharya SK, Dasarathy S, Kumer TL, et al. Fulminant hepatitis in a tropical population: clinical course, cause, and early predictors of outcome. Hepatology 1996; 23:1148–1155.
  57. Cook DJ, Fuller HD, Guyatt GH, et al. Risk factors for gastrointestinal bleeding in critically ill patients. Canadian Critical Care Trials Group. N Engl J Med 1994; 330:377–381.
  58. MacDougall BR, Williams R. H2-receptor antagonist in the prevention of acute gastrointestinal hemorrhage in fulminant hepatic failure: a controlled trial. Gastroenterology 1978; 74:464–465.
  59. Stange J, Mitzner SR, Risler T, et al. Molecular adsorbent recycling system (MARS): clinical results of a new membrane-based blood purification system for bioartificial liver support. Artif Organs 1999; 23:319–330.
  60. Vaid A, Chewich H, Balk EM, Jaber BL. Molecular adsorbent recirculating system as artificial support therapy for liver failure: a meta-analysis. ASAIO J 2012; 58:51–59.
  61. Khuroo MS, Khuroo MS, Farahat KL. Molecular adsorbent recirculating system for acute and acute-on-chronic liver failure: a meta-analysis. Liver Transpl 2004; 10:1099–1106.
  62. Kjaergard LL, Liu J, Als-Nielsen B, Gluud C. Artificial and bioartificial support systems for acute and acute-on-chronic liver failure: a systematic review. JAMA 2003; 289:217–222.
  63. Sommer BG, Sutherland DE, Matas AJ, Simmons RL, Najarian JS. Hepatocellular transplantation for treatment of D-galactosamine-induced acute liver failure in rats. Transplant Proc 1979; 11:578–584.
  64. Demetriou AA, Reisner A, Sanchez J, Levenson SM, Moscioni AD, Chowdhury JR. Transplantation of microcarrier-attached hepatocytes into 90% partially hepatectomized rats. Hepatology 1988; 8:1006–1009.
  65. Strom SC, Fisher RA, Thompson MT, et al. Hepatocyte transplantation as a bridge to orthotopic liver transplantation in terminal liver failure. Transplantation 1997; 63:559–569.
  66. Pauwels A, Mostefa-Kara N, Florent C, Levy VG. Emergency liver transplantation for acute liver failure. Evaluation of London and Clichy criteria. J Hepatol 1993; 17:124–127.
  67. Anand AC, Nightingale P, Neuberger JM. Early indicators of prognosis in fulminant hepatic failure: an assessment of the King's criteria. J Hepatol 1997; 26:62–68.
  68. Schmidt LE, Larsen FS. MELD score as a predictor of liver failure and death in patients with acetaminophen-induced liver injury. Hepatology 2007; 45:789–796.
  69. Bernuau J, Goudeau A, Poynard T, et al. Multivariate analysis of prognostic factors in fulminant hepatitis B. Hepatology 1986; 6:648–651.
  70. Donaldson BW, Gopinath R, Wanless IR, et al. The role of transjugular liver biopsy in fulminant liver failure: relation to other prognostic indicators. Hepatology 1993; 18:1370–1376.
  71. Schmidt LE, Dalhoff K. Serum phosphate is an early predictor of outcome in severe acetaminophen-induced hepatotoxicity. Hepatology 2002; 36:659–665.

KEY POINTS

  • In the United States, the most common cause of acute liver failure is acetaminophen toxicity, followed by viral hepatitis.
  • Testing for the cause of acute liver failure needs to start as soon as possible so that specific treatment can be initiated and the patient can be placed on the transplant list if needed.
  • Acetylcysteine and either a proton pump inhibitor or a histamine H2 receptor blocker should be given to all patients with acute liver failure. Liver transplant is the cornerstone of therapy in patients not responding to other treatments.
  • There are a number of prognostic scores for acute liver failure, but each has limitations.

Multiple linear subcutaneous nodules

Article Type
Changed
Wed, 08/16/2017 - 13:45
Display Headline
Multiple linear subcutaneous nodules

A 34-year-old woman sought consultation at our clinic for an asymptomatic swelling on her right foot that had been growing very slowly over the last 15 years. She said she had presented to other healthcare facilities, but no diagnosis had been made and no treatment had been offered.

Figure 1. A linear swelling extended from the lower third to the mid-dorsal aspect of the right foot.

Examination revealed a linear swelling extending from the lower third to the mid-dorsal surface of the right foot (Figure 1). Palpation revealed multiple, closely set nodules arranged in a linear fashion. This finding, along with the history, raised suspicion of neurofibroma; the differential diagnosis also included pure neuritic Hansen disease, phaeohyphomycosis, and palisaded neutrophilic granulomatous dermatitis. The rest of the mucocutaneous examination was normal. No café-au-lait spots, axillary freckling, or other swelling suggestive of neurofibroma was seen. She had no family history of mucocutaneous disease or other systemic disorder.

Figure 2. T2-weighted magnetic resonance imaging with contrast showed a hyperintense lesion along the postero-lateral aspect of the right foot in the subcutaneous space up to the proximal end of the proximal phalanx of the fourth toe in the location of the sural nerve.

Because of the suspicion of neurofibromatosis, slit-lamp examination of the eyes was done to rule out Lisch nodules, a common feature of neurofibromatosis; the results were normal. Plain radiography of the right foot showed only soft-tissue swelling. Magnetic resonance imaging with contrast, done to determine the extent of the lesions, revealed multiple dumbbell-shaped lesions with homogeneous enhancement (Figure 2). Histopathologic study of a biopsy specimen of the lesions showed tumor cells in the dermis. The cells were long, with elongated nuclei with pointed ends, arranged in long and short fascicles—an appearance characteristic of neurofibroma. Areas of hypocellularity and hypercellularity were seen, and on S100 protein immunostaining, the tumor cells showed strong nuclear and cytoplasmic positivity (Figure 3).

Figure 3. The photomicrograph (A) shows tumor composed of spindle cells with pointed ends that infiltrate adjacent fat (hematoxylin and eosin, × 100). Tumor cells (B) show strong nuclear and cytoplasmic positivity (S100 protein immunostaining, × 200).

The histologic evaluation confirmed neurofibroma. The specific diagnosis of sporadic solitary neurofibroma was made based on the onset of the lesions, the number of lesions (one in this patient), and the absence of features suggestive of neurofibromatosis.

SPORADIC SOLITARY NEUROFIBROMA

Neurofibroma is a common tumor of the peripheral nerve sheath and, when accompanied by features such as café-au-lait spots, axillary freckling, and characteristic bone changes, is pathognomonic of neurofibromatosis type 1.1 But solitary neurofibromas can occur sporadically in the absence of other features of neurofibromatosis.

Sporadic solitary neurofibroma arises from small nerves, is benign in nature, and carries a lower rate of malignant transformation than its counterpart that occurs in the setting of neurofibromatosis.2 Though sporadic solitary neurofibroma can occur in any part of the body, it is commonly seen on the head and neck, and occasionally on the presacral and parasacral space, thigh, intrascrotal area,3 the ankle and foot,4,5 and the subungual region.6 A series of 397 peripheral neural sheath tumors examined over 30 years showed 55 sporadic solitary neurofibromas occurring in the brachial plexus region, 45 in the upper extremities, 10 in the pelvic plexus, and 31 in the lower extremities.7

Management of sporadic solitary neurofibroma depends on the degree of the patient’s discomfort. For asymptomatic lesions, serial observation is all that is required. Complete surgical excision, including the parent nerve, is the treatment for large lesions. More research is needed to define the potential role of drugs such as pirfenidone and tipifarnib.

THE DIFFERENTIAL DIAGNOSIS

Sporadic solitary neurofibroma can masquerade as pure neuritic Hansen disease (leprosy), phaeohyphomycosis, and palisaded neutrophilic granulomatous dermatitis. The absence of neural symptoms and of trophic changes excluded pure neuritic Hansen disease. Phaeohyphomycosis presents clinically as a single cyst that may evolve into pigmented plaques,8 and the diagnosis relies on demonstrating fungus in tissue; the absence of cystic changes on examination and of fungi on histopathologic study argued against it in this patient. Palisaded neutrophilic granulomatous dermatitis is characterized clinically by cordlike skin lesions (the “rope sign”) and is accompanied by extracutaneous, mostly articular, features; histopathologically, it shows an intense neutrophilic infiltrate and an interstitial histiocytic infiltrate along with collagen degeneration. The absence of extracutaneous and classic histologic features excluded this possibility in this patient.

Though sporotrichosis and cutaneous atypical mycobacterial infections may present in a linear fashion following the course of the lymphatic vessels, the absence of epidermal changes after a disease course of 15 years and the absence of granulomatous infiltrate on histopathologic study excluded these possibilities in this patient.

The patient was referred to a plastic surgeon, and the lesions were successfully resected. She did not return for additional review after that.

References
  1. Hirbe AC, Gutmann DH. Neurofibromatosis type 1: a multidisciplinary approach to care. Lancet Neurol 2014; 13:834–843.
  2. Pulathan Z, Imamoglu M, Cay A, Guven YK. Intermittent claudication due to right common femoral artery compression by a solitary neurofibroma. Eur J Pediatr 2005; 164:463–465.
  3. Hosseini MM, Geramizadeh B, Shakeri S, Karimi MH. Intrascrotal solitary neurofibroma: a case report and review of the literature. Urol Ann 2012; 4:119–121.
  4. Carvajal JA, Cuartas E, Qadir R, Levi AD, Temple HT. Peripheral nerve sheath tumors of the foot and ankle. Foot Ankle Int 2011; 32:163–167.
  5. Tahririan MA, Hekmatnia A, Ahrar H, Heidarpour M, Hekmatnia F. Solitary giant neurofibroma of thigh. Adv Biomed Res 2014; 3:158.
  6. Huajun J, Wei Q, Ming L, Chongyang F, Weiguo Z, Decheng L. Solitary subungual neurofibroma in the right first finger. Int J Dermatol 2012; 51:335–338.
  7. Kim DH, Murovic JA, Tiel RL, Moes G, Kline DG. A series of 397 peripheral neural sheath tumors: 30-year experience at Louisiana State University Health Sciences Center. J Neurosurg 2005; 102:246–255.
  8. Garnica M, Nucci M, Queiroz-Telles F. Difficult mycoses of the skin: advances in the epidemiology and management of eumycetoma, phaeohyphomycosis and chromoblastomycosis. Curr Opin Infect Dis 2009; 22:559–563.
Author and Disclosure Information

Gitesh U. Sawatkar, MD
Department of Dermatology, Venereology, and Leprology, Postgraduate Institute of Medical Education and Research, Chandigarh, India

Dipankar De, MD
Department of Dermatology, Venereology, and Leprology, Postgraduate Institute of Medical Education and Research, Chandigarh, India

Uma Nahar Saikia, MD
Department of Histopathology, Postgraduate Institute of Medical Education and Research, Chandigarh, India

Sanjeev Handa, MD, FRCP (EDIN)
Department of Dermatology, Venereology, and Leprology, Postgraduate Institute of Medical Education and Research, Chandigarh, India

Address: Dipankar De, MD, Department of Dermatology, Venereology, and Leprology, Postgraduate Institute of Medical Education and Research, Sector 12, Chandigarh 160012, India; [email protected]


Anticoagulation in dental surgery: Is it rude to interrupt?

Article Type
Changed
Wed, 08/16/2017 - 13:27
Display Headline
Anticoagulation in dental surgery: Is it rude to interrupt?

When I was growing up, my mother frequently told me that it was rude to interrupt. Although she was referring to conversations, she may have been onto something bigger.

In the nearly three quarters of a century since their discovery, vitamin K antagonist anticoagulants have been used by millions of patients to prevent heart attack and stroke. Before these patients undergo surgery, a decision to continue or interrupt anticoagulation must be made, weighing the risk of postsurgical hemorrhage if anticoagulation is continued against the risk of stroke or other embolic complications if it is interrupted. Bleeding after dental surgery when anticoagulation is continued is rarely if ever life-threatening. On the other hand, embolic complications of interrupting anticoagulation are almost always consequential and often lead to death or disability. Although considerations may differ for other types of surgery, there is no need to interrupt lifesaving anticoagulation for dental surgery.

EVIDENCE THAT SUPPORTS CONTINUING ANTICOAGULATION

As early as 1957, there were reports of prolonged postoperative bleeding after dental extractions in patients taking anticoagulants. But there were also reports of embolic complications in patients whose anticoagulation was interrupted for dental procedures. Since then, there has been a plethora of literature in this area.

A review published in 2000 showed that of more than 950 anticoagulated patients undergoing more than 2,400 dental surgical procedures (including simple and surgical extraction, alveoplasty, and gingival surgery), only 12 (< 1.3%) required more than local measures for hemostasis (eg, fresh-frozen plasma, vitamin K), and no patient died,1 leading to the conclusion that the bleeding risk was not significant in anticoagulated dental patients. Other studies and systematic reviews have also concluded that anticoagulation for dental procedures should not be interrupted.2,3 In a recent review of 83 studies, only 31 (0.6%) of 5,431 patients taking warfarin suffered bleeding complications requiring more than local measures for hemostasis; there were no fatalities.4

The risk of embolism

There have been many reports of embolic complications in patients whose anticoagulation was interrupted for dental procedures. A 2000 review of 575 cases in 526 patients whose anticoagulation was interrupted for dental procedures showed that 5 patients (0.9%) had a serious embolic complication, and 4 died.1 In a more recent review of 64 studies and more than 2,673 patients whose anticoagulation was interrupted for dental procedures, 22 patients (0.8%) suffered embolic complications, and 6 (0.2%) died of the complications.4 In those with embolic complications, the interruption period was often not reported; when it was, it ranged from 1 to 4 days. A 2003 systematic review by Dunn and Turpie found a 0.4% embolic complication rate when anticoagulation was interrupted for dental surgery.2
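
The percentages quoted in this section and in the bleeding data above follow directly from the raw counts; a quick recalculation using only the numbers cited in the text:

```python
# Complication counts and denominators as quoted in the text. The review
# denominators are the approximate totals given ("more than 950" and
# "more than 2,673" patients), so the computed percentages are upper bounds.
cited = [
    ("Bleeding needing more than local hemostasis (2000 review)",        12, 950),
    ("Bleeding needing more than local hemostasis (83-study review)",    31, 5431),
    ("Embolic complications after interruption (2000 review)",            5, 575),
    ("Embolic complications after interruption (64-study review)",       22, 2673),
    ("Fatal embolic complications after interruption (64-study review)",  6, 2673),
]
for label, events, n in cited:
    print(f"{label}: {events}/{n} = {100 * events / n:.2f}%")
```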

BLEEDING AFTER DENTAL SURGERY

Bleeding after dental surgery can occur with either anticoagulation continuation or interruption, and minor postoperative bleeding requiring additional local hemostatic methods occurs at about the same rate in anticoagulated patients as in those whose anticoagulation is interrupted.

In our recent literature review,4 about 6% of patients in whom anticoagulation was interrupted (and 7% in whom it was not) had minor bleeding requiring additional local hemostasis, and only 0.2% of patients required more than local hemostatic measures (eg, vitamin K injection, plasma transfusion), the same rate found by Dunn and Turpie.2 All patients who required more than local hemostatic measures presumably made a full recovery, while at least 6 patients who suffered postoperative embolic complications died, and the rest may have been left with permanent disabilities.

Although bridging therapy with low-molecular-weight heparin can decrease the time without anticoagulation for a dental procedure to only 12 hours, it can be complicated to implement, and there appears to be no benefit in terms of the rates of bleeding or embolic complications. Of the 64 anticoagulation interruption studies,4 17 used heparin or low-molecular-weight heparin in conjunction with temporary warfarin interruption. In 210 instances of bridging therapy in 202 patients undergoing dental procedures, there were 2 embolic complications (1% of bridging cases) and 20 bleeding complications, with 3 (1.4%) requiring hemostasis beyond local measures.4

Many of the studies analyzed independently showed there was no significant difference in postoperative bleeding with:

  • Anticoagulation continuation vs interruption for a few days
  • Lower vs higher international normalized ratio (INR), including some over 4.0
  • Surgical vs nonsurgical extraction
  • Few vs many extractions.4

Some studies of anticoagulation and anticoagulation interruption for dental surgery had important limitations. Many of the anticoagulation studies excluded patients at high risk of bleeding, those with a high INR (> 4.0), and those with severe liver or kidney disease, and their exclusion could have lowered the incidence of bleeding complications. Many studies of anticoagulation interruption excluded patients at high risk of embolism, including patients with a previous embolic event and patients with an artificial heart valve, and this could have skewed the results lower for embolic complications.

WHY DO SOME CLINICIANS STILL RECOMMEND INTERRUPTION?

The choice seems clear: the small risk of a nonfatal bleeding complication when anticoagulation is continued for dental surgery is outweighed by the small risk of a disabling or fatal embolic complication when it is interrupted. Most authors have concluded that anticoagulation should be continued for dental surgery. Yet surveys of dentists and physicians have shown that many still recommend interrupting anticoagulation for dental surgery.5,6

Medical and dental association positions

The American Academy of Neurology7 and the American Dental Association8 recommend continuing anticoagulant medications for dental surgery. The American College of Chest Physicians also recommends continuing anticoagulation but in 2012 added an option to interrupt or decrease anticoagulation for 2 to 3 days for dental surgery.9 Their recommendation was based partly on the results of four controlled prospective studies10–13 comparing anticoagulated dental surgical patients with patients whose anticoagulation was interrupted. In each study, there were no embolic or bleeding complications requiring more than local methods for hemostasis in the interruption groups, leading the American College of Chest Physicians to conclude that brief anticoagulation interruption for dental surgery is safe and effective.

But the results of these studies actually argue against interrupting anticoagulation for dental surgery. In each study, rates of postoperative bleeding complications and blood loss were similar in both groups, and there were no embolic complications. The authors of each study independently concluded that anticoagulation should not be interrupted for dental surgery.

The optimal INR range for anticoagulation therapy is widely accepted as 2.0 to 3.0, and 2.5 to 3.5 for patients with a mechanical mitral valve.14 Interrupting warfarin anticoagulation for 2 or 3 days leads to a suboptimal INR. Patel et al15 studied the incidence of embolic complications (including stroke, non-central nervous system embolism, myocardial infarction, and vascular death) within 30 days in 7,082 patients taking warfarin with and without an interruption of therapy of at least 3 days (median 6 days). The observed rate of embolic events in those with temporary interruption (10.75 events per 100 patient-years) was more than double the rate in those without interruption (4.03 per 100 patient-years).15 However, this study was designed to compare rivaroxaban vs warfarin, not interrupting vs not interrupting warfarin.
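
Because rates per 100 patient-years can be hard to interpret over a 30-day window, the brief calculation below restates the two rates quoted above; the conversion assumes a constant event rate and is illustrative only.

```python
# Embolic event rates from Patel et al, as quoted in the text (per 100 patient-years)
rate_interrupted, rate_uninterrupted = 10.75, 4.03

print(f"Rate ratio: {rate_interrupted / rate_uninterrupted:.2f}")  # about 2.7, ie, more than double

def per_1000_patients_over_30_days(rate_per_100_patient_years):
    """Approximate events per 1,000 patients followed for 30 days, assuming a constant rate."""
    return rate_per_100_patient_years / 100 / 365.25 * 30 * 1000

print(f"About {per_1000_patients_over_30_days(rate_interrupted):.1f} vs "
      f"{per_1000_patients_over_30_days(rate_uninterrupted):.1f} events per 1,000 patients over 30 days")
```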

A DECISION-TREE REANALYSIS

In 2010, Balevi published a decision-tree analysis that slightly favored withdrawing warfarin for dental surgery, but he stated that the analysis “can be updated in the future as more accurate and up-to-date data for each of the variables in the model become available.”16 Now that there are more accurate and up-to-date data, it is time to revisit this decision-tree analysis.

In Balevi’s analysis, major bleeding is not defined. But major bleeding after dental surgery should be defined as any bleeding requiring more than local measures for hemostasis. In calculating probabilities for the analysis, Balevi cited studies allegedly showing high incidences of major bleeding after dental extractions with warfarin continuation.17,18 There were some minor bleeding complications necessitating additional local measures for hemostasis in these studies, but there were no major bleeding complications at all in either the warfarin-continuation or the warfarin-interruption group, and the bleeding rates did not differ significantly between the two groups. In both studies, the authors concluded that warfarin interruption for dental surgery should be reconsidered.

Similarly, Balevi accurately asserted that there has never been a reported case of fatal bleeding after a dental procedure in an anticoagulated patient, but “for the sake of creating balance,”16 his decision-tree analysis uses a fatal bleeding probability of 1%, based on an estimated 1% risk for nondental procedures (eg, colorectal surgery, major abdominal surgery). It is unclear how a 1% incidence creates “balance,” but dental surgery is unlike other types of surgery, and that is one reason there has never been a documented postdental fatal hemorrhage in an anticoagulated patient. Major vessels are unlikely to be encountered, and bleeding sites are easily accessible to local hemostatic methods.

Balevi used an embolic complication incidence of 0.059% with warfarin interruption of 3 days. Perhaps he used such a low embolic probability because of his incorrect assertion that “there has been no reported case of a dental extraction causing a cardiovascular accident in a patient whose warfarin was temporarily discontinued.”16 In fact, our group has now identified at least 22 reported cases of embolic complications after temporary interruption of warfarin therapy in patients undergoing dental surgery.4 These included 12 embolic complications (3 fatal) after interruption periods from 1 to 5 days.19,20 In addition, there are numerous cases of embolic complications reported in patients whose warfarin was temporarily interrupted for other types of surgery.21,22

The literature shows that embolic complications after temporary warfarin interruption occur at a much higher rate than 0.059%. Many documented embolic complications have occurred after relatively long warfarin interruption periods (greater than 5 days), but many have occurred with much shorter interruptions. Wysokinski et al21 showed that there was a 1.1% incidence of thromboembolic events, more than 18 times greater than Balevi’s incidence, in patients whose warfarin was interrupted for 4 or 5 days with or without bridging therapy. One of these patients developed an occipital infarct within 3 days after stopping warfarin without bridging (for a nondental procedure). Garcia et al22 showed that of 984 warfarin therapy interruptions of 5 days or less, there were 4 embolic complications, a rate (0.4%) more than 6 times greater than that reported by Balevi.
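
The multiples cited here follow from straightforward arithmetic on the reported rates, shown only as a check:

\[
\frac{1.1\%}{0.059\%} \approx 18.6,
\qquad
\frac{4}{984} \approx 0.41\%, \quad \frac{0.41\%}{0.059\%} \approx 6.9.
\]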

Even if one were to accept a 0.059% embolic risk from interruption of warfarin, that would mean for every 1,700 warfarin interruptions for dental procedures, there would be one possibly fatal embolic complication. On the other hand, if 1,700 dental surgeries were performed without warfarin interruption, based on the literature, there may be some bleeding complications, but none would be fatal. If airline flights had a 0.059% chance of crashing, far fewer people would choose to fly. (There are 87,000 airline flights in the US per day. A 0.059% crash rate would mean there would be 51 crashes per day in the United States alone.)
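
For readers who wish to verify these figures, the underlying arithmetic is:

\[
\frac{1}{0.00059} \approx 1{,}695 \approx 1{,}700,
\qquad
87{,}000 \times 0.00059 \approx 51 \text{ crashes per day}.
\]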

But regardless of whether the embolic risk is 0.059% or 1%, the question comes down to whether an anticoagulated patient should be subjected to a small but significant risk of death or permanent disability (if anticoagulation is interrupted) or to a small risk of a bleeding complication (if anticoagulation is continued), when every reported postoperative bleeding complication to date has apparently been followed by a full recovery.

In short, the decision-tree analysis was fatally flawed: it grossly overestimated the incidence of fatal bleeding when warfarin is continued and grossly underestimated the incidence of embolic complications when warfarin is interrupted.

IS WARFARIN CONTINUATION ‘TROUBLESOME’?

An oral surgeon stated, “My experience and that of many of my colleagues is that even though bleeding is never life-threatening [emphasis mine], it can be difficult to control at therapeutic levels of anticoagulation and can be troublesome, especially for elderly patients.”23 The American College of Chest Physicians stated that postoperative bleeding after dental procedures can cause “anxiety and distress.”3 Patients with even minor postoperative bleeding can be anxious, but surely, postoperative stroke is almost always far more troublesome than postoperative bleeding, which has never been life-threatening. Although other types of surgery may be different, there is no need to interrupt lifesaving anticoagulation for innocuous dental surgery.

My mother was right—it can be rude to interrupt. Anticoagulation should not be interrupted for dental surgery.

References
  1. Wahl MJ. Myths of dental surgery in patients receiving anticoagulant therapy. J Am Dent Assoc 2000; 131:77–81.
  2. Dunn AS, Turpie AG. Perioperative management of patients receiving oral anticoagulants: a systematic review. Arch Intern Med 2003; 163:901–908.
  3. Nematullah A, Alabousi A, Blanas N, Douketis JD, Sutherland SE. Dental surgery for patients on anticoagulant therapy with warfarin: a systematic review and meta-analysis. J Can Dent Assoc 2009; 75:41.
  4. Wahl MJ, Pintos A, Kilham J, Lalla RV. Dental surgery in anticoagulated patients—stop the interruption. Oral Surg Oral Med Oral Pathol Oral Radiol 2015; 119:136–157.
  5. van Diermen DE, van der Waal I, Hoogvliets MW, Ong FN, Hoogstraten J. Survey response of oral and maxillofacial surgeons on invasive procedures in patients using antithrombotic medication. Int J Oral Maxillofac Surg 2013; 42:502–507.
  6. Ward BB, Smith MH. Dentoalveolar procedures for the anticoagulated patient: literature recommendations versus current practice. J Oral Maxillofac Surg 2007; 65:1454–1460.
  7. Armstrong MJ, Gronseth G, Anderson DC, et al. Summary of evidence-based guideline: periprocedural management of antithrombotic medications in patients with ischemic cerebrovascular disease. Report of the Guideline Development Subcommittee of the American Academy of Neurology. Neurology 2013; 80:2065–2069.
  8. American Dental Association (ADA). Anticoagulant antiplatelet medications and dental procedures. www.ada.org/en/member-center/oral-health-topics/anticoagulant-antiplatelet-medications-and-dental-. Accessed May 16, 2016.
  9. Douketis JD, Spyropoulos AC, Spencer FA, et al; American College of Chest Physicians. Perioperative management of antithrombotic therapy. Antithrombotic therapy and prevention of thrombosis, 9th ed: American College of Chest Physicians evidence-based clinical practice guidelines. Chest 2012; 141(suppl 2):e326S–e350S.
  10. Campbell JH, Alvarado F, Murray RA. Anticoagulation and minor oral surgery: should the anticoagulation regimen be altered? J Oral Maxillofac Surg 2000; 58:131–135.
  11. Devani P, Lavery M, Howell CJT. Dental extractions in patients on warfarin: is alteration of anticoagulation regime necessary? Br J Oral Maxillofac Surg 1998; 36:107–111.
  12. Gaspar R, Brenner B, Ardekian L, Peled M, Laufer D. Use of tranexamic acid mouthwash to prevent postoperative bleeding in oral surgery patients on oral anticoagulant medication. Quintessence Int 1997; 28:375–379.
  13. Blinder D, Manor Y, Martinowitz U, Taicher S. Dental extractions in patients maintained on oral anticoagulant therapy: comparison of INR value with occurrence of postoperative bleeding. Int J Oral Maxillofac Surg 2001; 30:518–521.
  14. Whitlock RP, Sun JC, Fremes SE, Rubens FD, Teoh KH; American College of Chest Physicians. Antithrombotic and thrombolytic therapy for valvular disease: antithrombotic therapy and prevention of thrombosis, 9th ed: American College of Chest Physicians evidence-based clinical practice guidelines. Chest 2012; 141(suppl 2):e576S–e600S.
  15. Patel MR, Hellkamp AS, Lokhnygina Y, et al. Outcomes of discontinuing rivaroxaban compared with warfarin in patients with nonvalvular atrial fibrillation: analysis from the ROCKET AF trial (rivaroxaban once-daily, oral, direct factor Xa inhibition compared with vitamin K antagonism for prevention of stroke and embolism trial in atrial fibrillation). J Am Coll Cardiol 2013; 61:651–658.
  16. Balevi B. Should warfarin be discontinued before a dental extraction? A decision-tree analysis. Oral Surg Oral Med Oral Pathol Oral Radiol Endod 2010; 110:691–697.
  17. Al-Mubarak S, Al-Ali N, Abou Rass M, et al. Evaluation of dental extractions, suturing and INR on postoperative bleeding of patients maintained on oral anticoagulant therapy. Br Dent J 2007; 203:E15.
  18. Evans IL, Sayers MS, Gibbons AJ, Price G, Snooks H, Sugar AW. Can warfarin be continued during dental extraction? Results of a randomized controlled trial. Br J Oral Maxillofac Surg 2002; 40:248–252.
  19. Yasaka M, Naritomi H, Minematsu K. Ischemic stroke associated with brief cessation of warfarin. Thromb Res 2006; 118:290–293.
  20. Akopov SE, Suzuki S, Fredieu A, Kidwell CS, Saver JL, Cohen SN. Withdrawal of warfarin prior to a surgical procedure: time to follow the guidelines? Cerebrovasc Dis 2005; 19:337–342.
  21. Wysokinski WE, McBane RD, Daniels PR, et al. Periprocedural anticoagulation management of patients with nonvalvular atrial fibrillation. Mayo Clin Proc 2008; 83:639–645.
  22. Garcia DA, Regan S, Henault LE, et al. Risk of thromboembolism with short-term interruption of warfarin therapy. Arch Intern Med 2008; 168:63–69.
  23. Todd DW. Anticoagulated patients and oral surgery [letter]. Arch Intern Med 2003; 163:1242.
Author and Disclosure Information

Michael J. Wahl, DDS
Department of Oral and Maxillofacial Surgery and Hospital Dentistry, Christiana Care Health System, Wilmington, DE; Wahl Family Dentistry, Wilmington, DE

Address: Michael J. Wahl, DDS, 2003 Concord Pike, Wilmington, DE 19803; [email protected]

The fifth vital sign: A complex story of politics and patient care

Article Type
Changed
Wed, 08/16/2017 - 13:26
Display Headline
The fifth vital sign: A complex story of politics and patient care

In this issue of the Journal, Dr. Marissa Galicia-Castillo discusses the use of opioids in older patients with persistent (formerly known as chronic) pain. Even though she devotes one and a half pages to the side effects of chronic opioid therapy, I am sure that in the current environment many readers will perceive her as expressing a surprisingly supportive tone regarding the use of these medications. The times have changed, and the difficulties and complexities of trying to help patients with ongoing pain have increased.

In the mid-1990s, the American Pain Society aggressively pushed the concept of pain as the fifth vital sign.1 Their stated goals included raising awareness that patients with pain were undertreated, in large part because in the Society’s opinion pain was not regularly assessed at physician office visits or even in the hospital after surgery. Half a decade later the Joint Commission and others hopped on this train, emphasizing that pain needs to be regularly assessed in all patients, that pain is a subjective measure, unlike the heart rate or blood pressure, and that physicians must accept and respect patient self-reporting of pain. Concurrent with these efforts was the enhanced promotion of pain medications—new highly touted and frequently prescribed narcotics as well as nonnarcotic medications re-marketed as analgesics. Opportunistically, or perhaps wielding inappropriate and sketchy influence, some drug manufacturers in the early 2000s funded publications and physician presentations to encourage the expanded use of opioids and other medications for pain control. In a recent CNN report on the opioid epidemic, it was noted that the Joint Commission published a book in 2000 for purchase by doctors as part of required continuing education seminars, and that the book cited studies claiming “there is no evidence that addiction is a significant issue when persons are given opioids for pain control.”2 According to the CNN report, the book was sponsored by a manufacturer of narcotic analgesics.2 Lack of evidence is not evidence supporting a lack of known concern.

Step forward in time, and pain control has become a measure of patient satisfaction, and thus potentially another physician and institutional rating score that can be linked to reimbursement. This despite reports suggesting that incorporation of this required pain scale did not actually improve the quality of pain management.3 I suspect that most of my peers function in the outpatient clinic as I do, without much interest in what was recorded on the intake pain scale, since I will be taking a more focused and detailed history from the patient if pain is any part of the reason for visiting with me. The goal of alleviating a patient’s pain, whenever reasonable, must always be on our agenda. Yet, while we need to respond to scores on a somewhat silly screening pain scale, the hurdles to prescribing analgesics are getting higher.

The latest data on opioid-related deaths are sobering and scary. Organized medicine must absolutely push to close the pain-pill mills, but is the link really so strong between thoughtful prescribing of short- or even long-term opioids and the escalating “epidemic” of opioid complications that we should not prescribe these drugs? Does the fact that we don’t have good data demonstrating long-term efficacy mean that these drugs are not effective in appropriately selected patients? Is it warranted to require regular database reviews of all patients who are prescribed these medications? Is it warranted, as one patient said to me, that she be treated like a potential criminal begging for drugs when her prescriptions are up, and that she be “looked at funny” by the pharmacist when she fills them?

An increasingly discussed concept is that of central generalization of pain, and patients who have this may be opioid-resistant and, perhaps, prone to developing opioid hyperalgesia. It has been studied in patients with fibromyalgia and is now felt by some to include patients with osteoarthritis and other initially localized painful conditions. Whether or not this concept ultimately turns out to be correct, it adds another dimension to our assessment of patients with pain.

The time has come to move past using a one-size-fits-all fifth vital sign (“How would you rate your pain on a scale of 1 to 10?”) and reflexively prescribing an opioid when pain is characterized as severe. But, if the patient truly needs the drug, we also need to move past not writing that prescription because of headlines and administrative hurdles. This is a much more complex story.

References
  1. American Pain Society Quality of Care Committee. Quality improvement guidelines for the treatment of acute pain and cancer pain. JAMA 1995; 274:1874–1880.
  2. Moghe S. Opioid history: from ‘wonder drug’ to abuse epidemic. www.cnn.com/2016/05/12/health/opioid-addiction-history/. Accessed May 16, 2016.
  3. Mularski RA, White-Chu F, Overbay D, et al. Measuring pain as the 5th vital sign does not improve quality of pain management. J Gen Intern Med 2006; 21:607–612.

Opioids for persistent pain in older adults


The use of opioid analgesics is widely accepted for treating severe acute pain, cancer pain, and pain at the end of life.1 However, their long-term use for other types of persistent pain (Table 1) remains controversial. Clinicians and regulators need to work together to achieve a balanced approach to the use of opioids, recognizing the legitimate medical need for these medications for persistent pain while acknowledging their increasing misuse and the morbidity and mortality related to them. Finding this balance is particularly challenging in older patients.2

PAIN IN OLDER PEOPLE: COMPLICATED, OFTEN UNDERTREATED

Persistent pain is a multifaceted manifestation of an unpleasant sensation that continues for a prolonged time and may or may not be related to a distinct disease process.3 (The term “persistent pain” is preferred as it does not have the negative connotations of “chronic pain.”4) “Older” has been defined as age 65 and older. As our population ages, especially to age 85 and older, more people will be living with persistent pain due to a variety of conditions.5

Persistent pain is more complicated in older than in younger patients. Many older people have more than one illness, making them more susceptible to adverse drug effects and interactions, in part because of altered pharmacokinetics and pharmacodynamics.6 Up to 40% of older outpatients report pain,7 and pain affects 70% to 80% of patients with advanced malignant disease.8 Pain is also prevalent in nonmalignant, progressive, life-limiting illnesses that are common in the geriatric population, affecting 41% to 77% of patients with advanced heart disease, 34% to 77% with advanced chronic obstructive pulmonary disease, and 47% to 50% with advanced renal disease.9

Pain is underrecognized in nursing home residents, who may have multiple somatic complaints and multiple causes of pain.10,11 From 27% to 83% of older adults in institutional settings are affected by pain.12 Caregiver stress and attitudes toward pain may influence patients’ experience of pain and should also be assessed.3

Pain in older adults is often undertreated, as evidenced by the findings of a study in which only one-third of older patients with persistent pain were receiving treatment that was consistent with current guidelines.13 Approximately 40% to 80% of older adults in the community with pain do not receive any treatment for it.14,15 Of those residing in institutions, 16% to 27% of older adults in pain do not receive any treatment for it.16,17 Inadequate treatment of persistent pain is associated with many adverse outcomes, including functional decline, falls, mood changes, decreased socialization, sleep and appetite difficulties, and increased healthcare utilization.18

GOALS: BETTER QUALITY OF LIFE AND FUNCTION

Persistent pain is multifactorial and so requires an approach that addresses a variety of causes and includes both nonpharmacologic and pharmacologic strategies. Opioids are part of a multipronged approach to pain management.

To avoid adverse effects, opioids for persistent pain in an older adult should be prescribed at the lowest possible dose that provides adequate analgesia. Due to age-related changes, finding the best treatments may be a challenge, and understanding the pharmacokinetic implications in this population is key (Table 2).

Complete pain relief is uncommon and is not the goal when using opioids in older patients. Rather, treatment goals should focus on quality of life and function. Patients need to be continually educated about these goals and regularly reassessed during treatment.

APPROACH TO PAIN MANAGEMENT

Initial steps in managing pain should always include a detailed pain assessment, ideally by an interdisciplinary team.19,20 Physical therapy, cognitive behavioral therapy, and patient and caregiver education are some effective nonpharmacologic strategies.3 If nonpharmacologic treatments are ineffective, pharmacologic strategies should be used. Often, both nonpharmacologic and pharmacologic treatments work well for persistent pain.

The World Health Organization’s three-step ladder approach, originally developed for cancer pain, has subsequently been adopted for all types of pain.

  • Step 1 of the ladder is nonopioid analgesics, with or without adjuvant agents.
  • Step 2, if the pain persists or increases, is a weak opioid (eg, codeine, tramadol), with or without a nonopioid analgesic and with or without an adjuvant agent.
  • Step 3 is a strong opioid (eg, morphine, oxycodone, hydromorphone, fentanyl, or methadone), with or without nonopioid and adjuvant agents.

The European Association for Palliative Care recommendations state that there is no significant difference between morphine, oxycodone, and hydromorphone when given orally.21 Although this ladder has been modernized somewhat,22 it still provides a conceptual and practical guide.
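To make the stepwise logic concrete, the sketch below encodes the ladder as a simple escalation rule. It is purely illustrative: the function name, the escalation condition, and the reassessment loop are assumptions added here, not part of the WHO guidance or of this article.

```python
# Minimal sketch of the WHO three-step ladder as an escalation rule.
# The drug classes per step are taken from the text; the function,
# escalation condition, and reassessment loop are illustrative assumptions.

LADDER = {
    1: "nonopioid analgesic (eg, acetaminophen), with or without an adjuvant",
    2: "weak opioid (eg, codeine, tramadol), with or without a nonopioid and adjuvant",
    3: "strong opioid (eg, morphine, oxycodone, hydromorphone), with or without a nonopioid and adjuvant",
}

def next_step(current_step: int, pain_persists_or_increases: bool) -> int:
    """Advance one rung only if pain persists or increases; never skip rungs or exceed step 3."""
    if pain_persists_or_increases and current_step < 3:
        return current_step + 1
    return current_step

step = 1
for reassessment, persists in enumerate((True, True, False), start=1):  # hypothetical reassessments
    step = next_step(step, persists)
    print(f"after reassessment {reassessment}: step {step} -> {LADDER[step]}")
```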

FIRST STEP: NONOPIOID ANALGESICS

Acetaminophen is first-line

Acetaminophen is the first-line drug for persistent pain, as it is effective and safe. It does not have the same gastrointestinal and renal side effects that nonsteroidal anti-inflammatory drugs (NSAIDs) do. It also has fewer drug interactions, and its clearance does not decline with age.23

However, older adults should not take more than 3 g of acetaminophen in 24 hours.24 It should be used with extreme caution, if at all, in patients who have hepatic insufficiency or chronic alcohol abuse or dependence.
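As a purely illustrative aid, the toy calculation below checks a hypothetical scheduled regimen against the 3-g daily ceiling cited above; the regimen and variable names are assumptions, not a dosing recommendation.

```python
# Toy check of a hypothetical scheduled regimen against the 3 g/24 h
# acetaminophen ceiling for older adults cited in the text.
# The 650 mg every-6-hours regimen is an example, not a recommendation.

MAX_MG_PER_24_H = 3000
scheduled_doses_mg = [650, 650, 650, 650]   # four doses over 24 hours

total_mg = sum(scheduled_doses_mg)
print(f"{total_mg} mg/24 h:",
      "within the 3 g ceiling" if total_mg <= MAX_MG_PER_24_H else "exceeds the 3 g ceiling")
```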

Topical therapies

Topical NSAIDs provide local analgesia with less risk of systemic side effects than oral NSAIDs, which have a limited role in the older population.

Capsaicin, which depletes substance P, has primarily been studied for neuropathic pain.

Lidocaine 5% topical patch has been found effective for postherpetic neuralgia; however, there is limited evidence for using it in other painful conditions, such as osteoarthritis and back pain.25

Adjuvants

Duloxetine is a serotonin and norepinephrine reuptake inhibitor. Studies have found it effective in treating diabetic peripheral neuropathy, fibromyalgia, chronic low back pain, and osteoarthritis knee pain. However, except for the knee study, most of the patients enrolled were younger.

Antiepileptic medications. Gabapentin and pregabalin have been found to be effective in painful neuropathic conditions that commonly occur in older adults.25

Avoid oral NSAIDs

NSAIDs, both nonselective and cyclooxygenase 2-selective, should only rarely be considered for long-term use in older adults in view of the increased risk of conditions such as congestive heart failure, acute kidney injury, and gastrointestinal bleeding.25 These adverse effects seem to be related to inhibition of prostaglandin synthesis; prostaglandins play a physiologic role in the gastrointestinal, renal, and cardiovascular systems.26 Oral NSAIDs should be used with extreme caution.

OPIOIDS

The American Geriatrics Society, American Pain Society, and American Academy of Pain Medicine made recommendations in 2009 supporting the use of opioids to treat persistent pain in patients who are carefully selected and monitored.4,6 In 2008, an international expert panel issued an evidence-based consensus statement27 that also supported the use of opioids in patients over age 65. The Federation of State Medical Boards of the United States also supports the use of opioids, particularly for adults who have refractory pain, and it recognizes undertreatment of pain as a public health issue.28

Clinicians are most comfortable with using opioids to manage cancer pain, but these drugs also provide an acceptable and effective means of analgesia in nonmalignant, persistent pain syndromes.24 The American Geriatrics Society Panel on Pharmacological Management of Persistent Pain in Older Persons recommends treatment with opioids in all patients with moderate-to-severe pain, pain-related functional impairment, or decreased quality of life due to pain, even though the evidence base is not robust.3

Unlike NSAIDs and acetaminophen, opioids do not have a presumed ceiling effect. However, in patients ages 15 to 64, the greatest benefits have been observed at lower doses of opioids, and the risk of death increases with dose.29 The dose can be raised gradually until pain is relieved.

Start low and go slow

When starting opioid therapy:

  • Choose a short-acting agent
  • Give it on a trial basis
  • Start at a low dose and titrate up slowly.

No data are available to tell us how much to give an older adult, but a reasonable starting dose is 30% to 50% of the recommended dose for a younger adult.24 Short-acting opioids should be titrated by increasing the total daily dose by 25% to 50% every 24 hours until adequate analgesia is reached.24
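The arithmetic behind “start low and go slow” can be sketched as follows. This is illustration only, not dosing guidance: the 30% to 50% starting fraction and the 25% to 50% daily increase come from the text, while the example starting dose, the dosing interval, and the function names are assumptions made for this sketch.

```python
# Illustrative arithmetic only, not dosing guidance. The 30%-50% starting
# fraction and 25%-50% daily increase come from the text; the example
# 10 mg "younger-adult" dose, the every-6-hour schedule, and the function
# names are assumptions made for this sketch.

def older_adult_starting_dose(younger_adult_dose_mg: float, fraction: float = 0.3) -> float:
    """Start at 30% to 50% of the dose recommended for a younger adult."""
    assert 0.3 <= fraction <= 0.5
    return younger_adult_dose_mg * fraction

def titrate_total_daily_dose(current_mg_per_day: float, increase: float = 0.25) -> float:
    """Increase the total daily dose by 25% to 50% every 24 hours until analgesia is adequate."""
    assert 0.25 <= increase <= 0.5
    return current_mg_per_day * (1 + increase)

per_dose_mg = older_adult_starting_dose(10)        # 3.0 mg per dose
daily_mg = per_dose_mg * 4                         # assumed scheduled dosing every 6 hours
for day in range(1, 4):
    print(f"day {day}: ~{daily_mg:.1f} mg/day")
    daily_mg = titrate_total_daily_dose(daily_mg)  # reassess pain and side effects before each increase
```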

Older adults who have frequent or continuous pain should receive scheduled (around-the-clock) dosing in an effort to achieve a steady state.3 The half-lives of opioids may be longer in older adults who have renal or hepatic insufficiency; therefore, their doses should be lower and the intervals between doses longer.27

When long-acting opioid preparations are used, it is important to also prescribe breakthrough (short-acting) pain management.2 Breakthrough pain includes end-of-dose failure, incident pain (ie, due to an identifiable cause, such as movement), and spontaneous pain; these can be prevented or treated with short-acting, immediate-release opioid formulations.3

Once therapy is initiated, its safety and efficacy should be continually monitored.2 With long-term use, patients should be reassessed for ongoing attainment of therapeutic goals, adverse effects, and safe and responsible medication use.3

Table 3 lists common opioids and their initial dosing.

SIDE EFFECTS

Constipation

This is one of the most common side effects of opioids,30 and although many opioid side effects wane within days of starting as tolerance develops, this one does not.

A bowel regimen should be initiated when starting any opioid regimen. Although most of the evidence for bowel regimens is anecdotal, increasing fluid and fiber intake and taking stool softeners and laxatives are effective.31

For very difficult cases of opioid-induced constipation, randomized trials suggest that agents with opioid-antagonist activity that specifically target the gastrointestinal system can help.32,33 Opioid antagonists are not used as routine prophylaxis, but rather for constipation that is refractory to laxatives.34,35 A meta-analysis demonstrated that methylnaltrexone, naloxone, and alvimopan were generally well tolerated, with no significant difference in adverse effects compared with placebo.36

Sedation

Sedation due to opioids in opioid-naïve patients is well documented,37 but it decreases over time. When starting or changing the dose of opioids, it is important to counsel patients about driving and safety at work and home.

For persistent opioid-related sedation, three options are available: dose reduction, opioid rotation, and use of psychostimulants.38 Although it does not carry a US Food and Drug Administration indication for this use, methylphenidate has been studied in cancer patients, in whom it has been associated with less drowsiness, decreased pain, and less need for rescue doses of pain medications.39–41

Nausea and vomiting

Nausea and vomiting are common in opioid recipients. These adverse effects usually decrease over days to weeks with continued exposure.

A number of antiemetic therapies are available in oral, rectal, and intravenous formulations, but there is no evidence-based recommendation for antiemetic choice for opioid-induced nausea in patients with cancer.42 It is important to always rule out constipation as the cause of nausea. There is also some evidence that reducing the opioid dose or changing the route of administration may help with symptoms.42–45

Respiratory depression

Although respiratory depression is the most feared adverse effect of opioids, it is rare with low starting doses and appropriate dose titration. Sedation precedes respiratory depression, which typically occurs when initial opioid dosages are too high, titration is too rapid, or opioids are combined with other drugs, such as benzodiazepines, that cause or potentiate respiratory depression.46–51

Patients with sleep apnea may be at higher risk. In addition, in a study that specifically reviewed patients who had persistent pain, specific factors that contributed to opioid-induced respiratory depression were use of methadone and transdermal fentanyl, renal impairment, and sensory deafferentation.52 Buprenorphine was found to have a ceiling effect for respiratory depression, but not for analgesia.49

Central sleep apnea

Chronic opioid use has been associated with sleep-disordered breathing, notably central sleep apnea, which often goes unrecognized. The prevalence of central sleep apnea among long-term opioid users is approximately 24%.53

Although continuous positive airway pressure is the standard of care for obstructive sleep apnea, it is ineffective for central sleep apnea and possibly may make it worse. Adaptive servoventilation is a therapy that may be effective.54

Urinary retention

Opioids can cause urinary retention, which is most noted in a postoperative setting. Changes in bladder function have been found to be partially due to a peripheral opioid effect.55

Initial management: catheterize the bladder for prompt relief and try to reduce the dose of opioids.

Impaired balance and falls

Opioids, particularly when combined with other medications active in the central nervous system, may lead to impaired balance and falls, especially in the elderly.56 In this group, all opioids are associated with falls except for buprenorphine.27,57 Older adults need to be assessed and educated about the risk of falls before they are given opioids. Physical therapy and mobility aids may help in these cases.

Dependence

The prevalence of dependence is low in patients who have no prior history of substance abuse.6 Older age is also associated with a significantly lower risk of opioid misuse and abuse.6

Opioid-induced hyperalgesia

Opioid-induced hyperalgesia should be considered if pain continues to worsen in spite of increasing doses, tolerance to opioids appears to develop rapidly, or pain becomes more diffuse and extends past the distribution of preexisting pain.58 Although the exact mechanism is unclear, exposure to opioids causes nociceptive sensitization, as measured by several techniques.59,60

Opioid-induced hyperalgesia is distinct from opioid analgesia tolerance. A key difference is that opioid tolerance can be overcome by increasing the dose, while opioid-induced hyperalgesia can be exacerbated by it.

Management of opioid-induced hyperalgesia includes decreasing the dose, switching to a different opioid, and maximizing nonopioid analgesia.58 The plan should be clearly communicated to patients and families to avoid misunderstanding.

Other adverse effects

Long-term use of opioids may suppress production of several hypothalamic, pituitary, gonadal, and adrenal hormones.3 Long-term use of opioids is also associated with bone loss.61 Opioids have also demonstrated immunodepressant effects.38,62

OPIOID ROTATION

Trying a different opioid (opioid rotation) may be required if pain remains poorly controlled despite increasing doses or if intolerable side effects occur.

According to consensus guidelines on opioid rotation,63 if the originally prescribed opioid is not providing the appropriate therapeutic effect or the patient cannot tolerate the regimen, an equianalgesic dose (Table 3) of the new opioid is calculated based on the original opioid and then decreased in two safety steps. The first safety step is a 25% to 50% reduction in the calculated equianalgesic dose to account for incomplete cross-tolerance. There are two exceptions: methadone requires a 75% to 90% reduction, and transdermal fentanyl does not require an adjustment. The next step is an adjustment of 15% to 30% based on pain severity and the patient’s medical or psychosocial aspects.63
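A minimal sketch of these two safety steps, assuming a placeholder equianalgesic ratio, is shown below; the function, its parameters, and the worked example are illustrative assumptions rather than a validated conversion tool, and real rotations rely on the published equianalgesic table (Table 3) and clinical judgment.

```python
# Sketch of the two safety steps described above for opioid rotation.
# The equianalgesic ratio, example doses, and function signature are
# placeholders for illustration; real conversions use the published
# equianalgesic table (Table 3) and clinical judgment.

def rotate_dose(current_daily_dose_mg: float,
                equianalgesic_ratio: float,
                new_drug_is_methadone: bool = False,
                new_drug_is_transdermal_fentanyl: bool = False,
                severity_adjustment: float = 0.0) -> float:
    """Calculate an equianalgesic dose, then apply the two safety reductions."""
    equianalgesic_mg = current_daily_dose_mg * equianalgesic_ratio

    # Safety step 1: reduce for incomplete cross-tolerance.
    if new_drug_is_methadone:
        reduced_mg = equianalgesic_mg * (1 - 0.80)   # 75% to 90% reduction
    elif new_drug_is_transdermal_fentanyl:
        reduced_mg = equianalgesic_mg                # no reduction required
    else:
        reduced_mg = equianalgesic_mg * (1 - 0.375)  # 25% to 50% reduction (midpoint shown)

    # Safety step 2: adjust by 15% to 30% for pain severity and patient factors.
    assert -0.30 <= severity_adjustment <= 0.30
    return reduced_mg * (1 + severity_adjustment)

# Hypothetical example: 60 mg/day of oral morphine rotated to oral hydromorphone,
# using a placeholder 5:1 morphine:hydromorphone ratio (0.2) and a 15% downward adjustment.
print(round(rotate_dose(60, 0.2, severity_adjustment=-0.15), 1), "mg/day of the new opioid to start")
```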

SPECIAL POPULATION: PATIENTS WITH DEMENTIA

There are few scientific data on pain management in older adults with dementia. Many patients with mild to moderate dementia can verbally communicate pain reliably,64 but those who are nonverbal are more challenging, as providers must depend on caregiver reports and observational scales.65

Prescribing for patients with dementia, whether verbal or nonverbal, mirrors the strategies used in cognitively intact older adults,66 eg:

  • Use scheduled (around-the-clock) dosing
  • Start with nonopioid medications, advancing to opioids as needed, guided by the WHO ladder
  • Carefully monitor the risks and benefits of pain treatment vs persistent pain.

When it is uncertain whether a patient with dementia is in pain, a trial of analgesics is warranted. Signs of pain include social withdrawal, disturbed sleep, and vegetative signs.

SAFE PRESCRIBING PRACTICES

With the use of opioids to treat persistent pain comes the risk of abuse. A universal precautions approach helps establish reasonable limits before initiating therapy.

A thorough evaluation is required, including description and documentation of pain, disease processes, comorbidities, and effects on function; physical examination; and diagnostic testing. It is also important to inquire about a history of substance abuse. Tools such as the Opioid Risk Tool and the Screener and Opioid Assessment for Patients with Pain-Revised can help gauge risk of misuse or abuse.67,68

Ongoing screening and monitoring are necessary to minimize misuse and diversion. This also involves adhering to federal and state regulatory policies and participating in state prescription drug monitoring programs.69

References
  1. Chou R, Fanciullo GJ, Fine PG, et al; American Pain Society-American Academy of Pain Medicine Opioids Guidelines Panel. Clinical guidelines for the use of chronic opioid therapy in chronic noncancer pain. J Pain 2009; 10:113–130.
  2. West NA, Severtson SG, Green JL, Dart RC. Trends in abuse and misuse of prescription opioids among older adults. Drug Alcohol Depend 2015; 149:117–121.
  3. American Geriatrics Society Panel on Pharmacological Management of Persistent Pain in Older Persons. Pharmacological management of persistent pain in older persons. J Am Geriatr Soc 2009; 57:1331–1346.
  4. Weiner DK, Herr K. Comprehensive interdisciplinary assessment and treatment planning: an integrative overview. In: Weiner DK, Herr K, Rudy TE, editors. Persistent pain in older adults: an interdisciplinary guide for treatment. New York, NY: Springer Publishing Company; 2002.
  5. He W, Sengupta M, Velkoff V; US Census Bureau. 65+ in the United States: 2005. Washington, DC: US Government Printing Office; 2005. www.census.gov/prod/2006pubs/p23-209.pdf. Accessed March 30, 2016.
  6. American Geriatrics Society Panel on the Pharmacological Management of Persistent Pain in Older Persons. Pharmacological management of persistent pain in older persons. Pain Med 2009; 10:1062–1083.
  7. Thomas E, Peat G, Harris L, Wilkie R, Croft PR. The prevalence of pain and pain interference in a general population of older adults: cross-sectional findings from the North Staffordshire Osteoarthritis Project (NorStOP). Pain 2004; 110:361–368.
  8. Caraceni A, Hanks G, Kaasa S, et al; European Palliative Care Research Collaborative (EPCRC); European Association for Palliative Care (EAPC). Use of opioid analgesics in the treatment of cancer pain: evidence-based recommendations from the EAPC. Lancet Oncol 2012; 13:e58–e68.
  9. Solano JP, Gomes B, Higginson IJ. A comparison of symptom prevalence in far advanced cancer, AIDS, heart disease, chronic obstructive pulmonary disease and renal disease. J Pain Symptom Manage 2006; 31:58–69.
  10. Ferrell BA, Ferrell BR, Osterweil D. Pain in the nursing home. J Am Geriatr Soc 1990; 38:409–414.
  11. Ferrell BA, Ferrell BR, Rivera L. Pain in cognitively impaired nursing home patients. J Pain Symptom Manage 1995; 10:591–598.
  12. Fox PL, Raina P, Jadad AR. Prevalence and treatment of pain in older adults in nursing homes and other long-term care institutions: a systematic review. CMAJ 1999; 160:329–333.
  13. Stewart C, Leveille SG, Shmerling RH, Samelson EJ, Bean JF, Schofield P. Management of persistent pain in older adults: the MOBILIZE Boston Study. J Am Geriatr Soc 2012; 60:2081–2086.
  14. Woo J, Ho SC, Lau J, Leung PC. Musculoskeletal complaints and associated consequences in elderly Chinese aged 70 years and over. J Rheumatol 1994; 21:1927–1931.
  15. Pahor M, Guralnik JM, Wan JY, et al. Lower body osteoarticular pain and dose of analgesic medications in older disabled women: the Women’s Health and Aging Study. Am J Public Health 1999; 89:930–934.
  16. Marzinski LR. The tragedy of dementia: clinically assessing pain in the confused nonverbal elderly. J Gerontol Nurs 1991; 17:25–28.
  17. Roy R, Thomas M. A survey of chronic pain in an elderly population. Can Fam Physician 1986; 32:513–516.
  18. AGS Panel on Persistent Pain in Older Persons. The management of persistent pain in older persons. J Am Geriatr Soc 2002; 50(suppl 6): S205–S224.
  19. Stanos S, Houle TT. Multidisciplinary and interdisciplinary management of chronic pain. Phys Med Rehabil Clin N Am 2006; 17:435–450.
  20. Helme RD, Katz B, Gibson SJ, et al. Multidisciplinary pain clinics for older people. Do they have a role? Clin Geriatr Med 1996; 12:563–582.
  21. Harris DG. Management of pain in advanced disease. Br Med Bull 2014; 110:117–128.
  22. Raffa RB, Pergolizzi JV. A modern analgesics pain ‘pyramid’. J Clin Pharm Ther 2014; 39:4–6.
  23. Fine PG, Herr KA. Pharmacologic management of persistent pain in older persons. Clin Geriatr 2009; 17:25–32.
  24. Tracy B, Sean Morrison R. Pain management in older adults. Clin Ther 2013; 35:1659–1668.
  25. Malec M, Shega JW. Pain management in the elderly. Med Clin North Am 2015; 99:337–350.
  26. Abdulla A, Adams N, Bone M, et al; British Geriatric Society. Guidance on the management of pain in older people. Age Ageing 2013; 42(suppl 1):i1–i57.
  27. Pergolizzi J, Böger RH, Budd K, et al. Opioids and the management of chronic severe pain in the elderly: consensus statement of an International Expert Panel with focus on the six clinically most often used World Health Organization Step III opioids (buprenorphine, fentanyl, hydromorphone, methadone, morphine, oxycodone). Pain Pract 2008; 8:287–313.
  28. Gloth FM 3rd. Pharmacological management of persistent pain in older persons: focus on opioids and nonopioids. J Pain 2011; 12(suppl 1):S14–S20.
  29. Gomes T, Mamdani MM, Dhalla IA, Paterson JM, Juurlink DN. Opioid dose and drug-related mortality in patients with nonmalignant pain. Arch Intern Med 2011; 171:686–691.
  30. Moore RA, McQuay HJ. Prevalence of opioid adverse events in chronic non-malignant pain: systematic review of randomised trials of oral opioids. Arthritis Res Ther 2005; 7:R1046–R1051.
  31. Candy B, Jones L, Larkin PJ, Vickerstaff V, Tookman A, Stone P. Laxatives for the management of constipation in people receiving palliative care. Cochrane Database Syst Rev 2015; 5:CD003448.
  32. Webster LR, Butera PG, Moran LV, Wu N, Burns LH, Friedmann N. Oxytrex minimizes physical dependence while providing effective analgesia: a randomized controlled trial in low back pain. J Pain 2006; 7:937–946.
  33. Paulson DM, Kennedy DT, Donovick RA, et al. Alvimopan: an oral, peripherally acting, mu-opioid receptor antagonist for the treatment of opioid-induced bowel dysfunction—a 21-day treatment-randomized clinical trial. J Pain 2005; 6:184–192.
  34. Nalamachu SR, Pergolizzi J, Taylor R, et al. Efficacy and tolerability of subcutaneous methylnaltrexone in patients with advanced illness and opioid-induced constipation: a responder analysis of 2 randomized, placebo-controlled trials. Pain Pract 2015; 15:564–571.
  35. Brick N. Laxatives or methylnaltrexone for the management of constipation in palliative care patients. Clin J Oncol Nurs 2013; 17:91–92.
  36. Ford AC, Brenner DM, Schoenfeld PS. Efficacy of pharmacological therapies for the treatment of opioid-induced constipation: systematic review and meta-analysis. Am J Gastroenterol 2013; 108:1566–1575.
  37. Byas-Smith MG, Chapman SL, Reed B, Cotsonis G. The effect of opioids on driving and psychomotor performance in patients with chronic pain. Clin J Pain 2005; 21:345–352.
  38. Benyamin R, Trescot AM, Datta S, et al. Opioid complications and side effects. Pain Physician 2008; 11(suppl 2):S105–S120.
  39. Wilwerding MB, Loprinzi CL, Mailliard JA, et al. A randomized, crossover evaluation of methylphenidate in cancer patients receiving strong narcotics. Support Care Cancer 1995; 3:135–138.
  40. Bruera E, Miller MJ, Macmillan K, Kuehn N. Neuropsychological effects of methylphenidate in patients receiving a continuous infusion of narcotics for cancer pain. Pain 1992; 48:163–166.
  41. Ahmedzai S. New approaches to pain control in patients with cancer. Eur J Cancer 1997; 33:S8–S14.
  42. Laugsand EA, Kaasa S, Klepstad P. Management of opioid-induced nausea and vomiting in cancer patients: systematic review and evidence-based recommendations. Palliat Med 2011; 25:442–453.
  43. Hardy J, Daly S, McQuade B, et al. A double-blind, randomised, parallel group, multinational, multicentre study comparing a single dose of ondansetron 24 mg p.o. with placebo and metoclopramide 10 mg t.d.s. p.o. in the treatment of opioid-induced nausea and emesis in cancer patients. Support Care Cancer 2002; 10:231–236.
  44. Apfel CC, Jalota L. Can central antiemetic effects of opioids counter-balance opioid-induced nausea and vomiting? Acta Anaesthesiol Scand 2010; 54:129–131.
  45. Okamoto Y, Tsuneto S, Matsuda Y, et al. A retrospective chart review of the antiemetic effectiveness of risperidone in refractory opioid-induced nausea and vomiting in advanced cancer patients. J Pain Symptom Manage 2007; 34:217–222.
  46. Overdyk F, Dahan A, Roozekrans M, van der Schrier R, Aarts L, Niesters M. Opioid-induced respiratory depression in the acute care setting: a compendium of case reports. Pain Manag 2014; 4:317–325.
  47. Niesters M, Overdyk F, Smith T, Aarts L, Dahan A. Buprenorphine-induced respiratory depression and involvement of ABCB1 SNPs in opioid-induced respiratory depression in paediatrics. Br J Anaesth 2013; 110:842–843.
  48. Niesters M, Mahajan RP, Aarts L, Dahan A. High-inspired oxygen concentration further impairs opioid-induced respiratory depression. Br J Anaesth 2013; 110:837–841.
  49. Dahan A, Yassen A, Romberg R, et al. Buprenorphine induces ceiling in respiratory depression but not in analgesia. Br J Anaesth 2006; 96:627–632.
  50. van Dorp E, Yassen A, Sarton E, et al. Naloxone reversal of buprenorphine-induced respiratory depression. Anesthesiology 2006; 105:51–57.
  51. Macintyre PE, Loadsman JA, Scott DA. Opioids, ventilation and acute pain management. Anaesth Intensive Care 2011; 39:545–558.
  52. Dahan A, Overdyk F, Smith T, Aarts L, Niesters M. Pharmacovigilance: a review of opioid-induced respiratory depression in chronic pain patients. Pain Physician 2013; 16:E85–E94.
  53. Correa D, Farney RJ, Chung F, Prasad A, Lam D, Wong J. Chronic opioid use and central sleep apnea: a review of the prevalence, mechanisms, and perioperative considerations. Anesth Analg 2015; 120:1273–1285.
  54. Randerath WJ, George S. Opioid-induced sleep apnea: is it a real problem? J Clin Sleep Med 2012; 8:577–578.
  55. Rosow CE, Gomery P, Chen TY, Stefanovich P, Stambler N, Israel R. Reversal of opioid-induced bladder dysfunction by intravenous naloxone and methylnaltrexone. Clin Pharmacol Ther 2007; 82:48–53.
  56. Weiner DK, Hanlon JT, Studenski SA. Effects of central nervous system polypharmacy on falls liability in community-dwelling elderly. Gerontology 1998; 44:217–221.
  57. Wolff ML, Kewley R, Hassett M, Collins J, Brodeur MR, Nokes S. Falls in skilled nursing facilities associated with opioid use. J Am Geriatr Soc 2012; 60:987.
  58. Zylicz Z, Twycross R. Opioid-induced hyperalgesia may be more frequent than previously thought. J Clin Oncol 2008; 26:1564; author reply 1565.
  59. Lee M, Silverman SM, Hansen H, Patel VB, Manchikanti L. A comprehensive review of opioid-induced hyperalgesia. Pain Physician 2011; 14:145–161.
  60. Chen L, Sein M, Vo T, et al. Clinical interpretation of opioid tolerance versus opioid-induced hyperalgesia. J Opioid Manag 2014; 10:383–393.
  61. Vestergaard P, Hermann P, Jensen JE, Eiken P, Mosekilde L. Effects of paracetamol, non-steroidal anti-inflammatory drugs, acetylsalicylic acid, and opioids on bone mineral density and risk of fracture: results of the Danish Osteoporosis Prevention Study (DOPS). Osteoporos Int 2012; 23:1255–1265.
  62. Sacerdote P, Franchi S, Panerai AE. Non-analgesic effects of opioids: mechanisms and potential clinical relevance of opioid-induced immunodepression. Curr Pharm Des 2012; 18:6034–6042.
  63. Fine PG, Portenoy RK; Ad Hoc Expert Panel on Evidence Review and Guidelines for Opioid Rotation. Establishing “best practices” for opioid rotation: conclusions of an expert panel. J Pain Symptom Manage 2009; 38:418–425.
  64. Chibnall JT, Tait RC. Pain assessment in cognitively impaired and unimpaired older adults: a comparison of four scales. Pain 2001; 92:173–186.
  65. Andrade DC, Faria JW, Caramelli P, et al. The assessment and management of pain in the demented and non-demented elderly patient. Arq Neuropsiquiatr 2011; 69:387–394.
  66. Scherder E, Herr K, Pickering G, Gibson S, Benedetti F, Lautenbacher S. Pain in dementia. Pain 2009; 145:276–278.
  67. Chou R, Fanciullo GJ, Fine PG, Miaskowski C, Passik SD, Portenoy RK. Opioids for chronic noncancer pain: prediction and identification of aberrant drug-related behaviors: a review of the evidence for an American Pain Society and American Academy of Pain Medicine clinical practice guideline. J Pain 2009; 10:131–146.
  68. Butler SF, Budman SH, Fernandez KC, Fanciullo GJ, Jamison RN. Cross-validation of a screener to predict opioid misuse in chronic pain patients (SOAPP-R). J Addict Med 2009; 3:66–73.
  69. de Leon-Casasola OA. Opioids for chronic pain: new evidence, new strategies, safe prescribing. Am J Med 2013; 126(suppl 1):S3–S11.
  70. CDC guideline for prescribing opioids for chronic pain—United States, 2016. MMWR Recomm Rep 2016 Mar 18; 65(1):1–49.
Author and Disclosure Information

Marissa Galicia-Castillo, MD
Sue Faulkner Scribner Professor of Geriatrics, Section Head, Palliative Medicine, Eastern Virginia Medical School, Glennan Center for Geriatrics and Gerontology, Norfolk, VA

Address: Marissa Galicia-Castillo, MD, Glennan Center for Geriatrics and Gerontology, Eastern Virginia Medical School, 825 Fairfax Avenue, Suite 201, Norfolk, VA 23507; [email protected]

Issue
Cleveland Clinic Journal of Medicine - 83(6)
Publications
Topics
Page Number
443-451
Legacy Keywords
Opioids, chronic pain, persistent pain, noncancer pain, marissa galicia-castillo
Sections
Click for Credit Link
Click for Credit Link
Author and Disclosure Information

Marissa Galicia-Castillo, MD
Sue Faulkner Scribner Professor of Geriatrics, Section Head, Palliative Medicine, Eastern Virginia Medical School, Glennan Center for Geriatrics and Gerontology, Norfolk, VA

Address: Marissa Galicia-Castillo, MD, Glenna Center for Geriatrics and Gerontology, Eastern Virginia Medical School, 825 Fairfax Avenue, Suite 201, Norfolk, VA 23507; [email protected]

Author and Disclosure Information

Marissa Galicia-Castillo, MD
Sue Faulkner Scribner Professor of Geriatrics, Section Head, Palliative Medicine, Eastern Virginia Medical School, Glennan Center for Geriatrics and Gerontology, Norfolk, VA

Address: Marissa Galicia-Castillo, MD, Glenna Center for Geriatrics and Gerontology, Eastern Virginia Medical School, 825 Fairfax Avenue, Suite 201, Norfolk, VA 23507; [email protected]

Article PDF
Article PDF
Related Articles

The use of opioid analgesics is widely accepted for treating severe acute pain, cancer pain, and pain at the end of life.1 However, their long-term use for other types of persistent pain (Table 1) remains controversial. Clinicians and regulators need to work together to achieve a balanced approach to the use of opioids, recognizing the legitimate medical need for these medications for persistent pain while acknowledging their increasing misuse and the morbidity and mortality related to them. Finding this balance is particularly challenging in older patients.2

PAIN IN OLDER PEOPLE: COMPLICATED, OFTEN UNDERTREATED

Persistent pain is a multifaceted manifestation of an unpleasant sensation that continues for a prolonged time and may or may not be related to a distinct disease process.3 (The term “persistent pain” is preferred as it does not have the negative connotations of “chronic pain.”4) “Older” has been defined as age 65 and older. As our population ages, especially to age 85 and older, more people will be living with persistent pain due to a variety of conditions.5

Persistent pain is more complicated in older than in younger patients. Many older people have more than one illness, making them more susceptible to adverse drug interactions such as altered pharmacokinetics and pharmacodynamics.6 Up to 40% of older outpatients report pain,7 and pain affects 70% to 80% of patients with advanced malignant disease.8 Pain is also prevalent in nonmalignant, progressive, life-limiting illnesses that are common in the geriatric population, affecting 41% to 77% of patients with advanced heart disease, 34% to 77% with advanced chronic obstructive pulmonary disease, and 47% to 50% with advanced renal disease.9

Pain is underrecognized in nursing home residents, who may have multiple somatic complaints and multiple causes of pain.10,11 From 27% to 83% of older adults in an institutionalized setting are affected by pain.12 Caregiver stress and attitudes towards pain may influence patients’ experiences with pain. This aspect should also be assessed and evaluated, if present.3

Pain in older adults is often undertreated, as evidenced by the findings of a study in which only one-third of older patients with persistent pain were receiving treatment that was consistent with current guidelines.13 Approximately 40% to 80% of older adults in the community with pain do not receive any treatment for it.14,15 Of those residing in institutions, 16% to 27% of older adults in pain do not receive any treatment for it.16,17 Inadequate treatment of persistent pain is associated with many adverse outcomes, including functional decline, falls, mood changes, decreased socialization, sleep and appetite difficulties, and increased healthcare utilization.18

GOALS: BETTER QUALITY OF LIFE AND FUNCTION

Persistent pain is multifactorial and so requires an approach that addresses a variety of causes and includes both nonpharmacologic and pharmacologic strategies. Opioids are part of a multipronged approach to pain management.

To avoid adverse effects, opioids for persistent pain in an older adult should be prescribed at the lowest possible dose that provides adequate analgesia. Due to age-related changes, finding the best treatments may be a challenge, and understanding the pharmacokinetic implications in this population is key (Table 2).

Complete pain relief is uncommon and is not the goal when using opioids in older patients. Rather, treatment goals should focus on quality of life and function. Patients need to be continually educated about these goals and regularly reassessed during treatment.

APPROACH TO PAIN MANAGEMENT

Initial steps in managing pain should always include a detailed pain assessment, ideally by an interdisciplinary team.19,20 Physical therapy, cognitive behavioral therapy, and patient and caregiver education are some effective nonpharmacologic strategies.3 If nonpharmacologic treatments are ineffective, pharmacologic strategies should be used. Often, both nonpharmacologic and pharmacologic treatments work well for persistent pain.

The World Health Organization’s three-step ladder approach, originally developed for cancer pain, has subsequently been adopted for all types of pain.

  • Step 1 of the ladder is nonopioid analgesics, with or without adjuvant agents.
  • Step 2 if the pain persists or increases, is a weak opioid (eg, codeine, tramadol), with or without a nonopioid analgesic and with or without an adjuvant agent.
  • Step 3 is a strong opioid (eg, morphine, oxycodone, hydromorphone, fentanyl, or methadone), with or without nonopioid and adjuvant agents.

The European Association for Palliative Care recommendations state that there is no significant difference between morphine, oxycodone, and hydromorphone when given orally.21 Although this ladder has been modernized somewhat,22 it still provides a conceptual and practical guide.

FIRST STEP: NONOPIOID ANALGESICS

Acetaminophen is first-line

Acetaminophen is the first-line drug for persistent pain, as it is effective and safe. It does not have the same gastrointestinal and renal side effects that nonsteroidal anti-inflammatory drugs (NSAIDs) do. It also has fewer drug interactions, and its clearance does not decline with age.23

However, older adults should not take more than 3 g of acetaminophen in 24 hours.24 It should be used with extreme caution, if at all, in patients who have hepatic insufficiency or chronic alcohol abuse or dependence.

Topical therapies

Topical NSAIDs allow local analgesia with less risk of systemic side effects than with oral NSAIDs, which have a limited role in the older population.

Capsaicin, which depletes substance P, has primarily been studied for neuropathic pain.

Lidocaine 5% topical patch has been found effective for postherpetic neuralgia; however, there is limited evidence for using it in other painful conditions, such as osteoarthritis and back pain.25

Adjuvants

Duloxetine is a serotonin and norepinephrine reuptake inhibitor. Studies have found it effective in treating diabetic peripheral neuropathy, fibromyalgia, chronic low back pain, and osteoarthritis knee pain. However, except for the knee study, most of the patients enrolled were younger.

Antiepileptic medications. Gabapentin and pregabalin have been found to be effective in painful neuropathic conditions that commonly occur in older adults.25

Avoid oral NSAIDs

NSAIDs, both nonselective and cyclooxygenase 2-selective, should only rarely be considered for long-term use in older adults in view of increased risk of conditions such as congestive heart failure, acute kidney injury, and gastrointestinal bleeding.25 These adverse effects seem to be related to inhibition of prostaglandin, which plays a physiologic role in the gastrointestinal, renal, and cardiovascular systems.26 Oral NSAIDs should be used with extreme caution.

 

 

OPIOIDS

The American Geriatrics Society, American Pain Society, and American Academy of Pain Medicine made recommendations in 2009 supporting the use of opioids to treat persistent pain in patients who are carefully selected and monitored.4,6 An international expert panel in 2008 issued a consensus statement27 of evidence that also supported the use of opioids for those over age 65. The Federation of State Medical Boards of the United States also supports the use of opioids, particularly for adults who have refractory pain, and it recognizes undertreatment of pain as a public health issue.28

Clinicians are most comfortable with using opioids to manage cancer pain, but these drugs also provide an acceptable and effective means of analgesia in nonmalignant, persistent pain syndromes.24 The American Geriatrics Society Panel on Pharmacological Management of Persistent Pain in Older Persons recommends treatment with opioids in all patients with moderate-to-severe pain, pain-related functional impairment, or decreased quality of life due to pain, even though the evidence base is not robust.3

Unlike NSAIDs and acetaminophen, opioids do not have a presumed ceiling effect. However, in patients ages 15 to 64, the greatest benefits have been observed at lower doses of opioids, and the risk of death increases with dose.29 The dose can be raised gradually until pain is relieved.

Start low and go slow

When starting opioid therapy:

  • Choose a short-acting agent
  • Give it on a trial basis
  • Start at a low dose and titrate up slowly.

No data are available to tell us how much to give an older adult, but a reasonable starting dose is 30% to 50% of the recommended dose for a younger adult.24 Short-acting opioids should be titrated by increasing the total daily dose by 25% to 50% every 24 hours until adequate analgesia is reached.24

Older adults who have frequent or continuous pain should receive scheduled (around-the-clock) dosing in an effort to achieve a steady state.3 The half-lives of opioids may be longer in older adults who have renal or hepatic insufficiency; therefore, their doses should be lower and the intervals between doses longer.27

When long-acting opioid preparations are used, it is important to also prescribe breakthrough (short-acting) pain management.2 Breakthrough pain includes end-of-dose failure, incident pain (ie, due to an identifiable cause, such as movement), and spontaneous pain; these can be prevented or treated with short-acting, immediate-release opioid formulations.3

Once therapy is initiated, its safety and efficacy should be continually monitored.2 With long-term use, patients should be reassessed for ongoing attainment of therapeutic goals, adverse effects, and safe and responsible medication use.3

Table 3 lists common opioids and their initial dosing.

SIDE EFFECTS

Constipation

This is one of the most common side effects of opioids,30 and although many opioid side effects wane within days of starting as tolerance develops, this one does not.

A bowel regimen should be initiated when starting any opioid regimen. Although most of the evidence for bowel regimens is anecdotal, increasing fluid and fiber intake and taking stool softeners and laxatives are effective.­31

For very difficult cases of opioid constipation, randomized trials suggest that specific agents with opioid antagonist activity that specifically target the gastrointestinal system can help.32,33 Opioid antagonists are not used as routine prophylaxis, but rather for constipation that is refractory to laxatives.34,35 A meta-analysis demonstrated that methylnaltrexone, naloxone, and alvimopan were generally well tolerated, with no significant difference in adverse effects compared with placebo.36

 

 

Sedation

Sedation due to opioids in opioid-naïve patients is well documented,37 but it decreases over time. When starting or changing the dose of opioids, it is important to counsel patients about driving and safety at work and home.

For persistent opioid-related sedation, three options are available: dose reduction, opioid rotation, and use of psychostimulants.38 Although it does not carry a US Food and Drug Administration indication for this use, methylphenidate has been studied in cancer patients, in whom it has been associated with less drowsiness, decreased pain, and less need for rescue doses of pain medications.39–41

Nausea and vomiting

Nausea and vomiting are common in opioid recipients. These adverse effects usually decrease over days to weeks with continued exposure.

A number of antiemetic therapies are available in oral, rectal, and intravenous formulations, but there is no evidence-based recommendation for antiemetic choice for opioid-induced nausea in patients with cancer.42 It is important to always rule out constipation as the cause of nausea. There is also some evidence that reducing the opioid dose or changing the route of administration may help with symptoms.42–45

Respiratory depression

Although respiratory depression is the most feared adverse effect of opioids, it is rare with low starting doses and appropriate dose titration. Sedation precedes respiratory depression, which occurs when initial opioid dosages are too high, titration is too rapid, or opioids are combined with other drugs associated with respiratory depression or that may potentiate opioid-induced respiratory depression, such as benzodiazepines.46–51

Patients with sleep apnea may be at higher risk. In addition, in a study that specifically reviewed patients who had persistent pain, specific factors that contributed to opioid-induced respiratory depression were use of methadone and transdermal fentanyl, renal impairment, and sensory deafferentation.52 Buprenorphine was found to have a ceiling effect for respiratory depression, but not for analgesia.49

Central sleep apnea

Chronic opioid use has been associated with sleep-disordered breathing, notably central sleep apnea. This is often unrecognized. The prevalence of central sleep apnea in this population is 24%.53

Although continuous positive airway pressure is the standard of care for obstructive sleep apnea, it is ineffective for central sleep apnea and possibly may make it worse. Adaptive servoventilation is a therapy that may be effective.54

Urinary retention

Opioids can cause urinary retention, which is most noted in a postoperative setting. Changes in bladder function have been found to be partially due to a peripheral opioid effect.55

Initial management: catheterize the bladder for prompt relief and try to reduce the dose of opioids.

Impaired balance and falls

Use of opioids, especially when combined with other medications active in the central nervous system, may lead to impaired balance and falls, especially in the elderly.56 In this group, all opioids are associated with falls except for buprenorphine.27,57 Older adults need to be assessed and educated about the risk of falls before they are given opioids. Physical therapy and mobility aids may help in these cases.

Dependence

The prevalence of dependence is low in patients who have no prior history of substance abuse.6 Older age is also associated with a significantly lower risk of opioid misuse and abuse.6

Opioid-induced hyperalgesia

Opioid-induced hyperalgesia should be considered if pain continues to worsen in spite of increasing doses, tolerance to opioids appears to develop rapidly, or pain becomes more diffuse and extends past the distribution of preexisting pain.58 Although the exact mechanism is unclear, exposure to opioids causes nociceptive sensitization, as measured by several techniques.59,60

Opioid-induced hyperalgesia is distinct from opioid analgesia tolerance. A key difference is that opioid tolerance can be overcome by increasing the dose, while opioid-induced hyperalgesia can be exacerbated by it.

Management of opioid-induced hyperalgesia includes decreasing the dose, switching to a different opioid, and maximizing nonopioid analgesia.58 The plan should be clearly communicated to patients and families to avoid misunderstanding.

Other adverse effects

Long-term use of opioids may suppress production of several hypothalamic, pituitary, gonadal, and adrenal hormones.3 Long-term use of opioids is also associated with bone loss.61 Opioids have also demonstrated immunodepressant effects.38,62

OPIOID ROTATION

Trying a different opioid (opioid rotation) may be required if pain remains poorly controlled despite increasing doses or if intolerable side effects occur.

According to consensus guidelines on opioid rotation,63 if the originally prescribed opioid is not providing the appropriate therapeutic effect or the patient cannot tolerate the regimen, an equianalgesic dose (Table 3) of the new opioid is calculated based on the original opioid and then decreased in two safety steps. The first safety step is a 25% to 50% reduction in the calculated equianalgesic dose to account for incomplete cross-tolerance. There are two exceptions: methadone requires a 75% to 90% reduction, and transdermal fentanyl does not require an adjustment. The next step is an adjustment of 15% to 30% based on pain severity and the patient’s medical or psychosocial aspects.63

SPECIAL POPULATION: PATIENTS WITH DEMENTIA

There is little scientific evidence on pain management in older adults with dementia. Many patients with mild to moderate dementia can reliably communicate pain verbally,64 but those who are nonverbal are more challenging, as providers must depend on caregiver reports and observational scales.65

Prescribing for patients with dementia, whether verbal or nonverbal, mirrors the strategies used in cognitively intact older adults,66 eg:

  • Use scheduled (around-the-clock) dosing
  • Start with nonopioid medications initially, but advance to opioids as needed, guided by the WHO ladder
  • Carefully weigh the risks and benefits of pain treatment against those of untreated persistent pain.

When it is uncertain whether a patient with dementia is in pain, a trial of analgesics is warranted. Signs of pain include decreased socialization, disturbed sleep, and vegetative signs.

SAFE PRESCRIBING PRACTICES

With the use of opioids to treat persistent pain comes the risk of abuse. A universal precautions approach helps establish reasonable limits before initiating therapy.

A thorough evaluation is required, including description and documentation of pain, disease processes, comorbidities, and effects on function; physical examination; and diagnostic testing. It is also important to inquire about a history of substance abuse. Tools such as the Opioid Risk Tool and the Screener and Opioid Assessment for Patients with Pain–Revised (SOAPP-R) can help gauge the risk of misuse or abuse.67,68
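As a rough illustration of how a screener’s numeric output might be translated into a monitoring plan, the sketch below buckets an Opioid Risk Tool score into the commonly cited low, moderate, and high categories. The cut points (0–3, 4–7, 8 or more) and the example monitoring actions are assumptions added for illustration; they are not taken from this article, and the original instruments should be consulted before use.

```python
def opioid_risk_category(ort_score):
    """Map an Opioid Risk Tool score to a risk category.

    Assumed cut points (0-3 low, 4-7 moderate, >=8 high) are the commonly cited
    thresholds for this tool; verify against the original instrument before use.
    """
    if ort_score <= 3:
        return "low"
    if ort_score <= 7:
        return "moderate"
    return "high"


# Hypothetical mapping from category to monitoring intensity (illustrative only).
monitoring_plan = {
    "low": "routine follow-up and periodic prescription-monitoring-program checks",
    "moderate": "more frequent visits and random urine drug testing",
    "high": "consider specialty referral, pill counts, and a treatment agreement",
}

score = 5  # made-up score for a hypothetical patient
category = opioid_risk_category(score)
print(category, "->", monitoring_plan[category])
```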

Ongoing screening and monitoring are necessary to minimize misuse and diversion. This includes adhering to federal and state regulatory policies and participating in state prescription drug monitoring programs.69

References
  1. Chou R, Fanciullo GJ, Fine PG, et al; American Pain Society-American Academy of Pain Medicine Opioids Guidelines Panel. Clinical guidelines for the use of chronic opioid therapy in chronic noncancer pain. J Pain 2009; 10:113–130.
  2. West NA, Severtson SG, Green JL, Dart RC. Trends in abuse and misuse of prescription opioids among older adults. Drug Alcohol Depend 2015; 149:117–121.
  3. American Geriatrics Society Panel on Pharmacological Management of Persistent Pain in Older Persons. Pharmacological management of persistent pain in older persons. J Am Geriatr Soc 2009; 57:1331–1346.
  4. Weiner DK, Herr K. Comprehensive interdisciplinary assessment and treatment planning: an integrative overview. In: Weiner DK, Herr K, Rudy TE, editors. Persistent pain in older adults: an interdisciplinary guide for treatment. New York, NY: Springer Publishing Company; 2002.
  5. He W, Sengupta M, Velkoff V; US Census Bureau. 65+ in the United States: 2005. Washington, DC: US Government Printing Office; 2005. www.census.gov/prod/2006pubs/p23-209.pdf. Accessed March 30, 2016.
  6. American Geriatrics Society Panel on the Pharmacological Management of Persistent Pain in Older Persons. Pharmacological management of persistent pain in older persons. Pain Med 2009; 10:1062–1083.
  7. Thomas E, Peat G, Harris L, Wilkie R, Croft PR. The prevalence of pain and pain interference in a general population of older adults: cross-sectional findings from the North Staffordshire Osteoarthritis Project (NorStOP). Pain 2004; 110:361–368.
  8. Caraceni A, Hanks G, Kaasa S, et al; European Palliative Care Research Collaborative (EPCRC); European Association for Palliative Care (EAPC). Use of opioid analgesics in the treatment of cancer pain: evidence-based recommendations from the EAPC. Lancet Oncol 2012; 13:e58–e68.
  9. Solano JP, Gomes B, Higginson IJ. A comparison of symptom prevalence in far advanced cancer, AIDS, heart disease, chronic obstructive pulmonary disease and renal disease. J Pain Symptom Manage 2006; 31:58–69.
  10. Ferrell BA, Ferrell BR, Osterweil D. Pain in the nursing home. J Am Geriatr Soc 1990; 38:409–414.
  11. Ferrell BA, Ferrell BR, Rivera L. Pain in cognitively impaired nursing home patients. J Pain Symptom Manage 1995; 10:591–598.
  12. Fox PL, Raina P, Jadad AR. Prevalence and treatment of pain in older adults in nursing homes and other long-term care institutions: a systematic review. CMAJ 1999; 160:329–333.
  13. Stewart C, Leveille SG, Shmerling RH, Samelson EJ, Bean JF, Schofield P. Management of persistent pain in older adults: the MOBILIZE Boston Study. J Am Geriatr Soc 2012; 60:2081–2086.
  14. Woo J, Ho SC, Lau J, Leung PC. Musculoskeletal complaints and associated consequences in elderly Chinese aged 70 years and over. J Rheumatol 1994; 21:1927–1931.
  15. Pahor M, Guralnik JM, Wan JY, et al. Lower body osteoarticular pain and dose of analgesic medications in older disabled women: the Women’s Health and Aging Study. Am J Public Health 1999; 89:930–934.
  16. Marzinski LR. The tragedy of dementia: clinically assessing pain in the confused nonverbal elderly. J Gerontol Nurs 1991; 17:25–28.
  17. Roy R, Thomas M. A survey of chronic pain in an elderly population. Can Fam Physician 1986; 32:513–516.
  18. AGS Panel on Persistent Pain in Older Persons. The management of persistent pain in older persons. J Am Geriatr Soc 2002; 50(suppl 6): S205–S224.
  19. Stanos S, Houle TT. Multidisciplinary and interdisciplinary management of chronic pain. Phys Med Rehabil Clin N Am 2006; 17:435–450.
  20. Helme RD, Katz B, Gibson SJ, et al. Multidisciplinary pain clinics for older people. Do they have a role? Clin Geriatr Med 1996; 12:563–582.
  21. Harris DG. Management of pain in advanced disease. Br Med Bull 2014; 110:117–128.
  22. Raffa RB, Pergolizzi JV. A modern analgesics pain ‘pyramid’. J Clin Pharm Ther 2014; 39:4–6.
  23. Fine PG, Herr KA. Pharmacologic management of persistent pain in older persons. Clin Geriatr 2009; 17:25–32.
  24. Tracy B, Sean Morrison R. Pain management in older adults. Clin Ther 2013; 35:1659–1668.
  25. Malec M, Shega JW. Pain management in the elderly. Med Clin North Am 2015; 99:337–350.
  26. Abdulla A, Adams N, Bone M, et al; British Geriatric Society. Guidance on the management of pain in older people. Age Ageing 2013; 42(suppl 1):i1–i57.
  27. Pergolizzi J, Böger RH, Budd K, et al. Opioids and the management of chronic severe pain in the elderly: consensus statement of an International Expert Panel with focus on the six clinically most often used World Health Organization Step III opioids (buprenorphine, fentanyl, hydromorphone, methadone, morphine, oxycodone). Pain Pract 2008; 8:287–313.
  28. Gloth FM 3rd. Pharmacological management of persistent pain in older persons: focus on opioids and nonopioids. J Pain 2011; 12(suppl 1):S14–S20.
  29. Gomes T, Mamdani MM, Dhalla IA, Paterson JM, Juurlink DN. Opioid dose and drug-related mortality in patients with nonmalignant pain. Arch Intern Med 2011; 171:686–691.
  30. Moore RA, McQuay HJ. Prevalence of opioid adverse events in chronic non-malignant pain: systematic review of randomised trials of oral opioids. Arthritis Res Ther 2005; 7:R1046–R1051.
  31. Candy B, Jones L, Larkin PJ, Vickerstaff V, Tookman A, Stone P. Laxatives for the management of constipation in people receiving palliative care. Cochrane Database Syst Rev 2015; 5:CD003448.
  32. Webster LR, Butera PG, Moran LV, Wu N, Burns LH, Friedmann N. Oxytrex minimizes physical dependence while providing effective analgesia: a randomized controlled trial in low back pain. J Pain 2006; 7:937–946.
  33. Paulson DM, Kennedy DT, Donovick RA, et al. Alvimopan: an oral, peripherally acting, mu-opioid receptor antagonist for the treatment of opioid-induced bowel dysfunction—a 21-day treatment-randomized clinical trial. J Pain 2005; 6:184–192.
  34. Nalamachu SR, Pergolizzi J, Taylor R, et al. Efficacy and tolerability of subcutaneous methylnaltrexone in patients with advanced illness and opioid-induced constipation: a responder analysis of 2 randomized, placebo-controlled trials. Pain Pract 2015; 15:564–571.
  35. Brick N. Laxatives or methylnaltrexone for the management of constipation in palliative care patients. Clin J Oncol Nurs 2013; 17:91–92.
  36. Ford AC, Brenner DM, Schoenfeld PS. Efficacy of pharmacological therapies for the treatment of opioid-induced constipation: systematic review and meta-analysis. Am J Gastroenterol 2013; 108:1566–1575.
  37. Byas-Smith MG, Chapman SL, Reed B, Cotsonis G. The effect of opioids on driving and psychomotor performance in patients with chronic pain. Clin J Pain 2005; 21:345–352.
  38. Benyamin R, Trescot AM, Datta S, et al. Opioid complications and side effects. Pain Physician 2008; 11(suppl 2):S105–S120.
  39. Wilwerding MB, Loprinzi CL, Mailliard JA, et al. A randomized, crossover evaluation of methylphenidate in cancer patients receiving strong narcotics. Support Care Cancer 1995; 3:135–138.
  40. Bruera E, Miller MJ, Macmillan K, Kuehn N. Neuropsychological effects of methylphenidate in patients receiving a continuous infusion of narcotics for cancer pain. Pain 1992; 48:163–166.
  41. Ahmedzai S. New approaches to pain control in patients with cancer. Eur J Cancer 1997; 33:S8–S14.
  42. Laugsand EA, Kaasa S, Klepstad P. Management of opioid-induced nausea and vomiting in cancer patients: systematic review and evidence-based recommendations. Palliat Med 2011; 25:442–453.
  43. Hardy J, Daly S, McQuade B, et al. A double-blind, randomised, parallel group, multinational, multicentre study comparing a single dose of ondansetron 24 mg p.o. with placebo and metoclopramide 10 mg t.d.s. p.o. in the treatment of opioid-induced nausea and emesis in cancer patients. Support Care Cancer 2002; 10:231–236.
  44. Apfel CC, Jalota L. Can central antiemetic effects of opioids counter-balance opioid-induced nausea and vomiting? Acta Anaesthesiol Scand 2010; 54:129–131.
  45. Okamoto Y, Tsuneto S, Matsuda Y, et al. A retrospective chart review of the antiemetic effectiveness of risperidone in refractory opioid-induced nausea and vomiting in advanced cancer patients. J Pain Symptom Manage 2007; 34:217–222.
  46. Overdyk F, Dahan A, Roozekrans M, van der Schrier R, Aarts L, Niesters M. Opioid-induced respiratory depression in the acute care setting: a compendium of case reports. Pain Manag 2014; 4:317–325.
  47. Niesters M, Overdyk F, Smith T, Aarts L, Dahan A. Buprenorphine-induced respiratory depression and involvement of ABCB1 SNPs in opioid-induced respiratory depression in paediatrics. Br J Anaesth 2013; 110:842–843.
  48. Niesters M, Mahajan RP, Aarts L, Dahan A. High-inspired oxygen concentration further impairs opioid-induced respiratory depression. Br J Anaesth 2013; 110:837–841.
  49. Dahan A, Yassen A, Romberg R, et al. Buprenorphine induces ceiling in respiratory depression but not in analgesia. Br J Anaesth 2006; 96:627–632.
  50. van Dorp E, Yassen A, Sarton E, et al. Naloxone reversal of buprenorphine-induced respiratory depression. Anesthesiology 2006; 105:51–57.
  51. Macintyre PE, Loadsman JA, Scott DA. Opioids, ventilation and acute pain management. Anaesth Intensive Care 2011; 39:545–558.
  52. Dahan A, Overdyk F, Smith T, Aarts L, Niesters M. Pharmacovigilance: a review of opioid-induced respiratory depression in chronic pain patients. Pain Physician 2013; 16:E85–E94.
  53. Correa D, Farney RJ, Chung F, Prasad A, Lam D, Wong J. Chronic opioid use and central sleep apnea: a review of the prevalence, mechanisms, and perioperative considerations. Anesth Analg 2015; 120:1273–1285.
  54. Randerath WJ, George S. Opioid-induced sleep apnea: is it a real problem? J Clin Sleep Med 2012; 8:577–578.
  55. Rosow CE, Gomery P, Chen TY, Stefanovich P, Stambler N, Israel R. Reversal of opioid-induced bladder dysfunction by intravenous naloxone and methylnaltrexone. Clin Pharmacol Ther 2007; 82:48–53.
  56. Weiner DK, Hanlon JT, Studenski SA. Effects of central nervous system polypharmacy on falls liability in community-dwelling elderly. Gerontology 1998; 44:217–221.
  57. Wolff ML, Kewley R, Hassett M, Collins J, Brodeur MR, Nokes S. Falls in skilled nursing facilities associated with opioid use. J Am Geriatr Soc 2012; 60:987.
  58. Zylicz Z, Twycross R. Opioid-induced hyperalgesia may be more frequent than previously thought. J Clin Oncol 2008; 26:1564; author reply 1565.
  59. Lee M, Silverman SM, Hansen H, Patel VB, Manchikanti L. A comprehensive review of opioid-induced hyperalgesia. Pain Physician 2011; 14:145–161.
  60. Chen L, Sein M, Vo T, et al. Clinical interpretation of opioid tolerance versus opioid-induced hyperalgesia. J Opioid Manag 2014; 10:383–393.
  61. Vestergaard P, Hermann P, Jensen JE, Eiken P, Mosekilde L. Effects of paracetamol, non-steroidal anti-inflammatory drugs, acetylsalicylic acid, and opioids on bone mineral density and risk of fracture: results of the Danish Osteoporosis Prevention Study (DOPS). Osteoporos Int 2012; 23:1255–1265.
  62. Sacerdote P, Franchi S, Panerai AE. Non-analgesic effects of opioids: mechanisms and potential clinical relevance of opioid-induced immunodepression. Curr Pharm Des 2012; 18:6034–6042.
  63. Fine PG, Portenoy RK; Ad Hoc Expert Panel on Evidence Review and Guidelines for Opioid Rotation. Establishing “best practices” for opioid rotation: conclusions of an expert panel. J Pain Symptom Manage 2009; 38:418–425.
  64. Chibnall JT, Tait RC. Pain assessment in cognitively impaired and unimpaired older adults: a comparison of four scales. Pain 2001; 92:173–186.
  65. Andrade DC, Faria JW, Caramelli P, et al. The assessment and management of pain in the demented and non-demented elderly patient. Arq Neuropsiquiatr 2011; 69:387–394.
  66. Scherder E, Herr K, Pickering G, Gibson S, Benedetti F, Lautenbacher S. Pain in dementia. Pain 2009; 145:276–278.
  67. Chou R, Fanciullo GJ, Fine PG, Miaskowski C, Passik SD, Portenoy RK. Opioids for chronic noncancer pain: prediction and identification of aberrant drug-related behaviors: a review of the evidence for an American Pain Society and American Academy of Pain Medicine clinical practice guideline. J Pain 2009; 10:131–146.
  68. Butler SF, Budman SH, Fernandez KC, Fanciullo GJ, Jamison RN. Cross-validation of a screener to predict opioid misuse in chronic pain patients (SOAPP-R). J Addict Med 2009; 3:66–73.
  69. de Leon-Casasola OA. Opioids for chronic pain: new evidence, new strategies, safe prescribing. Am J Med 2013; 126(suppl 1):S3–S11.
  70. CDC guideline for prescribing opioids for chronic pain—United States, 2016. MMWR Recomm Rep 2016 Mar 18; 65(1):1–49.
Issue
Cleveland Clinic Journal of Medicine - 83(6)
Page Number
443-451
Display Headline
Opioids for persistent pain in older adults
Legacy Keywords
Opioids, chronic pain, persistent pain, noncancer pain, marissa galicia-castillo

KEY POINTS

  • Treatment of persistent pain in older adults presents several challenges.
  • Often, persistent pain is underrecognized and undertreated, impairing function and reducing quality of life.
  • A combination of pharmacologic and nonpharmacologic strategies is needed to address the multiple factors contributing to pain and manage it effectively.
  • The World Health Organization’s three-step ladder is valuable for treating persistent pain in older adults.
  • Although nonopioids are the first-line treatment for persistent pain, opioids also play an important role in safe and effective pain management in older adults.