Infective endocarditis: Beyond the usual tests
Prompt diagnosis of infective endocarditis is critical. Potential consequences of missed or delayed diagnosis, including heart failure, stroke, intracardiac abscess, conduction delays, prosthesis dysfunction, and cerebral emboli, are often catastrophic. Echocardiography is the test used most frequently to evaluate for infective endocarditis, but it misses the diagnosis in almost one-third of cases, and even more often if the patient has a prosthetic valve.
Several sophisticated imaging tests are now available that complement echocardiography in diagnosing and assessing infective endocarditis; these include 4-dimensional computed tomography (4D CT), fluorodeoxyglucose positron emission tomography (FDG-PET), and leukocyte scintigraphy. These tests have greatly improved our ability not only to diagnose infective endocarditis, but also to determine the extent and spread of infection, and they aid in perioperative assessment. Abnormal findings on these tests have been incorporated into the European Society of Cardiology’s 2015 modified diagnostic criteria for infective endocarditis.1
This article details the indications, advantages, and limitations of the various imaging tests for diagnosing and evaluating infective endocarditis (Table 1).
INFECTIVE ENDOCARDITIS IS DIFFICULT TO DIAGNOSE AND TREAT
Infective endocarditis is difficult to diagnose and treat. Clinical and imaging clues can be subtle, and the diagnosis requires a high level of suspicion and visualization of cardiac structures.
Further, the incidence of infective endocarditis is on the rise in the United States, particularly in women and young adults, likely due to intravenous drug use.2,3
ECHOCARDIOGRAPHY HAS AN IMPORTANT ROLE, BUT IS LIMITED
Echocardiography remains the most commonly performed study for diagnosing infective endocarditis, as it is fast, widely accessible, and less expensive than other imaging tests.
Transthoracic echocardiography (TTE) is often the first test performed. However, its sensitivity is only about 70% for detecting vegetations on native valves and 50% for detecting vegetations on prosthetic valves.1 It is inherently constrained by the limited number of external acoustic windows through which the heart can be comprehensively evaluated. Using a 2-dimensional instrument to visualize a 3-dimensional object is difficult, and depending on several factors, the vegetations and abscesses associated with infective endocarditis can be hard to see. Further, TTE is impeded by obesity and by hyperinflated lungs from obstructive pulmonary disease or mechanical ventilation. It has poor sensitivity for detecting small vegetations, and for detecting vegetations and paravalvular complications in patients who have a prosthetic valve or a cardiac implanted electronic device.
Transesophageal echocardiography (TEE) is the recommended first-line imaging test for patients with prosthetic valves and no contraindications to the test. Otherwise, it should be done after TTE if the results of TTE are negative but clinical suspicion for infective endocarditis remains high (eg, because the patient uses intravenous drugs). Although TEE has higher sensitivity than TTE (up to 96% for vegetations on native valves and 92% for those on prosthetic valves when performed by an experienced sonographer), it can still miss infective endocarditis. Also, TEE does not provide a significant advantage over TTE in patients who have a cardiac implanted electronic device.1,4,5
Even in combination, TTE and TEE are estimated to miss up to 30% of cases of infective endocarditis and its sequelae.4 False-negative findings are more likely in patients who have preexisting severe valvular lesions, prosthetic valves, cardiac implanted electronic devices, small vegetations, or abscesses, or if a vegetation has already broken free and embolized. Furthermore, distinguishing vegetations from thrombi, cardiac tumors, and myxomatous changes by echocardiography is difficult.
CARDIAC CT
For patients who have inconclusive results on echocardiography, contraindications to TEE, or poor sonic windows, cardiac CT can be an excellent alternative. It is especially useful in the setting of a prosthetic valve.
Synchronized (“gated”) with the patient’s heart rate and rhythm, CT machines can acquire images during diastole, reducing motion artifact, and can create 3D images of the heart. In addition, newer machines can acquire several images at different points in the heart cycle to add a fourth dimension—time. The resulting 4D images play like short video loops of the beating heart and allow noninvasive assessment of cardiac anatomy with remarkable detail and resolution.
4D CT is increasingly being used in infective endocarditis, and growing evidence indicates that its accuracy is similar to that of TEE in the preoperative evaluation of patients with aortic prosthetic valve endocarditis.6 In a study of 28 patients, complementary use of CT angiography led to a change in treatment strategy in 7 (25%) compared with routine clinical workup.7 Several studies have found no difference between 4D CT and preoperative TEE in detecting pseudoaneurysm, abscess, or valve dehiscence. TEE and 4D CT also have similar sensitivities for detecting infective endocarditis in native and prosthetic valves.8,9
Coupled with CT angiography, 4D CT is also an excellent noninvasive way to perioperatively evaluate the coronary arteries without the risks associated with catheterization in those requiring nonemergency surgery (Figure 1A, B, and C).
4D CT performs well for detecting abscess and pseudoaneurysm but has slightly lower sensitivity for vegetations than TEE (91% vs 99%).9
Gated CT, PET, or both may be useful in cases of suspected prosthetic aortic valve endocarditis when TEE is negative. Pseudoaneurysms are not well visualized with TEE, and in aortic prosthetic valve infective endocarditis without definite abscess, the aortomitral curtain area often appears merely thickened on TEE. Gated CT and PET show this area better.8 This information is important when a surgeon may be unconvinced that the patient has prosthetic valve endocarditis.
Limitations of 4D cardiac CT
4D CT with or without angiography has limitations. It requires a wide-volume scanner and an experienced reader.
Patients with irregular heart rhythms or uncontrolled tachycardia pose technical problems for image acquisition. Cardiac CT is typically gated (ie, images are obtained within a defined time window), ideally in mid to late diastole, when cardiac motion, and thus motion artifact, is minimal. To time acquisition accurately, the cardiac cycle must be predictable and its duration as long as possible. Tachycardia and irregular rhythms such as frequent ectopic beats or atrial fibrillation make this timing difficult, rendering it nearly impossible to obtain images when the heart is at minimum motion and limiting assessment of cardiac structures and the coronary tree.4,10
Extensive coronary calcification can hinder assessment of the coronary tree by CT coronary angiography.
Contrast exposure may limit the use of CT in some patients (eg, those with contrast allergies or renal dysfunction). However, modern scanners allow for much smaller contrast boluses without decreasing sensitivity.
4D CT involves radiation exposure, especially when done with angiography, although modern scanners have greatly reduced exposure. The average radiation dose in CT coronary angiography is 2.9 to 5.9 mSv11 compared with 7 mSv in diagnostic cardiac catheterization (without angioplasty or stenting) or 16 mSv in routine CT of the abdomen and pelvis with contrast.12,13 In view of the morbidity and mortality risks associated with infective endocarditis, especially if the diagnosis is delayed, this small radiation exposure may be justifiable.
Bottom line for cardiac CT
4D CT is an excellent alternative to echocardiography for select patients. Clinicians should strongly consider this study in the following situations:
- Patients with a prosthetic valve
- Patients who are strongly suspected of having infective endocarditis but who have a poor sonic window on TTE or TEE, as can occur with chronic obstructive lung disease, morbid obesity, or previous thoracic or cardiovascular surgery
- Patients who meet clinical indications for TEE, such as having a prosthetic valve or a high suspicion for native valve infective endocarditis with negative TTE, but who have contraindications to TEE
- As an alternative to TEE for preoperative evaluation in patients with known infective endocarditis.
Patients with tachycardia or irregular heart rhythms are not good candidates for this test.
FDG-PET AND LEUKOCYTE SCINTIGRAPHY
FDG-PET and leukocyte scintigraphy are other options for diagnosing infective endocarditis and determining the presence and extent of intra- and extracardiac infection. They are more sensitive than echocardiography for detecting infection of cardiac implanted electronic devices such as ventricular assist devices, pacemakers, implanted cardiac defibrillators, and cardiac resynchronization therapy devices.14–16
FDG-PET relies on cellular uptake of 18F-fluorodeoxyglucose, which is higher in cells with greater metabolic activity, such as those in areas of inflammation. Similarly, leukocyte scintigraphy localizes inflamed tissue with radiolabeled leukocytes, ie, leukocytes extracted from the patient, labeled, and reintroduced.
The most significant contribution of FDG-PET may be the ability to detect infective endocarditis early, when echocardiography is initially negative. When abnormal FDG uptake was included in the modified Duke criteria, it increased the sensitivity to 97% for detecting infective endocarditis on admission, leading some to propose its incorporation as a major criterion.17 In patients with prosthetic valves and suspected infective endocarditis, FDG-PET was found in one study to have a sensitivity of up to 91% and a specificity of up to 95%.18
Both FDG-PET and leukocyte scintigraphy have a high sensitivity, specificity, and negative predictive value for cardiac implanted electronic device infection, and should be strongly considered in patients in whom it is suspected but who have negative or inconclusive findings on echocardiography.14,15
In addition, a common conundrum faced by clinicians with use of echocardiography is the difficulty of differentiating thrombus from infected vegetation on valves or device lead wires. Some evidence indicates that FDG-PET may help to discriminate between vegetation and thrombus, although more rigorous studies are needed before its use for that purpose can be recommended.19
Limitations of nuclear studies
Both FDG-PET and leukocyte scintigraphy perform poorly for detecting native-valve infective endocarditis. In a study in which 90% of the patients had native-valve infective endocarditis according to the Duke criteria, FDG-PET had a specificity of 93% but a sensitivity of only 39%.20
Both studies can be cumbersome and time-consuming for patients. FDG-PET requires a fasting or glucose-restricted diet before testing, and the test itself can rarely be complicated by development of hyperglycemia.
Although FDG-PET is most effective in detecting infections of prosthetic valves and cardiac implanted electronic devices, results can be falsely positive in patients with recent cardiac surgery (due to ongoing tissue healing) and in inflammatory conditions other than infective endocarditis, such as vasculitis or malignancy. Similarly, for unclear reasons, leukocyte scintigraphy can yield false-negative results in patients with enterococcal or candidal infective endocarditis.21
FDG-PET and leukocyte scintigraphy are more expensive than TEE and cardiac CT22 and are not widely available.
Both tests entail radiation exposure, with the average dose ranging from 7 to 14 mSv. However, this is less than the average amount acquired during percutaneous coronary intervention (16 mSv), and overlaps with the amount in chest CT with contrast when assessing for pulmonary embolism (7 to 9 mSv). Lower doses are possible with optimized protocols.12,13,15,23
Bottom line for nuclear studies
FDG-PET and leukocyte scintigraphy are especially useful for patients with a prosthetic valve or cardiac implanted electronic device. However, limitations must be kept in mind.
A suggested algorithm for testing with nuclear imaging is shown in Figure 2.1,4
CEREBRAL MAGNETIC RESONANCE IMAGING
Cerebral magnetic resonance imaging (MRI) is more sensitive than cerebral CT for detecting emboli in the brain. According to American Heart Association guidelines, cerebral MRI should be done in patients with known or suspected infective endocarditis and neurologic impairment, defined as headaches, meningeal symptoms, or neurologic deficits. It is also often used in neurologically asymptomatic patients with infective endocarditis who have indications for valve surgery to assess for mycotic aneurysms, which are associated with increased intracranial bleeding during surgery.
MRI use in other asymptomatic patients remains controversial.24 In cases with high clinical suspicion for infective endocarditis and no findings on echocardiography, cerebral MRI can increase the sensitivity of the Duke criteria by adding a minor criterion. Some have argued that, in patients with definite infective endocarditis, detecting silent cerebral complications can lead to management changes. However, more studies are needed to determine if there is indeed a group of neurologically asymptomatic infective endocarditis patients for whom cerebral MRI leads to improved outcomes.
Limitations of cerebral MRI
Cerebral MRI cannot be used in patients with non-MRI-compatible implanted hardware.
Gadolinium, the contrast agent typically used, can cause nephrogenic systemic fibrosis in patients with poor renal function. This rare but serious adverse effect is characterized by irreversible systemic fibrosis affecting the skin, muscles, and even visceral tissue such as the lungs. The American College of Radiology allows gadolinium use in patients without acute kidney injury and in patients with stable chronic kidney disease and a glomerular filtration rate of at least 30 mL/min/1.73 m2. It should be avoided in patients with renal failure on replacement therapy, advanced chronic kidney disease (glomerular filtration rate < 30 mL/min/1.73 m2), or acute kidney injury, even if they do not need renal replacement therapy.25
Concerns have also been raised about gadolinium retention in the brain, even in patients with normal renal function.26–28 Thus far, no conclusive clinical adverse effects of retention have been found, although more study is warranted. Nevertheless, the US Food and Drug Administration now requires a black-box warning about this possibility and advises clinicians to counsel patients appropriately.
Bottom line on cerebral MRI
Cerebral MRI should be obtained when a patient presents with definite or possible infective endocarditis with neurologic impairment, such as new headaches, meningismus, or focal neurologic deficits. Routine brain MRI in patients with confirmed infective endocarditis without neurologic symptoms, or those without definite infective endocarditis, is discouraged.
CARDIAC MRI
Cardiac MRI, typically obtained with gadolinium contrast, allows for better 3D assessment of cardiac structures and morphology than echocardiography or CT, and can detect infiltrative cardiac disease, myopericarditis, and other conditions. It is increasingly used in structural cardiology, but its role in evaluating infective endocarditis remains unclear.
Cardiac MRI does not appear to be better than echocardiography for diagnosing infective endocarditis. However, it may prove helpful in evaluating patients known to have infective endocarditis who cannot be properly assessed for disease extent because of poor image quality on echocardiography and contraindications to CT.1,29 Its role is limited in patients with cardiac implanted electronic devices, as most devices are incompatible with MRI, although newer devices obviate this concern. Even with MRI-compatible devices, image quality is diminished by an eclipsing artifact, in which the bright signal from device components obscures the surrounding structures.4
Concerns regarding the use of gadolinium, as described above, must also be considered.
The role of cardiac MRI in diagnosing and managing infective endocarditis may evolve, but at present, the 2017 American College of Cardiology and American Heart Association appropriate-use criteria discourage its use for these purposes.16
Bottom line for cardiac MRI
Cardiac MRI to evaluate a patient for suspected infective endocarditis is not recommended due to lack of superiority compared with echocardiography or CT, and the risk of nephrogenic systemic fibrosis from gadolinium in patients with renal compromise.
REFERENCES
1. Habib G, Lancellotti P, Antunes MJ, et al; ESC Scientific Document Group. 2015 ESC guidelines for the management of infective endocarditis: the Task Force for the Management of Infective Endocarditis of the European Society of Cardiology (ESC). Endorsed by: European Association for Cardio-Thoracic Surgery (EACTS), the European Association of Nuclear Medicine (EANM). Eur Heart J 2015; 36(44):3075–3128. doi:10.1093/eurheartj/ehv319
2. Durante-Mangoni E, Bradley S, Selton-Suty C, et al; International Collaboration on Endocarditis Prospective Cohort Study Group. Current features of infective endocarditis in elderly patients: results of the International Collaboration on Endocarditis Prospective Cohort Study. Arch Intern Med 2008; 168(19):2095–2103. doi:10.1001/archinte.168.19.2095
3. Wurcel AG, Anderson JE, Chui KK, et al. Increasing infectious endocarditis admissions among young people who inject drugs. Open Forum Infect Dis 2016; 3(3):ofw157. doi:10.1093/ofid/ofw157
4. Gomes A, Glaudemans AW, Touw DJ, et al. Diagnostic value of imaging in infective endocarditis: a systematic review. Lancet Infect Dis 2017; 17(1):e1–e14. doi:10.1016/S1473-3099(16)30141-4
5. Cahill TJ, Baddour LM, Habib G, et al. Challenges in infective endocarditis. J Am Coll Cardiol 2017; 69(3):325–344. doi:10.1016/j.jacc.2016.10.066
6. Fagman E, Perrotta S, Bech-Hanssen O, et al. ECG-gated computed tomography: a new role for patients with suspected aortic prosthetic valve endocarditis. Eur Radiol 2012; 22(11):2407–2414. doi:10.1007/s00330-012-2491-5
7. Habets J, Tanis W, van Herwerden LA, et al. Cardiac computed tomography angiography results in diagnostic and therapeutic change in prosthetic heart valve endocarditis. Int J Cardiovasc Imaging 2014; 30(2):377–387. doi:10.1007/s10554-013-0335-2
8. Koneru S, Huang SS, Oldan J, et al. Role of preoperative cardiac CT in the evaluation of infective endocarditis: comparison with transesophageal echocardiography and surgical findings. Cardiovasc Diagn Ther 2018; 8(4):439–449. doi:10.21037/cdt.2018.07.07
9. Koo HJ, Yang DH, Kang J, et al. Demonstration of infective endocarditis by cardiac CT and transoesophageal echocardiography: comparison with intra-operative findings. Eur Heart J Cardiovasc Imaging 2018; 19(2):199–207. doi:10.1093/ehjci/jex010
10. Feuchtner GM, Stolzmann P, Dichtl W, et al. Multislice computed tomography in infective endocarditis: comparison with transesophageal echocardiography and intraoperative findings. J Am Coll Cardiol 2009; 53(5):436–444. doi:10.1016/j.jacc.2008.01.077
11. Castellano IA, Nicol ED, Bull RK, Roobottom CA, Williams MC, Harden SP. A prospective national survey of coronary CT angiography radiation doses in the United Kingdom. J Cardiovasc Comput Tomogr 2017; 11(4):268–273. doi:10.1016/j.jcct.2017.05.002
12. Mettler FA Jr, Huda W, Yoshizumi TT, Mahesh M. Effective doses in radiology and diagnostic nuclear medicine: a catalog. Radiology 2008; 248(1):254–263. doi:10.1148/radiol.2481071451
13. Smith-Bindman R, Lipson J, Marcus R, et al. Radiation dose associated with common computed tomography examinations and the associated lifetime attributable risk of cancer. Arch Intern Med 2009; 169(22):2078–2086. doi:10.1001/archinternmed.2009.427
14. Ploux S, Riviere A, Amraoui S, et al. Positron emission tomography in patients with suspected pacing system infections may play a critical role in difficult cases. Heart Rhythm 2011; 8(9):1478–1481. doi:10.1016/j.hrthm.2011.03.062
15. Sarrazin J, Philippon F, Tessier M, et al. Usefulness of fluorine-18 positron emission tomography/computed tomography for identification of cardiovascular implantable electronic device infections. J Am Coll Cardiol 2012; 59(18):1616–1625. doi:10.1016/j.jacc.2011.11.059
16. Doherty JU, Kort S, Mehran R, Schoenhagen P, Soman P; Rating Panel Members; Appropriate Use Criteria Task Force. ACC/AATS/AHA/ASE/ASNC/HRS/SCAI/SCCT/SCMR/STS 2017 Appropriate use criteria for multimodality imaging in valvular heart disease: a report of the American College of Cardiology Appropriate Use Criteria Task Force, American Association for Thoracic Surgery, American Heart Association, American Society of Echocardiography, American Society of Nuclear Cardiology, Heart Rhythm Society, Society for Cardiovascular Angiography and Interventions, Society of Cardiovascular Computed Tomography, Society for Cardiovascular Magnetic Resonance, and Society of Thoracic Surgeons. J Nucl Cardiol 2017; 24(6):2043–2063. doi:10.1007/s12350-017-1070-1
17. Saby L, Laas O, Habib G, et al. Positron emission tomography/computed tomography for diagnosis of prosthetic valve endocarditis: increased valvular 18F-fluorodeoxyglucose uptake as a novel major criterion. J Am Coll Cardiol 2013; 61(23):2374–2382. doi:10.1016/j.jacc.2013.01.092
18. Swart LE, Gomes A, Scholtens AM, et al. Improving the diagnostic performance of 18F-fluorodeoxyglucose positron-emission tomography/computed tomography in prosthetic heart valve endocarditis. Circulation 2018; 138(14):1412–1427. doi:10.1161/CIRCULATIONAHA.118.035032
19. Graziosi M, Nanni C, Lorenzini M, et al. Role of 18F-FDG PET/CT in the diagnosis of infective endocarditis in patients with an implanted cardiac device: a prospective study. Eur J Nucl Med Mol Imaging 2014; 41(8):1617–1623. doi:10.1007/s00259-014-2773-z
20. Kouijzer IJ, Vos FJ, Janssen MJ, van Dijk AP, Oyen WJ, Bleeker-Rovers CP. The value of 18F-FDG PET/CT in diagnosing infectious endocarditis. Eur J Nucl Med Mol Imaging 2013; 40(7):1102–1107. doi:10.1007/s00259-013-2376-0
21. Wong D, Rubinshtein R, Keynan Y. Alternative cardiac imaging modalities to echocardiography for the diagnosis of infective endocarditis. Am J Cardiol 2016; 118(9):1410–1418. doi:10.1016/j.amjcard.2016.07.053
22. Vos FJ, Bleeker-Rovers CP, Kullberg BJ, Adang EM, Oyen WJ. Cost-effectiveness of routine (18)F-FDG PET/CT in high-risk patients with gram-positive bacteremia. J Nucl Med 2011; 52(11):1673–1678. doi:10.2967/jnumed.111.089714
23. McCollough CH, Bushberg JT, Fletcher JG, Eckel LJ. Answers to common questions about the use and safety of CT scans. Mayo Clin Proc 2015; 90(10):1380–1392. doi:10.1016/j.mayocp.2015.07.011
24. Duval X, Iung B, Klein I, et al; IMAGE (Resonance Magnetic Imaging at the Acute Phase of Endocarditis) Study Group. Effect of early cerebral magnetic resonance imaging on clinical decisions in infective endocarditis: a prospective study. Ann Intern Med 2010; 152(8):497–504, W175. doi:10.7326/0003-4819-152-8-201004200-00006
25. ACR Committee on Drugs and Contrast Media. ACR Manual on Contrast Media: 2018. www.acr.org/-/media/ACR/Files/Clinical-Resources/Contrast_Media.pdf. Accessed July 19, 2019.
26. Kanda T, Fukusato T, Matsuda M, et al. Gadolinium-based contrast agent accumulates in the brain even in subjects without severe renal dysfunction: evaluation of autopsy brain specimens with inductively coupled plasma mass spectroscopy. Radiology 2015; 276(1):228–232. doi:10.1148/radiol.2015142690
27. McDonald RJ, McDonald JS, Kallmes DF, et al. Intracranial gadolinium deposition after contrast-enhanced MR imaging. Radiology 2015; 275(3):772–782. doi:10.1148/radiol.15150025
28. Kanda T, Ishii K, Kawaguchi H, Kitajima K, Takenaka D. High signal intensity in the dentate nucleus and globus pallidus on unenhanced T1-weighted MR images: relationship with increasing cumulative dose of a gadolinium-based contrast material. Radiology 2014; 270(3):834–841. doi:10.1148/radiol.13131669
29. Expert Panel on Pediatric Imaging; Hayes LL, Palasis S, Bartel TB, et al. ACR appropriateness criteria headache-child. J Am Coll Radiol 2018; 15(5S):S78–S90. doi:10.1016/j.jacr.2018.03.017
The utility of FDG-PET is founded on the uptake of 18F-fluorodeoxyglucose by cells, with higher uptake taking place in cells with higher metabolic activity (such as in areas of inflammation). Similarly, leukocyte scintigraphy relies on the use of radiolabeled leukocytes (ie, leukocytes previously extracted from the patient, labelled, and re-introduced into the patient) to allow for localization of inflamed tissue.
The most significant contribution of FDG-PET may be the ability to detect infective endocarditis early, when echocardiography is initially negative. When abnormal FDG uptake was included in the modified Duke criteria, it increased the sensitivity to 97% for detecting infective endocarditis on admission, leading some to propose its incorporation as a major criterion.17 In patients with prosthetic valves and suspected infective endocarditis, FDG-PET was found in one study to have a sensitivity of up to 91% and a specificity of up to 95%.18
Both FDG-PET and leukocyte scintigraphy have a high sensitivity, specificity, and negative predictive value for cardiac implanted electronic device infection, and should be strongly considered in patients in whom it is suspected but who have negative or inconclusive findings on echocardiography.14,15
In addition, a common conundrum faced by clinicians with use of echocardiography is the difficulty of differentiating thrombus from infected vegetation on valves or device lead wires. Some evidence indicates that FDG-PET may help to discriminate between vegetation and thrombus, although more rigorous studies are needed before its use for that purpose can be recommended.19
Limitations of nuclear studies
Both FDG-PET and leukocyte scintigraphy perform poorly for detecting native-valve infective endocarditis. In a study in which 90% of the patients had native-valve infective endocarditis according to the Duke criteria, FDG-PET had a specificity of 93% but a sensitivity of only 39%.20
Both studies can be cumbersome, laborious, and time-consuming for patients. FDG-PET requires a fasting or glucose-restricted diet before testing, and the test itself can be complicated by development of hyperglycemia, although this is rare.
While FDG-PET is most effective in detecting infections of prosthetic valves and cardiac implanted electronic devices, the results can be falsely positive in patients with a history of recent cardiac surgery (due to ongoing tissue healing), as well as maladies other than infective endocarditis that lead to inflammation, such as vasculitis or malignancy. Similarly, for unclear reasons, leukocyte scintigraphy can yield false-negative results in patients with enterococcal or candidal infective endocarditis.21
FDG-PET and leukocyte scintigraphy are more expensive than TEE and cardiac CT22 and are not widely available.
Both tests entail radiation exposure, with the average dose ranging from 7 to 14 mSv. However, this is less than the average amount acquired during percutaneous coronary intervention (16 mSv), and overlaps with the amount in chest CT with contrast when assessing for pulmonary embolism (7 to 9 mSv). Lower doses are possible with optimized protocols.12,13,15,23
Bottom line for nuclear studies
FDG-PET and leukocyte scintigraphy are especially useful for patients with a prosthetic valve or cardiac implanted electronic device. However, limitations must be kept in mind.
A suggested algorithm for testing with nuclear imaging is shown in Figure 2.1,4
CEREBRAL MAGNETIC RESONANCE IMAGING
Cerebral magnetic resonance imaging (MRI) is more sensitive than cerebral CT for detecting emboli in the brain. According to American Heart Association guidelines, cerebral MRI should be done in patients with known or suspected infective endocarditis and neurologic impairment, defined as headaches, meningeal symptoms, or neurologic deficits. It is also often used in neurologically asymptomatic patients with infective endocarditis who have indications for valve surgery to assess for mycotic aneurysms, which are associated with increased intracranial bleeding during surgery.
MRI use in other asymptomatic patients remains controversial.24 In cases with high clinical suspicion for infective endocarditis and no findings on echocardiography, cerebral MRI can increase the sensitivity of the Duke criteria by adding a minor criterion. Some have argued that, in patients with definite infective endocarditis, detecting silent cerebral complications can lead to management changes. However, more studies are needed to determine if there is indeed a group of neurologically asymptomatic infective endocarditis patients for whom cerebral MRI leads to improved outcomes.
Limitations of cerebral MRI
Cerebral MRI cannot be used in patients with non-MRI-compatible implanted hardware.
Gadolinium, the contrast agent typically used, can cause nephrogenic systemic fibrosis in patients who have poor renal function. This rare but serious adverse effect is characterized by irreversible systemic fibrosis affecting skin, muscles, and even visceral tissue such as lungs. The American College of Radiology allows for gadolinium use in patients without acute kidney injury and patients with stable chronic kidney disease with a glomerular filtration rate of at least 30 mL/min/1.73 m2. Its use should be avoided in patients with renal failure on replacement therapy, with advanced chronic kidney disease (glomerular filtration rate < 30 mL/min/1.73 m2), or with acute kidney injury, even if they do not need renal replacement therapy.25
Concerns have also been raised about gadolinium retention in the brain, even in patients with normal renal function.26–28 Thus far, no conclusive clinical adverse effects of retention have been found, although more study is warranted. Nevertheless, the US Food and Drug Administration now requires a black-box warning about this possibility and advises clinicians to counsel patients appropriately.
Bottom line on cerebral MRI
Cerebral MRI should be obtained when a patient presents with definite or possible infective endocarditis with neurologic impairment, such as new headaches, meningismus, or focal neurologic deficits. Routine brain MRI in patients with confirmed infective endocarditis without neurologic symptoms, or those without definite infective endocarditis, is discouraged.
CARDIAC MRI
Cardiac MRI, typically obtained with gadolinium contrast, allows for better 3D assessment of cardiac structures and morphology than echocardiography or CT, and can detect infiltrative cardiac disease, myopericarditis, and much more. It is increasingly used in the field of structural cardiology, but its role for evaluating infective endocarditis remains unclear.
Cardiac MRI does not appear to be better than echocardiography for diagnosing infective endocarditis. However, it may prove helpful in the evaluation of patients known to have infective endocarditis but who cannot be properly evaluated for disease extent because of poor image quality on echocardiography and contraindications to CT.1,29 Its role is limited in patients with cardiac implanted electronic devices, as most devices are incompatible with MRI use, although newer devices obviate this concern. But even for devices that are MRI-compatible, results are diminished due to an eclipsing effect, wherein the device parts can make it hard to see structures clearly because the “brightness” basically eclipses the surrounding area.4
Concerns regarding use of gadolinium as described above need also be considered.
The role of cardiac MRI in diagnosing and managing infective endocarditis may evolve, but at present, the 2017 American College of Cardiology and American Heart Association appropriate-use criteria discourage its use for these purposes.16
Bottom line for cardiac MRI
Cardiac MRI to evaluate a patient for suspected infective endocarditis is not recommended due to lack of superiority compared with echocardiography or CT, and the risk of nephrogenic systemic fibrosis from gadolinium in patients with renal compromise.
Prompt diagnois of infective endocarditis is critical. Potential consequences of missed or delayed diagnosis, including heart failure, stroke, intracardiac abscess, conduction delays, prosthesis dysfunction, and cerebral emboli, are often catastrophic. Echocardiography is the test used most frequently to evaluate for infective endocarditis, but it misses the diagnosis in almost one-third of cases, and even more often if the patient has a prosthetic valve.
But now, several sophisticated imaging tests are available that complement echocardiography in diagnosing and assessing infective endocarditis; these include 4-dimensional computed tomography (4D CT), fluorodeoxyglucose positron emission tomography (FDG-PET), and leukocyte scintigraphy. These tests have greatly improved our ability not only to diagnose infective endocarditis, but also to determine the extent and spread of infection, and they aid in perioperative assessment. Abnormal findings on these tests have been incorporated into the European Society of Cardiology’s 2015 modified diagnostic criteria for infective endocarditis.1
This article details the indications, advantages, and limitations of the various imaging tests for diagnosing and evaluating infective endocarditis (Table 1).
INFECTIVE ENDOCARDITIS IS DIFFICULT TO DIAGNOSE AND TREAT
Infective endocarditis is difficult to diagnose and treat. Clinical and imaging clues can be subtle, and the diagnosis requires a high level of suspicion and visualization of cardiac structures.
Further, the incidence of infective endocarditis is on the rise in the United States, particularly in women and young adults, likely due to intravenous drug use.2,3
ECHOCARDIOGRAPHY HAS AN IMPORTANT ROLE, BUT IS LIMITED
Echocardiography remains the most commonly performed study for diagnosing infective endocarditis, as it is fast, widely accessible, and less expensive than other imaging tests.
Transthoracic echocardiography (TTE) is often the first choice for testing. However, its sensitivity is only about 70% for detecting vegetations on native valves and 50% for detecting vegetations on prosthetic valves.1 It is inherently constrained by the limited number of views by which a comprehensive external evaluation of the heart can be achieved. Using a 2-dimensional instrument to view a 3-dimensional object is difficult, and depending on several factors, it can be hard to see vegetations and abscesses that are associated with infective endocarditis. Further, TTE is impeded by obesity and by hyperinflated lungs from obstructive pulmonary disease or mechanical ventilation. It has poor sensitivity for detecting small vegetations and for detecting vegetations and paravalvular complications in patients who have a prosthetic valve or a cardiac implanted electronic device.
Transesophageal echocardiography (TEE) is the recommended first-line imaging test for patients with prosthetic valves and no contraindications to the test. Otherwise, it should be done after TTE if the results of TTE are negative but clinical suspicion for infective endocarditis remains high (eg, because the patient uses intravenous drugs). Although TEE has higher sensitivity than TTE (up to 96% for vegetations on native valves and 92% for those on prosthetic valves when performed by an experienced sonographer), it can still miss infective endocarditis. Also, TEE does not provide a significant advantage over TTE in patients who have a cardiac implanted electronic device.1,4,5
Whether TTE or TEE is used, echocardiography is estimated to miss up to 30% of cases of infective endocarditis and its sequelae.4 False-negative findings are likelier in patients who have preexisting severe valvular lesions, prosthetic valves, cardiac implanted electronic devices, small vegetations, or abscesses, or if a vegetation has already broken free and embolized. Furthermore, distinguishing vegetations from thrombi, cardiac tumors, and myxomatous changes on echocardiography is difficult.
CARDIAC CT
For patients who have inconclusive results on echocardiography, contraindications to TEE, or poor sonic windows, cardiac CT can be an excellent alternative. It is especially useful in the setting of a prosthetic valve.
Synchronized (“gated”) with the patient’s heart rate and rhythm, CT machines can acquire images during diastole, reducing motion artifact, and can create 3D images of the heart. In addition, newer machines can acquire several images at different points in the cardiac cycle to add a fourth dimension: time. The resulting 4D images play like short video loops of the beating heart and allow noninvasive assessment of cardiac anatomy with remarkable detail and resolution.
4D CT is increasingly being used in infective endocarditis, and growing evidence indicates that its accuracy is similar to that of TEE in the preoperative evaluation of patients with aortic prosthetic valve endocarditis.6 In a study of 28 patients, complementary use of CT angiography led to a change in treatment strategy in 7 (25%) compared with routine clinical workup.7 Several studies have found no difference between 4D CT and preoperative TEE in detecting pseudoaneurysm, abscess, or valve dehiscence. TEE and 4D CT also have similar sensitivities for detecting infective endocarditis in native and prosthetic valves.8,9
Coupled with CT angiography, 4D CT is also an excellent noninvasive way to perioperatively evaluate the coronary arteries without the risks associated with catheterization in those requiring nonemergency surgery (Figure 1A, B, and C).
4D CT performs well for detecting abscess and pseudoaneurysm but has slightly lower sensitivity for vegetations than TEE (91% vs 99%).9
Gated CT, PET, or both may be useful in cases of suspected prosthetic aortic valve endocarditis when TEE is negative. Pseudoaneurysms are not well visualized with TEE, and in aortic prosthetic valve infective endocarditis without definite abscess, the aortomitral curtain area often appears merely thickened on TEE; gated CT and PET show this area better.8 This information is important when a surgeon may be unconvinced that the patient has prosthetic valve endocarditis.
Limitations of 4D cardiac CT
4D CT with or without angiography has limitations. It requires a wide-volume scanner and an experienced reader.
Patients with irregular heart rhythms or uncontrolled tachycardia pose technical problems for image acquisition. Cardiac CT is typically gated (ie, images are obtained within a defined time period) to acquire images during diastole. Ideally, images are acquired when the heart is in mid to late diastole, a time of minimal cardiac motion, so that motion artifact is minimized. To estimate the timing of image acquisition, the cardiac cycle must be predictable, and its duration should be as long as possible. Tachycardia or irregular rhythms such as frequent ectopic beats or atrial fibrillation make acquisition timing difficult, and thus make it nearly impossible to accurately obtain images when the heart is at minimum motion, limiting assessment of cardiac structures or the coronary tree.4,10
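To illustrate why a predictable rhythm matters, the acquisition window is typically placed at a fixed fraction of the R-R interval. The sketch below is purely illustrative (the fractions are not a vendor protocol):

```python
def diastolic_window_ms(heart_rate_bpm, start_frac=0.65, end_frac=0.80):
    """Place an acquisition window in mid-to-late diastole as a fixed
    fraction of the R-R interval. Fractions are illustrative only."""
    rr_ms = 60_000 / heart_rate_bpm  # R-R interval in milliseconds
    return rr_ms * start_frac, rr_ms * end_frac

# At 60 bpm the R-R interval is 1,000 ms, leaving a window of roughly
# 150 ms; at 100 bpm the window shrinks to roughly 90 ms. With an
# irregular rhythm the length of the next R-R interval is unknown in
# advance, so the window cannot be placed reliably at all.
window_at_60 = diastolic_window_ms(60)
window_at_100 = diastolic_window_ms(100)
```

The point of the sketch is that faster rates shrink the usable window, and irregular rhythms make its timing unpredictable, which is exactly why tachycardia and atrial fibrillation degrade gated acquisitions.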
Extensive coronary calcification can hinder assessment of the coronary tree by CT coronary angiography.
Contrast exposure may limit the use of CT in some patients (eg, those with contrast allergies or renal dysfunction). However, modern scanners allow for much smaller contrast boluses without decreasing sensitivity.
4D CT involves radiation exposure, especially when done with angiography, although modern scanners have greatly reduced exposure. The average radiation dose in CT coronary angiography is 2.9 to 5.9 mSv11 compared with 7 mSv in diagnostic cardiac catheterization (without angioplasty or stenting) or 16 mSv in routine CT of the abdomen and pelvis with contrast.12,13 In view of the morbidity and mortality risks associated with infective endocarditis, especially if the diagnosis is delayed, this small radiation exposure may be justifiable.
Bottom line for cardiac CT
4D CT is an excellent alternative to echocardiography for select patients. Clinicians should strongly consider this study in the following situations:
- Patients with a prosthetic valve
- Patients who are strongly suspected of having infective endocarditis but who have a poor sonic window on TTE or TEE, as can occur with chronic obstructive lung disease, morbid obesity, or previous thoracic or cardiovascular surgery
- Patients who meet clinical indications for TEE, such as having a prosthetic valve or a high suspicion for native valve infective endocarditis with negative TTE, but who have contraindications to TEE
- As an alternative to TEE for preoperative evaluation in patients with known infective endocarditis
Patients with tachycardia or irregular heart rhythms are not good candidates for this test.
FDG-PET AND LEUKOCYTE SCINTIGRAPHY
FDG-PET and leukocyte scintigraphy are other options for diagnosing infective endocarditis and determining the presence and extent of intra- and extracardiac infection. They are more sensitive than echocardiography for detecting infection of cardiac implanted electronic devices such as ventricular assist devices, pacemakers, implanted cardiac defibrillators, and cardiac resynchronization therapy devices.14–16
FDG-PET relies on uptake of 18F-fluorodeoxyglucose by cells, with higher uptake in cells with higher metabolic activity, such as in areas of inflammation. Similarly, leukocyte scintigraphy localizes inflamed tissue using radiolabeled leukocytes, ie, leukocytes extracted from the patient, labeled, and reinfused.
The most significant contribution of FDG-PET may be the ability to detect infective endocarditis early, when echocardiography is initially negative. When abnormal FDG uptake was included in the modified Duke criteria, it increased the sensitivity to 97% for detecting infective endocarditis on admission, leading some to propose its incorporation as a major criterion.17 In patients with prosthetic valves and suspected infective endocarditis, FDG-PET was found in one study to have a sensitivity of up to 91% and a specificity of up to 95%.18
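For readers less familiar with these test-performance figures, sensitivity and specificity are simple proportions. The counts below are hypothetical, chosen only to mirror the magnitudes quoted above; they are not the study's data:

```python
def sensitivity(tp, fn):
    """Proportion of truly infected patients the test detects."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Proportion of uninfected patients the test correctly clears."""
    return tn / (tn + fp)

# Hypothetical cohort: 100 infected and 100 uninfected prosthetic-valve
# patients (illustrative counts only).
sens = sensitivity(tp=91, fn=9)   # 0.91, ie, 91% of infected detected
spec = specificity(tn=95, fp=5)   # 0.95, ie, 95% of uninfected cleared
```

A highly sensitive test such as this is most useful for ruling disease out when negative, which is why these modalities are attractive when echocardiography is equivocal.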
Both FDG-PET and leukocyte scintigraphy have a high sensitivity, specificity, and negative predictive value for cardiac implanted electronic device infection, and should be strongly considered in patients in whom it is suspected but who have negative or inconclusive findings on echocardiography.14,15
In addition, differentiating thrombus from infected vegetation on valves or device lead wires is a common challenge with echocardiography. Some evidence indicates that FDG-PET may help discriminate between vegetation and thrombus, although more rigorous studies are needed before it can be recommended for that purpose.19
Limitations of nuclear studies
Both FDG-PET and leukocyte scintigraphy perform poorly for detecting native-valve infective endocarditis. In a study in which 90% of the patients had native-valve infective endocarditis according to the Duke criteria, FDG-PET had a specificity of 93% but a sensitivity of only 39%.20
Both studies can be cumbersome and time-consuming for patients. FDG-PET requires a fasting or glucose-restricted diet before testing, and the test itself can be complicated by development of hyperglycemia, although this is rare.
Although FDG-PET is most effective in detecting infections of prosthetic valves and cardiac implanted electronic devices, results can be falsely positive in patients with recent cardiac surgery (due to ongoing tissue healing) and in inflammatory conditions other than infective endocarditis, such as vasculitis or malignancy. Similarly, for unclear reasons, leukocyte scintigraphy can yield false-negative results in patients with enterococcal or candidal infective endocarditis.21
FDG-PET and leukocyte scintigraphy are more expensive than TEE and cardiac CT22 and are not widely available.
Both tests entail radiation exposure, with the average dose ranging from 7 to 14 mSv. However, this is less than the average amount acquired during percutaneous coronary intervention (16 mSv), and overlaps with the amount in chest CT with contrast when assessing for pulmonary embolism (7 to 9 mSv). Lower doses are possible with optimized protocols.12,13,15,23
Bottom line for nuclear studies
FDG-PET and leukocyte scintigraphy are especially useful for patients with a prosthetic valve or cardiac implanted electronic device. However, limitations must be kept in mind.
A suggested algorithm for testing with nuclear imaging is shown in Figure 2.1,4
CEREBRAL MAGNETIC RESONANCE IMAGING
Cerebral magnetic resonance imaging (MRI) is more sensitive than cerebral CT for detecting emboli in the brain. According to American Heart Association guidelines, cerebral MRI should be done in patients with known or suspected infective endocarditis and neurologic impairment, defined as headaches, meningeal symptoms, or neurologic deficits. It is also often used in neurologically asymptomatic patients with infective endocarditis who have indications for valve surgery to assess for mycotic aneurysms, which are associated with increased intracranial bleeding during surgery.
MRI use in other asymptomatic patients remains controversial.24 In cases with high clinical suspicion for infective endocarditis and no findings on echocardiography, cerebral MRI can increase the sensitivity of the Duke criteria by adding a minor criterion. Some have argued that, in patients with definite infective endocarditis, detecting silent cerebral complications can lead to management changes. However, more studies are needed to determine if there is indeed a group of neurologically asymptomatic infective endocarditis patients for whom cerebral MRI leads to improved outcomes.
Limitations of cerebral MRI
Cerebral MRI cannot be used in patients with non-MRI-compatible implanted hardware.
Gadolinium, the contrast agent typically used, can cause nephrogenic systemic fibrosis in patients who have poor renal function. This rare but serious adverse effect is characterized by irreversible systemic fibrosis affecting the skin, muscles, and even visceral tissue such as the lungs. The American College of Radiology allows gadolinium use in patients without acute kidney injury and in patients with stable chronic kidney disease with a glomerular filtration rate of at least 30 mL/min/1.73 m2. Its use should be avoided in patients with renal failure on replacement therapy, with advanced chronic kidney disease (glomerular filtration rate < 30 mL/min/1.73 m2), or with acute kidney injury, even if they do not need renal replacement therapy.25
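These ACR thresholds amount to a simple decision rule. The function below is only a sketch of that logic as stated above, for illustration; it is not clinical software:

```python
def gadolinium_permissible(gfr_ml_min, acute_kidney_injury=False,
                           on_renal_replacement=False):
    """Sketch of the ACR rule described above (illustration only):
    avoid gadolinium with acute kidney injury, with renal replacement
    therapy, or when the glomerular filtration rate (in
    mL/min/1.73 m2) is below 30; otherwise it is permitted."""
    if acute_kidney_injury or on_renal_replacement:
        return False
    return gfr_ml_min >= 30
```

For example, stable chronic kidney disease with a glomerular filtration rate of 45 passes the rule, whereas a rate of 25, or acute kidney injury at any rate, does not.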
Concerns have also been raised about gadolinium retention in the brain, even in patients with normal renal function.26–28 Thus far, no conclusive clinical adverse effects of retention have been found, although more study is warranted. Nevertheless, the US Food and Drug Administration now requires a black-box warning about this possibility and advises clinicians to counsel patients appropriately.
Bottom line on cerebral MRI
Cerebral MRI should be obtained when a patient presents with definite or possible infective endocarditis with neurologic impairment, such as new headaches, meningismus, or focal neurologic deficits. Routine brain MRI in patients with confirmed infective endocarditis without neurologic symptoms, or those without definite infective endocarditis, is discouraged.
CARDIAC MRI
Cardiac MRI, typically performed with gadolinium contrast, allows better 3D assessment of cardiac structures and morphology than echocardiography or CT and can detect infiltrative cardiac disease, myopericarditis, and other conditions. It is increasingly used in structural cardiology, but its role in evaluating infective endocarditis remains unclear.
Cardiac MRI does not appear to be better than echocardiography for diagnosing infective endocarditis. However, it may prove helpful in evaluating patients known to have infective endocarditis who cannot be adequately assessed for disease extent because of poor image quality on echocardiography and contraindications to CT.1,29 Its role is limited in patients with cardiac implanted electronic devices, as most devices are incompatible with MRI, although newer MRI-conditional devices obviate this concern. Even with MRI-compatible devices, however, image quality suffers from an eclipsing effect: the device components produce bright artifact that obscures the surrounding structures.4
The concerns regarding gadolinium described above must also be considered.
The role of cardiac MRI in diagnosing and managing infective endocarditis may evolve, but at present, the 2017 American College of Cardiology and American Heart Association appropriate-use criteria discourage its use for these purposes.16
Bottom line for cardiac MRI
Cardiac MRI to evaluate a patient for suspected infective endocarditis is not recommended due to lack of superiority compared with echocardiography or CT, and the risk of nephrogenic systemic fibrosis from gadolinium in patients with renal compromise.
- Habib G, Lancellotti P, Antunes MJ, et al; ESC Scientific Document Group. 2015 ESC guidelines for the management of infective endocarditis: the Task Force for the Management of Infective Endocarditis of the European Society of Cardiology (ESC). Endorsed by: European Association for Cardio-Thoracic Surgery (EACTS), the European Association of Nuclear Medicine (EANM). Eur Heart J 2015; 36(44):3075–3128. doi:10.1093/eurheartj/ehv319
- Durante-Mangoni E, Bradley S, Selton-Suty C, et al; International Collaboration on Endocarditis Prospective Cohort Study Group. Current features of infective endocarditis in elderly patients: results of the International Collaboration on Endocarditis Prospective Cohort Study. Arch Intern Med 2008; 168(19):2095–2103. doi:10.1001/archinte.168.19.2095
- Wurcel AG, Anderson JE, Chui KK, et al. Increasing infectious endocarditis admissions among young people who inject drugs. Open Forum Infect Dis 2016; 3(3):ofw157. doi:10.1093/ofid/ofw157
- Gomes A, Glaudemans AW, Touw DJ, et al. Diagnostic value of imaging in infective endocarditis: a systematic review. Lancet Infect Dis 2017; 17(1):e1–e14. doi:10.1016/S1473-3099(16)30141-4
- Cahill TJ, Baddour LM, Habib G, et al. Challenges in infective endocarditis. J Am Coll Cardiol 2017; 69(3):325–344. doi:10.1016/j.jacc.2016.10.066
- Fagman E, Perrotta S, Bech-Hanssen O, et al. ECG-gated computed tomography: a new role for patients with suspected aortic prosthetic valve endocarditis. Eur Radiol 2012; 22(11):2407–2414. doi:10.1007/s00330-012-2491-5
- Habets J, Tanis W, van Herwerden LA, et al. Cardiac computed tomography angiography results in diagnostic and therapeutic change in prosthetic heart valve endocarditis. Int J Cardiovasc Imaging 2014; 30(2):377–387. doi:10.1007/s10554-013-0335-2
- Koneru S, Huang SS, Oldan J, et al. Role of preoperative cardiac CT in the evaluation of infective endocarditis: comparison with transesophageal echocardiography and surgical findings. Cardiovasc Diagn Ther 2018; 8(4):439–449. doi:10.21037/cdt.2018.07.07
- Koo HJ, Yang DH, Kang J, et al. Demonstration of infective endocarditis by cardiac CT and transoesophageal echocardiography: comparison with intra-operative findings. Eur Heart J Cardiovasc Imaging 2018; 19(2):199–207. doi:10.1093/ehjci/jex010
- Feuchtner GM, Stolzmann P, Dichtl W, et al. Multislice computed tomography in infective endocarditis: comparison with transesophageal echocardiography and intraoperative findings. J Am Coll Cardiol 2009; 53(5):436–444. doi:10.1016/j.jacc.2008.01.077
- Habib G, Lancellotti P, Antunes MJ, et al; ESC Scientific Document Group. 2015 ESC guidelines for the management of infective endocarditis: the Task Force for the Management of Infective Endocarditis of the European Society of Cardiology (ESC). Endorsed by: European Association for Cardio-Thoracic Surgery (EACTS), the European Association of Nuclear Medicine (EANM). Eur Heart J 2015; 36(44):3075–3128. doi:10.1093/eurheartj/ehv319
- Durante-Mangoni E, Bradley S, Selton-Suty C, et al; International Collaboration on Endocarditis Prospective Cohort Study Group. Current features of infective endocarditis in elderly patients: results of the International Collaboration on Endocarditis Prospective Cohort Study. Arch Intern Med 2008; 168(19):2095–2103. doi:10.1001/archinte.168.19.2095
- Wurcel AG, Anderson JE, Chui KK, et al. Increasing infectious endocarditis admissions among young people who inject drugs. Open Forum Infect Dis 2016; 3(3):ofw157. doi:10.1093/ofid/ofw157
- Gomes A, Glaudemans AW, Touw DJ, et al. Diagnostic value of imaging in infective endocarditis: a systematic review. Lancet Infect Dis 2017; 17(1):e1–e14. doi:10.1016/S1473-3099(16)30141-4
- Cahill TJ, Baddour LM, Habib G, et al. Challenges in infective endocarditis. J Am Coll Cardiol 2017; 69(3):325–344. doi:10.1016/j.jacc.2016.10.066
- Fagman E, Perrotta S, Bech-Hanssen O, et al. ECG-gated computed tomography: a new role for patients with suspected aortic prosthetic valve endocarditis. Eur Radiol 2012; 22(11):2407–2414. doi:10.1007/s00330-012-2491-5
- Habets J, Tanis W, van Herwerden LA, et al. Cardiac computed tomography angiography results in diagnostic and therapeutic change in prosthetic heart valve endocarditis. Int J Cardiovasc Imaging 2014; 30(2):377–387. doi:10.1007/s10554-013-0335-2
- Koneru S, Huang SS, Oldan J, et al. Role of preoperative cardiac CT in the evaluation of infective endocarditis: comparison with transesophageal echocardiography and surgical findings. Cardiovasc Diagn Ther 2018; 8(4):439–449. doi:10.21037/cdt.2018.07.07
- Koo HJ, Yang DH, Kang J, et al. Demonstration of infective endocarditis by cardiac CT and transoesophageal echocardiography: comparison with intra-operative findings. Eur Heart J Cardiovasc Imaging 2018; 19(2):199–207. doi:10.1093/ehjci/jex010
- Feuchtner GM, Stolzmann P, Dichtl W, et al. Multislice computed tomography in infective endocarditis: comparison with transesophageal echocardiography and intraoperative findings. J Am Coll Cardiol 2009; 53(5):436–444. doi:10.1016/j.jacc.2008.01.077
- Castellano IA, Nicol ED, Bull RK, Roobottom CA, Williams MC, Harden SP. A prospective national survey of coronary CT angiography radiation doses in the United Kingdom. J Cardiovasc Comput Tomogr 2017; 11(4):268–273. doi:10.1016/j.jcct.2017.05.002
- Mettler FA Jr, Huda W, Yoshizumi TT, Mahesh M. Effective doses in radiology and diagnostic nuclear medicine: a catalog. Radiology 2008; 248(1):254–263. doi:10.1148/radiol.2481071451
- Smith-Bindman R, Lipson J, Marcus R, et al. Radiation dose associated with common computed tomography examinations and the associated lifetime attributable risk of cancer. Arch Intern Med 2009; 169(22):2078–2086. doi:10.1001/archinternmed.2009.427
- Ploux S, Riviere A, Amraoui S, et al. Positron emission tomography in patients with suspected pacing system infections may play a critical role in difficult cases. Heart Rhythm 2011; 8(9):1478–1481. doi:10.1016/j.hrthm.2011.03.062
- Sarrazin J, Philippon F, Tessier M, et al. Usefulness of fluorine-18 positron emission tomography/computed tomography for identification of cardiovascular implantable electronic device infections. J Am Coll Cardiol 2012; 59(18):1616–1625. doi:10.1016/j.jacc.2011.11.059
- Doherty JU, Kort S, Mehran R, Schoenhagen P, Soman P; Rating Panel Members; Appropriate Use Criteria Task Force. ACC/AATS/AHA/ASE/ASNC/HRS/SCAI/SCCT/SCMR/STS 2017 Appropriate use criteria for multimodality imaging in valvular heart disease: a report of the American College of Cardiology Appropriate Use Criteria Task Force, American Association for Thoracic Surgery, American Heart Association, American Society of Echocardiography, American Society of Nuclear Cardiology, Heart Rhythm Society, Society for Cardiovascular Angiography and Interventions, Society of Cardiovascular Computed Tomography, Society for Cardiovascular Magnetic Resonance, and Society of Thoracic Surgeons. J Nucl Cardiol 2017; 24(6):2043–2063. doi:10.1007/s12350-017-1070-1
- Saby L, Laas O, Habib G, et al. Positron emission tomography/computed tomography for diagnosis of prosthetic valve endocarditis: increased valvular 18F-fluorodeoxyglucose uptake as a novel major criterion. J Am Coll Cardiol 2013; 61(23):2374–2382. doi:10.1016/j.jacc.2013.01.092
- Swart LE, Gomes A, Scholtens AM, et al. Improving the diagnostic performance of 18F-fluorodeoxyglucose positron-emission tomography/computed tomography in prosthetic heart valve endocarditis. Circulation 2018; 138(14):1412–1427. doi:10.1161/CIRCULATIONAHA.118.035032
- Graziosi M, Nanni C, Lorenzini M, et al. Role of 18F-FDG PET/CT in the diagnosis of infective endocarditis in patients with an implanted cardiac device: a prospective study. Eur J Nucl Med Mol Imaging 2014; 41(8):1617–1623. doi:10.1007/s00259-014-2773-z
- Kouijzer IJ, Vos FJ, Janssen MJ, van Dijk AP, Oyen WJ, Bleeker-Rovers CP. The value of 18F-FDG PET/CT in diagnosing infectious endocarditis. Eur J Nucl Med Mol Imaging 2013; 40(7):1102–1107. doi:10.1007/s00259-013-2376-0
- Wong D, Rubinshtein R, Keynan Y. Alternative cardiac imaging modalities to echocardiography for the diagnosis of infective endocarditis. Am J Cardiol 2016; 118(9):1410–1418. doi:10.1016/j.amjcard.2016.07.053
- Vos FJ, Bleeker-Rovers CP, Kullberg BJ, Adang EM, Oyen WJ. Cost-effectiveness of routine (18)F-FDG PET/CT in high-risk patients with gram-positive bacteremia. J Nucl Med 2011; 52(11):1673–1678. doi:10.2967/jnumed.111.089714
- McCollough CH, Bushberg JT, Fletcher JG, Eckel LJ. Answers to common questions about the use and safety of CT scans. Mayo Clin Proc 2015; 90(10):1380–1392. doi:10.1016/j.mayocp.2015.07.011
- Duval X, Iung B, Klein I, et al; IMAGE (Resonance Magnetic Imaging at the Acute Phase of Endocarditis) Study Group. Effect of early cerebral magnetic resonance imaging on clinical decisions in infective endocarditis: a prospective study. Ann Intern Med 2010; 152(8):497–504, W175. doi:10.7326/0003-4819-152-8-201004200-00006
- ACR Committee on Drugs and Contrast Media. ACR Manual on Contrast Media: 2018. www.acr.org/-/media/ACR/Files/Clinical-Resources/Contrast_Media.pdf. Accessed July 19, 2019.
- Kanda T, Fukusato T, Matsuda M, et al. Gadolinium-based contrast agent accumulates in the brain even in subjects without severe renal dysfunction: evaluation of autopsy brain specimens with inductively coupled plasma mass spectroscopy. Radiology 2015; 276(1):228–232. doi:10.1148/radiol.2015142690
- McDonald RJ, McDonald JS, Kallmes DF, et al. Intracranial gadolinium deposition after contrast-enhanced MR imaging. Radiology 2015; 275(3):772–782. doi:10.1148/radiol.15150025
- Kanda T, Ishii K, Kawaguchi H, Kitajima K, Takenaka D. High signal intensity in the dentate nucleus and globus pallidus on unenhanced T1-weighted MR images: relationship with increasing cumulative dose of a gadolinium-based contrast material. Radiology 2014; 270(3):834–841. doi:10.1148/radiol.13131669
- Expert Panel on Pediatric Imaging; Hayes LL, Palasis S, Bartel TB, et al. ACR appropriateness criteria headache-child. J Am Coll Radiol 2018; 15(5S):S78–S90. doi:10.1016/j.jacr.2018.03.017
KEY POINTS
- Echocardiography can produce false-negative results in native-valve infective endocarditis and is even less sensitive in patients with a prosthetic valve or cardiac implantable electronic device.
- 4D CT is a reasonable alternative to transesophageal echocardiography. It can also be used as a second test if echocardiography is inconclusive. Coupled with angiography, it also provides a noninvasive method to evaluate coronary arteries perioperatively.
- Nuclear imaging tests—FDG-PET and leukocyte scintigraphy—increase the sensitivity of the Duke criteria for diagnosing infective endocarditis. They should be considered for evaluating suspected infective endocarditis in all patients who have a prosthetic valve or cardiac implantable electronic device, and whenever echocardiography is inconclusive and clinical suspicion remains high.
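The selection logic in the key points above can be summarized as a simple decision sketch. This is purely illustrative—our own encoding of the key points, not a validated clinical algorithm—and the function and argument names are ours:

```python
def next_imaging_steps(echo_inconclusive, suspicion_high,
                       prosthetic_valve=False, cied=False):
    """Suggest complementary imaging after echocardiography for suspected
    infective endocarditis (illustrative sketch of the key points only)."""
    steps = []
    # Nuclear imaging is considered for any prosthetic valve or cardiac
    # implantable electronic device (CIED)...
    if prosthetic_valve or cied:
        steps.append("FDG-PET or leukocyte scintigraphy")
    # ...and whenever echocardiography is inconclusive while clinical
    # suspicion remains high; 4D CT is also a reasonable second test here.
    if echo_inconclusive and suspicion_high:
        steps.append("4D CT")
        if "FDG-PET or leukocyte scintigraphy" not in steps:
            steps.append("FDG-PET or leukocyte scintigraphy")
    return steps
```

In practice these decisions will be individualized by the consulting teams; the sketch only makes explicit which patient features trigger which complementary tests.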
Deciding when a picture is worth a thousand words and several thousand dollars
In a study from the University of Pennsylvania,2 Sedrak et al surveyed residents about their lab test ordering practices. Almost all responders recognized that they ordered “unnecessary tests.” The authors probed to understand why, and strikingly, the most common responses were the same ones that my resident peers and I would have given 4 decades ago: the culture of the system (“We don’t want to miss anything or be asked on rounds for data that hadn’t been checked”), the lack of transparency about the cost of tests, and the lack of role-modeling by teaching staff. There has been hope that the last of these would be resolved by the increased visibility of subspecialists in hospital medicine, well-versed in the nuances of system-based practice. And the Society of Hospital Medicine, along with the American College of Physicians and others, has pushed hard to promote choosing wisely when ordering diagnostic studies. But we have a way to go.
Lab tests represent a small fraction of healthcare costs. Imaging tests, especially advanced and complex imaging studies, comprise a far greater fraction of healthcare costs. And here is the challenge: developers of new imaging modalities are now able to design and refine specific tests that are good enough to become the gold standard for diagnosis and staging of specific diseases—great for clinical care, bad for cost savings. One need only review a few new guidelines or clinical research protocols to appreciate the successful integration of these tests into clinical practice. Some tests are supplanting the need for aggressive biopsies, angiography, or a series of alternative imaging tests. This is potentially good for patients, but many of these tests are strikingly expensive and are being adopted for use prior to full vetting of their utility and limitations in large clinical studies; the cost of the tests can be an impediment to conducting a series of clinical studies that include appropriate patient subsets. The increasingly proposed use of positron emission tomography in patients with suspected malignancy, inflammation, or infection is a great example of a useful test that we are still learning how best to interpret in several conditions.
In this issue of the Journal, two testing scenarios are discussed. Lacy et al address the question of when patients with pyelonephritis should receive imaging studies. There are data to guide this decision process, but as noted in the study by Sedrak et al,2 there are forces at work that challenge the clinician to bypass the rational guidelines—not the least of which are the desire for efficiency (don’t take the chance that the test may be required later and delay discharge from the hospital or observation area) and greater surety in the clinical diagnosis. Although fear of litigation was not high on Sedrak’s list of reasons for ordering more “unnecessary” tests, I posit that a decrease in the confidence placed on clinical diagnosis drives a significant amount of imaging, in conjunction with the desire for shorter hospital stays.
The second paper, by Mgbojikwe et al, relates to the issue of which advanced technology should be ordered, and when. They review the limitations of traditional (echocardiographic) diagnosis and staging of infective endocarditis, and discuss the strengths and limitations of several advanced imaging tools in the setting of suspected or known infectious endocarditis. I suspect that in most medical centers the decisions to utilize these tests will rest with the infectious disease, cardiology, and cardiothoracic surgery consultants. But it is worth being aware of how the diagnostic and staging strategies are evolving, and of the limitations to these studies.
We have come a long way from diagnosing bacterial endocarditis with a valve abscess on the basis of finding changing murmurs, a Roth spot, a palpable spleen tip, new conduction abnormalities on the ECG, and documented daily afternoon fevers. Performing that physical examination is cheap but not highly reproducible. The new testing algorithms are not cheap but, hopefully, will offer superior sensitivity and specificity. Used correctly—and we likely have a way to go to learn what that means—these pictures may well be worth the cost.
Although someone still has to suspect the diagnosis of endocarditis.
- Papanicolas I, Woskie LR, Jha AK. Health care spending in the United States and other high-income countries. JAMA 2018; 319(10):1024–1039. doi:10.1001/jama.2018.1150
- Sedrak MS, Patel MS, Ziemba JB, et al. Residents’ self-report on why they order perceived unnecessary inpatient laboratory tests. J Hosp Med 2016; 11(12):869–872. doi:10.1002/jhm.2645
When does acute pyelonephritis require imaging?
A previously healthy 44-year-old woman presents to the emergency department with 1 day of fever, flank pain, dysuria, and persistent nausea and vomiting. Her temperature is 38.7°C (101.7°F), heart rate 102 beats per minute, and blood pressure 120/70 mm Hg. She has costovertebral angle tenderness. Laboratory testing reveals mild leukocytosis and a normal serum creatinine level; urinalysis shows leukocytes, as well as leukocyte esterase and nitrites. She has no personal or family history of nephrolithiasis. Urine cultures are obtained, and she is started on intravenous antibiotics and intravenous hydration to treat pyelonephritis.
Is imaging indicated at this point? And if so, which study is recommended?
KEY FEATURES
Acute pyelonephritis, infection of the renal parenchyma and collecting system, most often results from an ascending infection of the lower urinary tract. It is estimated to account for 250,000 office visits and 200,000 hospital admissions each year in the United States.1
Lower urinary tract symptoms such as urinary frequency, urgency, and dysuria accompanied by fever, nausea, vomiting, and flank pain raise suspicion for acute pyelonephritis. Flank pain is a key, nearly universal feature of upper urinary tract infection in patients without diabetes, though it may be absent in up to 50% of patients with diabetes.2
Additional findings include costovertebral angle tenderness on physical examination and leukocytosis, pyuria, and bacteriuria on laboratory studies.
PREDICTING THE NEED FOR EARLY IMAGING
Though guidelines state that imaging is inappropriate in most patients with pyelonephritis,2–4 it is nevertheless often done for diagnosis or identification of complications, and its use has been reported in more than two-thirds of patients.2–4
Acute pyelonephritis is generally classified as complicated or uncomplicated, though different definitions exist with regard to these classifications. The American College of Radiology’s Appropriateness Criteria2 consider patients with diabetes, immune compromise, a history of urolithiasis, or anatomic abnormality to be at highest risk for complications, and therefore recommend early imaging to assess for hydronephrosis, pyonephrosis, emphysematous pyelonephritis, and intrinsic or perinephric abscess.2
A clinical rule for predicting the need for imaging in acute pyelonephritis was developed and validated in an emergency department population in the Netherlands.3 The study suggested that restricting early imaging to patients with a history of urolithiasis, a urine pH of 7.0 or higher, or renal insufficiency—defined as a glomerular filtration rate (GFR) of 40 mL/min/1.73 m2 or lower as estimated by the Modification of Diet in Renal Disease formula—would provide a negative predictive value of 94% to 100% for detection of an urgent urologic disorder (pyonephrosis, renal abscess, or urolithiasis). This high negative predictive value highlights that the absence of these risk factors can safely identify patients who do not need renal imaging.
The positive predictive value was less useful, as only 5% to 23% of patients who had at least 1 risk factor went on to have an urgent urologic disorder.3
Implementation of this prediction rule would have resulted in a relative reduction in imaging of 40% and an absolute reduction of 28%. Of note, use of reduced GFR in this prediction rule is not clearly validated for patients with chronic kidney disease, as the previous GFR for most patients in this study was unknown.3
Based on these data, initial imaging is recommended in patients with diabetes, immune compromise, a history of urolithiasis, anatomic abnormality, a urine pH of 7.0 or higher, or a GFR of 40 mL/min/1.73 m2 or lower in a patient with no history of significant renal dysfunction. Early imaging would also be reasonable in patients with a complex clinical presentation, early recurrence of symptoms after treatment, clinical decompensation, or critical illness.
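The early-imaging triggers described above—the ACR high-risk features plus the Dutch prediction rule thresholds—can be sketched as a single check. This is an illustrative encoding only, with function and argument names of our own choosing, not a validated tool:

```python
def needs_early_imaging(diabetes=False, immunocompromised=False,
                        urolithiasis_history=False, anatomic_abnormality=False,
                        urine_ph=None, gfr=None, known_renal_dysfunction=False):
    """Return True if any early-imaging trigger for acute pyelonephritis
    is present (illustrative sketch of the criteria discussed above)."""
    # ACR Appropriateness Criteria: highest-risk features warrant early imaging.
    if diabetes or immunocompromised or urolithiasis_history or anatomic_abnormality:
        return True
    # Dutch prediction rule: urine pH of 7.0 or higher.
    if urine_ph is not None and urine_ph >= 7.0:
        return True
    # Dutch prediction rule: GFR of 40 mL/min/1.73 m2 or lower, counted only
    # when there is no known prior significant renal dysfunction.
    if gfr is not None and gfr <= 40 and not known_renal_dysfunction:
        return True
    return False
```

For the patient in the opening scenario—no risk factors, normal renal function—every trigger is absent, so the rule would defer imaging.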
TREATMENT FAILURE
In a retrospective review of 62 patients hospitalized for acute renal infection, Soulen et al5 found that the most reliable indicator of complicated acute pyelonephritis was the persistence of fever and leukocytosis at 72 hours. And another small prospective study of patients with uncomplicated pyelonephritis reported a time to defervescence of no more than 4 days.6
In accordance with the Appropriateness Criteria2 and based on the best available evidence, imaging is recommended in all patients who remain febrile or have persistent leukocytosis after 72 hours of antibiotic therapy. In such cases, there should be high suspicion for a complication requiring treatment.
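The 72-hour reassessment above reduces to one condition, sketched here for illustration (names are ours, not from the cited studies):

```python
def imaging_for_treatment_failure(hours_on_antibiotics, febrile, leukocytosis):
    """True when fever or leukocytosis persists after 72 hours of antibiotic
    therapy -- the treatment-failure indicator discussed above."""
    return hours_on_antibiotics >= 72 and (febrile or leukocytosis)
```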
OPTIONS FOR IMAGING
Computed tomography
A previously healthy 44-year-old woman presents to the emergency department with 1 day of fever, flank pain, dysuria, and persistent nausea and vomiting. Her temperature is 38.7°C (101.7°F), heart rate 102 beats per minute, and blood pressure 120/70 mm Hg. She has costovertebral angle tenderness. Laboratory testing reveals mild leukocytosis and a normal serum creatinine level; urinalysis shows leukocytes, as well as leukocyte esterase and nitrites. She has no personal or family history of nephrolithiasis. Urine cultures are obtained, and she is started on intravenous antibiotics and intravenous hydration to treat pyelonephritis.
Is imaging indicated at this point? And if so, which study is recommended?
KEY FEATURES
Acute pyelonephritis, infection of the renal parenchyma and collecting system, most often results from an ascending infection of the lower urinary tract. It is estimated to account for 250,000 office visits and 200,000 hospital admissions each year in the United States.1
Lower urinary tract symptoms such as urinary frequency, urgency, and dysuria accompanied by fever, nausea, vomiting, and flank pain raise suspicion for acute pyelonephritis. Flank pain is a key, nearly universal feature of upper urinary tract infection in patients without diabetes, though it may be absent in up to 50% of patients with diabetes.2
Additional findings include costovertebral angle tenderness on physical examination and leukocytosis, pyuria, and bacteriuria on laboratory studies.
PREDICTING THE NEED FOR EARLY IMAGING
Though guidelines state that imaging is inappropriate in most patients with pyelonephritis,2–4 it is nevertheless often done for diagnosis or identification of complications, which have been reported in more than two-thirds of patients.2–4
Acute pyelonephritis is generally classified as complicated or uncomplicated, though different definitions exist with regard to these classifications. The American College of Radiology’s Appropriateness Criteria2 consider patients with diabetes, immune compromise, a history of urolithiasis, or anatomic abnormality to be at highest risk for complications, and therefore recommend early imaging to assess for hydronephrosis, pyonephrosis, emphysematous pyelonephritis, and intrinsic or perinephric abscess.2
A clinical rule for predicting the need for imaging in acute pyelonephritis was developed and validated in an emergency department population in the Netherlands.3 The study suggested that restricting early imaging to patients with a history of urolithiasis, a urine pH of 7.0 or higher, or renal insufficiency—defined as a glomerular filtration rate (GFR) of 40 mL/min/1.73 m2 or lower as estimated by the Modification of Diet in Renal Disease formula—would provide a negative predictive value of 94% to 100% for detection of an urgent urologic disorder (pyonephrosis, renal abscess, or urolithiasis). This high negative predictive value highlights that an absence of these signs and symptoms can safely identify patients who do not need renal imaging.
The positive predictive value was less useful, as only 5% to 23% of patients who had at least 1 risk factor went on to have an urgent urologic disorder.3
Implementation of this prediction rule would have resulted in a relative reduction in imaging of 40% and an absolute reduction of 28%. Of note, use of reduced GFR in this prediction rule is not clearly validated for patients with chronic kidney disease, as the previous GFR for most patients in this study was unknown.3
Based on these data, initial imaging is recommended in patients with diabetes, immune compromise, a history of urolithiasis, anatomic abnormality, a urine pH of 7.0 or higher, or a GFR of 40 mL/min/1.73 m2 or lower in a patient with no history of significant renal dysfunction. Early imaging would also be reasonable in patients with a complex clinical presentation, early recurrence of symptoms after treatment, clinical decompensation, or critical illness.
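The criteria above amount to a simple decision rule. As an illustrative sketch only (the function and parameter names are ours, not from the ACR criteria or the Dutch prediction rule; the thresholds are those quoted in the text):

```python
def needs_early_imaging(
    diabetes: bool,
    immunocompromised: bool,
    history_of_urolithiasis: bool,
    anatomic_abnormality: bool,
    urine_ph: float,
    egfr: float,  # estimated GFR, mL/min/1.73 m2
    known_renal_dysfunction: bool,
) -> bool:
    """Return True if any criterion for early imaging in acute pyelonephritis is met."""
    return (
        diabetes
        or immunocompromised
        or history_of_urolithiasis
        or anatomic_abnormality
        or urine_ph >= 7.0
        # A low GFR counts only when there is no known baseline renal dysfunction,
        # since the rule was not validated in chronic kidney disease.
        or (egfr <= 40 and not known_renal_dysfunction)
    )
```

Applied to the patient in the opening case (no risk factors, normal renal function), the rule returns False, consistent with withholding imaging.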
TREATMENT FAILURE
In a retrospective review of 62 patients hospitalized for acute renal infection, Soulen et al5 found that the most reliable indicator of complicated acute pyelonephritis was persistence of fever and leukocytosis at 72 hours. Another small prospective study of patients with uncomplicated pyelonephritis reported a time to defervescence of no more than 4 days.6
In accordance with the Appropriateness Criteria2 and based on the best available evidence, imaging is recommended in all patients who remain febrile or have persistent leukocytosis after 72 hours of antibiotic therapy. In such cases, there should be high suspicion for a complication requiring treatment.
OPTIONS FOR IMAGING
Computed tomography
Computed tomography (CT) of the abdomen and pelvis with contrast is considered the study of choice in complicated acute pyelonephritis. CT can detect focal parenchymal abnormalities, emphysematous changes, and anatomic anomalies, and can also define the extent of disease. It can also detect perinephric fluid collections and abscesses that necessitate a change in management.2,5
A retrospective study in 2017 found that contrast-enhanced CT done without the usual noncontrast and excretory phases had an accuracy of 90% to 92% for pyelonephritis and 96% to 99% for urolithiasis, suggesting that reduction in radiation exposure through use of only the contrast-enhanced phase of CT imaging may be reasonable.7
Magnetic resonance imaging
Magnetic resonance imaging (MRI) is increasingly acknowledged as effective in the evaluation of renal pathology, including the diagnosis of pyelonephritis; but it lacks the level of evidence that CT provides for detecting renal abscesses, calculi, and emphysematous pyelonephritis.2,8,9
Though it is more costly and time-consuming than CT with contrast enhancement, MRI is nevertheless the imaging study of choice if iodinated contrast or ionizing radiation must be avoided.
MRI typically involves a precontrast phase and a gadolinium contrast-enhanced phase, though there are data to support diffusion-weighted MRI when exposure to gadolinium poses a risk to the patient, such as in pregnancy or renal impairment (particularly when the estimated GFR is < 30 mL/min/1.73 m2).10
Ultrasonography
Conventional ultrasonography is appealing due to its relatively low cost, its availability and portability, and the lack of radiation and contrast exposure. It is most helpful in detecting hydronephrosis and pyonephrosis rather than intrarenal or perinephric abscess.2,9
Color and power Doppler ultrasonography may improve testing characteristics but not to the level of CT; in one study, sensitivity for detection of pyelonephritis was 33.3% with ultrasonography vs 81.0% with CT.11
Recent studies of ultrasonography with contrast enhancement show promising results,2 and it may ultimately prove to have a similar efficacy with lower risk for patients, but this has not been validated in large studies, and its availability remains limited.
Ultrasonography should be considered for patients in whom obstruction (with resulting hydronephrosis or pyonephrosis) is a primary concern, particularly when contrast exposure or radiation is contraindicated and MRI is unavailable.2
Abdominal radiography
While emphysematous pyelonephritis or a large staghorn calculus may be seen on abdominal radiography, it is not recommended for the assessment of complications in acute pyelonephritis because it lacks sensitivity.2
RETURN TO THE CASE SCENARIO
The patient in our case scenario meets the clinical criteria for uncomplicated pyelonephritis and is therefore not a candidate for imaging. Intravenous antibiotics should be started and should lead to rapid improvement in her condition.
Acknowledgment: The authors would like to thank Dr. Lisa Blacklock for her review of the radiology section of this paper.
- Foxman B, Klemstine KL, Brown PD. Acute pyelonephritis in US hospitals in 1997: hospitalization and in-hospital mortality. Ann Epidemiol 2003; 13(2):144–150. pmid:12559674
- Expert Panel on Urologic Imaging: Nikolaidis P, Dogra VS, Goldfarb S, et al. ACR appropriateness criteria acute pyelonephritis. J Am Coll Radiol 2018; 15(11S):S232–S239. doi:10.1016/j.jacr.2018.09.011
- van Nieuwkoop C, Hoppe BP, Bonten TN, et al. Predicting the need for radiologic imaging in adults with febrile urinary tract infection. Clin Infect Dis 2010; 51(11):1266–1272. doi:10.1086/657071
- Kim Y, Seo MR, Kim SJ, et al. Usefulness of blood cultures and radiologic imaging studies in the management of patients with community-acquired acute pyelonephritis. Infect Chemother 2017; 49(1):22–30. doi:10.3947/ic.2017.49.1.22
- Soulen MC, Fishman EK, Goldman SM, Gatewood OM. Bacterial renal infection: role of CT. Radiology 1989; 171(3):703–707. doi:10.1148/radiology.171.3.2655002
- June CH, Browning MD, Smith LP, et al. Ultrasonography and computed tomography in severe urinary tract infection. Arch Intern Med 1985; 145(5):841–845. pmid:3888134
- Taniguchi LS, Torres US, Souza SM, Torres LR, D’Ippolito G. Are the unenhanced and excretory CT phases necessary for the evaluation of acute pyelonephritis? Acta Radiol 2017; 58(5):634–640. doi:10.1177/0284185116665424
- Rathod SB, Kumbhar SS, Nanivadekar A, Aman K. Role of diffusion-weighted MRI in acute pyelonephritis: a prospective study. Acta Radiol 2015; 56(2):244–249. doi:10.1177/0284185114520862
- Stunell H, Buckley O, Feeney J, Geoghegan T, Browne RF, Torreggiani WC. Imaging of acute pyelonephritis in the adult. Eur Radiol 2007; 17(7):1820–1828.
- American College of Radiology. ACR Manual on Contrast Media. www.acr.org/clinical-resources/contrast-manual. Accessed June 19, 2019.
- Yoo JM, Koh JS, Han CH, et al. Diagnosing acute pyelonephritis with CT, Tc-DMSA SPECT, and Doppler ultrasound: a comparative study. Korean J Urol 2010; 51(4):260–265. doi:10.4111/kju.2010.51.4.260
Do probiotics reduce C diff risk in hospitalized patients?
ILLUSTRATIVE CASE
A 68-year-old woman is admitted to the hospital with a diagnosis of community-acquired pneumonia. Should you add probiotics to her antibiotic regimen to prevent infection with Clostridium difficile?
Clostridium difficile infection (CDI) causes significant morbidity, mortality, and treatment failures. In 2011, it accounted for an estimated $4.8 billion in costs and 29,000 deaths.2,3 Risk factors for infection include antibiotic use, hospitalization, older age, and medical comorbidities.2 Probiotics have been proposed as one way to prevent CDI.
While several systematic reviews have demonstrated efficacy for probiotics in the prevention of CDI,4-6 guidelines from the American College of Gastroenterology and the Society for Healthcare Epidemiology of America did not incorporate a recommendation for the use of probiotics in their CDI prevention strategy.7,8
The PLACIDE trial studied the use of probiotics in inpatients ages ≥ 65 years receiving either oral or parenteral antibiotics and found no difference in the incidence of CDI between those who received probiotics and those who did not.9 Although PLACIDE was the largest high-quality randomized controlled trial (RCT) of probiotics to prevent CDI, the observed incidence of CDI was lower than assumed in its power calculations. Additionally, previous systematic reviews did not always follow the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines and did not focus specifically on hospitalized patients, who are at higher risk for CDI.
Given these conflicting findings and recommendations, an additional systematic review and meta-analysis was performed following PRISMA guidelines and focusing only on studies of hospitalized adults.
STUDY SUMMARY
Probiotics prevent CDI in hospitalized patients receiving antibiotics
This meta-analysis of 19 RCTs evaluated the efficacy of probiotics for the prevention of CDI in 6261 adult hospitalized patients taking antibiotics. All patients were ≥ 18 years (mean age 68-69 years) and received antibiotics orally, intravenously, or via both routes for any medical indication.
Trials were included if the intervention was for CDI prevention and if the probiotics used comprised 1 or a combination of 4 genera (Lactobacillus, Saccharomyces, Bifidobacterium, Streptococcus). Probiotic doses ranged from 4 billion to 900 billion colony-forming units (CFU) per day and were started 1 to 7 days after the first antibiotic dose. Duration of probiotic use was either fixed at 14 to 21 days or varied with the duration of antibiotics (extending 3-14 days after the last antibiotic dose).
Control groups received matching placebo in all trials but 2; those 2 used usual care of no probiotics as the control. Common patient exclusions were pregnancy, immune system compromise, intensive care, a prosthetic heart valve, and pre-existing gastrointestinal disorders.
The risk for CDI was lower in the probiotic group (range 0%-11%) than in the control group (0%-40%) with no heterogeneity (I2 = 0.0%; P = .56) when the data were pooled from all 19 studies (relative risk [RR] = 0.42; 95% confidence interval [CI], 0.30-0.57). The median incidence of CDI in the control groups from all studies was 4%, which yielded a number needed to treat (NNT) of 43 (95% CI, 36-58).
The researchers examined the NNT at varying incidence rates. If the incidence of CDI was 1.2%, the NNT to prevent 1 case of CDI was 144, and if the incidence was 7.4%, the NNT was 23. Compared with control groups, there was a significant reduction in CDI if probiotics were started within 1 to 2 days of antibiotic initiation (RR = 0.32; 95% CI, 0.22-0.48), but not if they were started at 3 to 7 days (RR = 0.70; 95% CI, 0.40-1.2). There was no significant difference in adverse events (ie, cramping, nausea, fever, soft stools, flatulence, taste disturbance) between probiotic and control groups (14% vs 16%; P = .35).
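The NNT arithmetic above can be checked directly: NNT = 1/ARR, where the absolute risk reduction is the baseline (control-group) incidence multiplied by (1 − RR). A minimal sketch using the pooled RR of 0.42 and the incidences quoted in the text (the helper name `nnt` is ours):

```python
def nnt(baseline_risk: float, relative_risk: float) -> int:
    """Number needed to treat, from baseline risk and relative risk.

    ARR = baseline_risk * (1 - relative_risk); NNT = 1 / ARR,
    rounded to the nearest whole patient as in the article.
    """
    absolute_risk_reduction = baseline_risk * (1 - relative_risk)
    return round(1 / absolute_risk_reduction)

# Median control-group incidence of 4% and pooled RR of 0.42:
print(nnt(0.04, 0.42))   # 43
# At the lower (1.2%) and higher (7.4%) incidences examined:
print(nnt(0.012, 0.42))  # 144
print(nnt(0.074, 0.42))  # 23
```

These reproduce the NNTs of 43, 144, and 23 reported above, showing how strongly the NNT depends on the assumed baseline incidence of CDI.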
WHAT’S NEW
Probiotics provide added benefit if taken sooner rather than later
This high-quality meta-analysis shows that administration of probiotics to hospitalized patients—particularly when started within 1 to 2 days of initiating antibiotic therapy—can prevent CDI.
CAVEATS
Findings do not apply to all patients; specific recommendations are lacking
Findings from this meta-analysis do not apply to patients who have an immunocompromising condition, are pregnant, have a prosthetic heart valve, have a pre-existing gastrointestinal disorder (eg, irritable bowel disease, pancreatitis), or require intensive care. In addition, specific recommendations as to the optimal probiotic species, dose, formulation, and duration of use cannot be made based on this meta-analysis. Lastly, findings from this study do not apply to patients treated with antibiotics in the ambulatory care setting.
CHALLENGES TO IMPLEMENTATION
Lack of “medication” status leads to limited availability in hospitals
The largest barrier to giving probiotics to hospitalized adult patients is their availability on local hospital formularies. Probiotics are not technically a medication: they are not regulated or approved by the US Food and Drug Administration, and thus insurance coverage and availability for inpatient use are limited. Lastly, US cost-effectiveness data are lacking, although such data would likely be favorable given the high costs associated with treatment of CDI.
ACKNOWLEDGMENT
The PURLs Surveillance System was supported in part by Grant Number UL1RR024999 from the National Center For Research Resources, a Clinical Translational Science Award to the University of Chicago. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Center For Research Resources or the National Institutes of Health.
1. Shen NT, Maw A, Tmanova LL, et al. Timely use of probiotics in hospitalized adults prevents Clostridium difficile infection: a systematic review with meta-regression analysis. Gastroenterology. 2017;152:1889-1900.e9.
2. Evans CT, Safdar N. Current trends in the epidemiology and outcomes of Clostridium difficile infection. Clin Infect Dis. 2015;60(Suppl 2):S66-S71.
3. Lessa FC, Winston LG, McDonald LC, et al. Burden of Clostridium difficile infection in the United States. N Engl J Med. 2015;372:2369-2370.
4. Goldenberg JZ, Yap C, Lytvyn L. Probiotics for the prevention of Clostridium difficile-associated diarrhea in adults and children. Cochrane Database Syst Rev. 2017;12:CD006095.
5. Lau CS, Chamberlain RS. Probiotics are effective at preventing Clostridium difficile–associated diarrhea: a systematic review and meta-analysis. Int J Gen Med. 2016:22:27-37.
6. Johnston BC, Goldenberg JZ, Guyatt GH. Probiotics for the prevention of Clostridium difficile–associated diarrhea. In response. Ann Intern Med. 2013;158:706-707.
7. Surawicz CM, Brandt LJ, Binion DG, et al. Guidelines for diagnosis, treatment, and prevention of Clostridium difficile infections. Am J Gastroenterol. 2013;108:478-498.
8. Cohen SH, Gerding DN, Johnson S, et al. Clinical practice guidelines for Clostridium difficile infection in adults: 2010 update by the Society for Healthcare Epidemiology of America (SHEA) and the Infectious Diseases Society of America (IDSA). Infect Control Hosp Epidemiol. 2010;31:431-455.
9. Allen SJ, Wareham K, Wang D, et al. Lactobacilli and bifidobacteria in the prevention of antibiotic-associated diarrhoea and Clostridium difficile diarrhoea in older inpatients (PLACIDE): a randomised, double-blind, placebo-controlled, multicentre trial. Lancet. 2013;382:1249-1257.
ILLUSTRATIVE CASE
A 68-year-old woman is admitted to the hospital with a diagnosis of community-acquired pneumonia. Should you add probiotics to her antibiotic regimen to prevent infection with Clostridium difficile?
Clostridium difficile infection (CDI) leads to significant morbidity, mortality, and treatment failures. In 2011, it culminated in a cost of $4.8 billion and 29,000 deaths.2,3 Risk factors for infection include antibiotic use, hospitalization, older age, and medical comorbidities.2 Probiotics have been proposed as one way to prevent CDI.
While several systematic reviews have demonstrated efficacy for probiotics in the prevention of CDI,4-6 guidelines from the American College of Gastroenterology and the Society for Healthcare Epidemiology of America did not incorporate a recommendation for the use of probiotics in their CDI prevention strategy.7,8
The PLACIDE trial studied the use of probiotics in inpatients ages ≥ 65 years receiving either oral or parenteral antibiotics and found no difference in the incidence of CDI in those who received probiotics vs those who did not.9 Even though the PLACIDE trial was the largest, high-quality, randomized controlled trial (RCT) on the use of probiotics to prevent CDI, it had a lower incidence of CDI than was assumed in the power calculations. Additionally, previous systematic reviews did not always follow the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, and did not focus specifically on hospitalized patients, who are at higher risk for CDI.
Given the conflicting evidence and recommendations, an additional systematic review and meta-analysis was performed following PRISMA guidelines and focusing only on studies of hospitalized adults.
STUDY SUMMARY
Probiotics prevent CDI in hospitalized patients receiving antibiotics
This meta-analysis of 19 RCTs evaluated the efficacy of probiotics for the prevention of CDI in 6261 adult hospitalized patients taking antibiotics. All patients were ≥ 18 years (mean age 68-69 years) and received antibiotics orally, intravenously, or via both routes for any medical indication.
Trials were included if the intervention was for CDI prevention and if the probiotics used were 1, or a combination, of 4 genera (Lactobacillus, Saccharomyces, Bifidobacterium, Streptococcus). Probiotic doses ranged from 4 billion to 900 billion colony-forming units (CFU)/day and were started 1 to 7 days after the first antibiotic dose. Duration of probiotic use was either fixed at between 14 and 21 days or varied based on the duration of antibiotics (extending 3-14 days after the last antibiotic dose).
Control groups received matching placebo in all trials but 2; those 2 used usual care of no probiotics as the control. Common patient exclusions were pregnancy, immune system compromise, intensive care, a prosthetic heart valve, and pre-existing gastrointestinal disorders.
The risk for CDI was lower in the probiotic group (range 0%-11%) than in the control group (0%-40%) with no heterogeneity (I² = 0.0%; P = .56) when the data were pooled from all 19 studies (relative risk [RR] = 0.42; 95% confidence interval [CI], 0.30-0.57). The median incidence of CDI in the control groups from all studies was 4%, which yielded a number needed to treat (NNT) of 43 (95% CI, 36-58).
The researchers examined the NNT at varying incidence rates. If the incidence of CDI was 1.2%, the NNT to prevent 1 case of CDI was 144, and if the incidence was 7.4%, the NNT was 23. Compared with control groups, there was a significant reduction in CDI if probiotics were started within 1 to 2 days of antibiotic initiation (RR = 0.32; 95% CI, 0.22-0.48), but not if they were started at 3 to 7 days (RR = 0.70; 95% CI, 0.40-1.2). There was no significant difference in adverse events (ie, cramping, nausea, fever, soft stools, flatulence, taste disturbance) between probiotic and control groups (14% vs 16%; P = .35).
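The NNT figures above follow directly from the pooled relative risk and the baseline (control-group) incidence: NNT = 1 / absolute risk reduction, where the absolute risk reduction is baseline incidence × (1 − RR). A minimal sketch of that arithmetic, assuming the pooled RR of 0.42 applies uniformly across baseline incidences:

```python
# NNT from a pooled relative risk and a baseline (control-group) incidence.
# NNT = 1 / ARR, where ARR = baseline_incidence * (1 - RR).

def nnt(baseline_incidence: float, rr: float = 0.42) -> int:
    """Number needed to treat, rounded to the nearest whole patient."""
    arr = baseline_incidence * (1 - rr)  # absolute risk reduction
    return round(1 / arr)

# Figures reported in the meta-analysis:
print(nnt(0.040))            # median control-group incidence of 4%  -> 43
print(nnt(0.012))            # low incidence of 1.2%                 -> 144
print(nnt(0.074))            # high incidence of 7.4%                -> 23

# The NNT confidence interval comes from the CI bounds on the RR:
print(nnt(0.040, rr=0.30))   # lower RR bound -> NNT 36
print(nnt(0.040, rr=0.57))   # upper RR bound -> NNT 58
```

These reproduce the published values, which is a useful sanity check that the NNT range reflects only the RR confidence interval applied to the 4% median incidence.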
WHAT’S NEW
Probiotics provide added benefit if taken sooner rather than later
This high-quality meta-analysis shows that administration of probiotics to hospitalized patients—particularly when started within 1 to 2 days of initiating antibiotic therapy—can prevent CDI.
CAVEATS
Findings do not apply to all patients; specific recommendations are lacking
Findings from this meta-analysis do not apply to patients who have an immunocompromising condition, are pregnant, have a prosthetic heart valve, have a pre-existing gastrointestinal disorder (eg, inflammatory bowel disease, pancreatitis), or require intensive care. In addition, specific recommendations as to the optimal probiotic species, dose, formulation, and duration of use cannot be made based on this meta-analysis. Lastly, findings from this study do not apply to patients treated with antibiotics in the ambulatory care setting.
CHALLENGES TO IMPLEMENTATION
Lack of “medication” status leads to limited availability in hospitals
The largest barrier to giving probiotics to hospitalized adult patients is their limited availability on local hospital formularies. Probiotics are not technically a medication; they are not regulated or approved by the US Food and Drug Administration, so insurance coverage and availability for inpatient use are limited. Lastly, US cost-effectiveness data are lacking, although such data would likely be favorable given the high costs associated with treatment of CDI.
ACKNOWLEDGMENT
The PURLs Surveillance System was supported in part by Grant Number UL1RR024999 from the National Center For Research Resources, a Clinical Translational Science Award to the University of Chicago. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Center For Research Resources or the National Institutes of Health.
1. Shen NT, Maw A, Tmanova LL, et al. Timely use of probiotics in hospitalized adults prevents Clostridium difficile infection: a systematic review with meta-regression analysis. Gastroenterology. 2017;152:1889-1900.e9.
2. Evans CT, Safdar N. Current trends in the epidemiology and outcomes of Clostridium difficile infection. Clin Infect Dis. 2015;60(Suppl 2):S66-S71.
3. Lessa FC, Winston LG, McDonald LC, et al. Burden of Clostridium difficile infection in the United States. N Engl J Med. 2015;372:2369-2370.
4. Goldenberg JZ, Yap C, Lytvyn L. Probiotics for the prevention of Clostridium difficile-associated diarrhea in adults and children. Cochrane Database Syst Rev. 2017;12:CD006095.
5. Lau CS, Chamberlain RS. Probiotics are effective at preventing Clostridium difficile–associated diarrhea: a systematic review and meta-analysis. Int J Gen Med. 2016;9:27-37.
6. Johnston BC, Goldenberg JZ, Guyatt GH. Probiotics for the prevention of Clostridium difficile–associated diarrhea. In response. Ann Intern Med. 2013;158:706-707.
7. Surawicz CM, Brandt LJ, Binion DG, et al. Guidelines for diagnosis, treatment, and prevention of Clostridium difficile infections. Am J Gastroenterol. 2013;108:478-498.
8. Cohen SH, Gerding DN, Johnson S, et al. Clinical practice guidelines for Clostridium difficile infection in adults: 2010 update by the Society for Healthcare Epidemiology of America (SHEA) and the Infectious Diseases Society of America (IDSA). Infect Control Hosp Epidemiol. 2010;31:431-455.
9. Allen SJ, Wareham K, Wang D, et al. Lactobacilli and bifidobacteria in the prevention of antibiotic-associated diarrhoea and Clostridium difficile diarrhoea in older inpatients (PLACIDE): a randomised, double-blind, placebo-controlled, multicentre trial. Lancet. 2013;382:1249-1257.
PRACTICE CHANGER
Start probiotics within 1 to 2 days of starting antibiotics in hospitalized patients to reduce the risk of Clostridium difficile infection.1
STRENGTH OF RECOMMENDATION
A: Based on a meta-analysis of randomized controlled trials.
Shen NT, Maw A, Tmanova LL, et al. Timely use of probiotics in hospitalized adults prevents Clostridium difficile infection: a systematic review with meta-regression analysis. Gastroenterology. 2017;152:1889-1900.e9.
Facts to help you keep pace with the vaccine conversation
The current increase in measles cases in the United States has sharpened the focus on antivaccine activities. While the percentage of US children who are fully vaccinated remains high (≥ 94%), the number of un- or undervaccinated children has been growing1 because of nonmedical exemptions from school vaccine requirements, driven by concerns about vaccine safety and an underappreciation of the benefits of vaccines. Family physicians need to be conversant with several important aspects of this matter, including the magnitude of benefits provided by childhood vaccines, as well as the systems already in place for
- assessing vaccine effectiveness and safety,
- making recommendations on the use of vaccines,
- monitoring safety after vaccine approval, and
- compensating those affected by rare but serious vaccine-related adverse events (AEs).
Familiarity with these issues will allow for informed discussions with parents who are vaccine hesitant and with those who have read or heard inaccurate information.
The benefits of vaccines are indisputable
In 1999, the Centers for Disease Control and Prevention (CDC) published a list of 9 selected childhood infectious diseases and compared their incidences before and after immunization was available.2 Each of these infections causes morbidity, sequelae, and mortality at predictable rates depending on the infectious agent. The comparisons were dramatic: Measles, with a baseline annual morbidity of 503,282 cases, fell to just 89 cases; poliomyelitis decreased from 16,316 to 0; and Haemophilus influenzae type b declined from 20,000 to 54. In a 2014 analysis, the CDC stated that “among 78.6 million children born during 1994–2013, routine childhood immunization was estimated to prevent 322 million illnesses (averaging 4.1 illnesses per child) and 21 million hospitalizations (0.27 per child) over the course of their lifetimes and avert 732,000 premature deaths from vaccine-preventable illnesses” (TABLE).3
It is not unusual to hear a vaccine opponent say that childhood infectious diseases are not serious and that it is better for a child to contract the infection and let the immune system fight it naturally. Measles is often used as an example. This argument ignores some important aspects of vaccine benefits.
It is true that, in the United States, the average child who contracts measles will recover and not suffer immediate or long-term effects. However, it is also true that measles has a hospitalization rate of about 20% and a death rate of between 1 in 1000 and 1 in 500 cases.4 Mortality is much higher in developing countries. Prior to widespread use of measles vaccine, hundreds of thousands of cases of measles occurred each year. That translated into hundreds of preventable child deaths per year. An individual case does not tell the full story about the public health impact of infectious illnesses.
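The scale of those prevented deaths is straightforward arithmetic: applying the cited case-fatality range to the pre-vaccine caseload. A sketch, using the 503,282 annual baseline cases quoted by the CDC earlier in this article:

```python
# Expected annual measles deaths before vaccination, from the cited
# case-fatality range of 1 in 1000 to 1 in 500 cases.
# Assumption: the CDC baseline figure of 503,282 cases/year quoted above.
cases_per_year = 503_282

low  = cases_per_year / 1000   # 1 death per 1000 cases
high = cases_per_year / 500    # 1 death per 500 cases

print(round(low), round(high))  # -> 503 1007
```

That range of roughly 500 to 1000 deaths per year is the basis for the statement that prevention translated into hundreds of avoided child deaths annually.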
In addition, there are often unappreciated sequelae from child infections, such as shingles occurring years after resolution of a chickenpox infection. There are also societal consequences of child infections, such as deafness from congenital rubella and intergenerational transfer of infectious agents to family members at risk for serious consequences (influenza from a child to a grandparent). Finally, infected children pose a risk to those who cannot be vaccinated because of immune deficiencies and other medical conditions.
A multilayered US system monitors vaccine safety
Responsibility for assuring the safety of vaccines lies with the US Food and Drug Administration (FDA) Center for Biologics Evaluation and Research and with the CDC’s Immunization Safety Office (ISO). The FDA is responsible for the initial assessment of the effectiveness and safety of new vaccines and for ongoing monitoring of the manufacturing facilities where vaccines are produced. After FDA approval, safety is monitored using a multilayered system that includes the Vaccine Adverse Event Reporting System (VAERS), the Vaccine Safety Datalink (VSD) system, the Clinical Immunization Safety Assessment (CISA) Project, and periodic reviews by the National Academy of Medicine (NAM), previously the Institute of Medicine. In addition, the nation’s—and world’s—medical research community publishes a large number of studies on vaccine effectiveness and safety each year.
VAERS (https://vaers.hhs.gov/) is a passive reporting system that allows patients, physicians, and other health care providers to record suspected vaccine-related adverse events.5 It was created in 1990 and is run by the FDA and the CDC. It is not intended to be a comprehensive or definitive list of proven vaccine-related harms. As a passive reporting system, it is subject to both over- and underreporting, and the data from it are often misinterpreted and used incorrectly by vaccine opponents—eg, wrongly declaring that VAERS reports of possible AEs are proven cases. It provides a sentinel system that is monitored for indications of possible serious AEs linked to a particular vaccine. When a suspected interaction is detected, it is investigated by the VSD system.
VSD is a collaboration of the CDC’s ISO and 8 geographically distributed health care organizations with complete electronic patient medical information on their members. VSD conducts studies when a question about vaccine safety arises, when new vaccines are licensed, or when there are new vaccine recommendations. A description of VSD sites, the research methods used, and a list of publications describing study results can be found at https://www.cdc.gov/vaccinesafety/ensuringsafety/monitoring/vsd/index.html#organizations. If the VSD system finds a link between serious AEs and a particular vaccine, this association is reported to the Advisory Committee on Immunization Practices (ACIP) for consideration in changing recommendations regarding that vaccine. This happens only rarely.
CISA was established in 2001 as a network of vaccine safety experts at 7 academic medical centers who collaborate with the CDC’s ISO. CISA conducts studies on specific questions related to vaccine safety and provides a consultation service to clinicians and researchers who have questions about vaccine safety. A description of the CISA sites, past publications on vaccine safety, and ongoing research priorities can be found at https://www.cdc.gov/vaccinesafety/ensuringsafety/monitoring/cisa/index.html.
NAM (https://nam.edu/) conducts periodic reviews of vaccine safety and vaccine-caused AEs. The most recent was published in 2012 and looked at possible AEs of 8 vaccines containing 12 different antigens.6 The literature search for this review found more than 12,000 articles, which speaks to the volume of scientific work on vaccine safety. These NAM reports document the rarity of severe AEs to vaccines and are used with other information to construct the table for the Vaccine Injury Compensation Program (VICP), which is described below.
Are vaccines killing children?
Vaccine opponents frequently claim that vaccines cause much more harm than is documented, including the deaths of children. A vaccine opponent made this claim in my state (Arizona) at a legislative committee hearing even though our state child mortality review committee has been investigating all child deaths for decades and has never attributed a death to a vaccine.
One study conducted using the VSD system from January 1, 2005, to December 31, 2011, identified 1100 deaths occurring within 12 months of any vaccination among 2,189,504 VSD enrollees ages 9 to 26 years.7 The investigators found that the risk of death in this age group was not increased during the 30 days after vaccination, and no deaths were found to be causally associated with vaccination. Deaths among children do occur and, given the number of vaccines administered, some deaths will occur within a short time period after a vaccine. This temporal association does not prove the death was vaccine-caused, but vaccine opponents have claimed that it does.
The vaccine injury compensation system
In 1986, the federal government established a no-fault system—the National Vaccine Injury Compensation Program (VICP)—to compensate those who suffer a serious AE from a vaccine covered by the program. This system is administered by the Health Resources and Services Administration (HRSA) in the Department of Health and Human Services (DHHS). HRSA maintains a table of proven AEs of specific vaccines, based in part on the NAM report mentioned earlier. Petitions for compensation—with proof of an AE following the administration of a vaccine that is included on the HRSA table—are accepted and remunerated if the AE lasted > 6 months or resulted in hospitalization. Petitions that allege AEs following administration of a vaccine not included on the table are nevertheless reviewed by the staff of HRSA, who can still recommend compensation based on the medical evidence. If HRSA declines the petition, the petitioner can appeal the case in the US Court of Federal Claims, which makes the final decision on a petition’s validity and, if warranted, the type and amount of compensation.
From 2006 to 2017, > 3.4 billion doses of vaccines covered by VICP were distributed in the United States.8 During this period, 6293 petitions were adjudicated by the court; 4311 were compensated.8 For every 1 million doses of vaccine distributed, 1 individual was compensated. Seventy percent of these compensations were awarded to petitioners despite a lack of clear evidence that the patient’s condition was caused by a vaccine.8 The rate of compensation for conditions proven to be caused by a vaccine was 1/3.33 million.8
The VICP pays for attorney fees, in some cases even if the petition is denied, but does not allow contingency fees. Since the beginning of the program, more than $4 billion has been awarded.8 The program is funded by a 75-cent tax on each vaccine antigen. Because serious AEs are so rare, the trust fund established to administer the VICP finances has a surplus of about $6 billion.
The Advisory Committee on Immunization Practices
After a vaccine is approved for use by the FDA, ACIP makes recommendations for its use in the US civilian population.9,10 ACIP, created in 1964, was chartered as a federal advisory committee to provide expert external advice to the Director of the CDC and the Secretary of DHHS on the use of vaccines.
As an official federal advisory committee governed by the Federal Advisory Committee Act, ACIP operates under strict requirements for public notification of meetings, allowing for written and oral public comment at its meetings, and timely publication of minutes. ACIP meeting minutes are posted soon after each meeting, along with draft recommendations. ACIP meeting agendas and slide presentations are available on the ACIP Web site (https://www.cdc.gov/vaccines/acip/index.html).
ACIP consists of 15 members serving overlapping 4-year terms, appointed by the Secretary of DHHS from a list of candidates proposed by the CDC. One member is a consumer representative; the other members have expertise in vaccinology, immunology, pediatrics, internal medicine, infectious diseases, preventive medicine, and public health. In the CDC, staff support for ACIP is provided by the National Center for Immunization and Respiratory Diseases, Office of Infectious Diseases.
ACIP holds 2-day meetings 3 times a year. Much of the work occurs between meetings, by work groups via phone conferences. Work groups are chaired by an ACIP member and staffed by one or more CDC programmatic, content-expert professionals. Membership of the work groups consists of at least 2 ACIP members, representatives from relevant professional clinical and public health organizations, and other individuals with specific expertise. Work groups propose recommendations to ACIP, which can adopt, revise, or reject them.
When formulating recommendations for a particular vaccine, ACIP considers the burden of disease prevented, the effectiveness and safety of the vaccine, cost effectiveness, and practical and logistical issues of implementing recommendations. ACIP also receives frequent reports from ISO regarding the safety of vaccines previously approved. Since 2011, ACIP has used a standardized, modified GRADE (Grading of Recommendations, Assessment, Development, and Evaluation) system to assess the evidence regarding effectiveness and safety of new vaccines and an evidence-to-recommendation framework to transparently explain how it arrives at recommendations.11,12
We can recommend vaccines with confidence
In the United States, we have a secure supply of safe vaccines, a transparent method of making vaccine recommendations, a robust system to monitor vaccine safety, and an efficient system to compensate those who experience a rare, serious adverse reaction to a vaccine. The US public health system has achieved a marked reduction in morbidity and mortality from childhood infectious diseases, mostly because of vaccines. Many people today have not experienced or seen children with these once-common childhood infections and may not appreciate the seriousness of childhood infectious diseases or the full value of vaccines. As family physicians, we can help address this problem and recommend vaccines to our patients with confidence.
1. Mellerson JL, Maxwell CB, Knighton CL, et al. Vaccine coverage for selected vaccines and exemption rates among children in kindergarten—United States, 2017-18 school year. MMWR Morb Mortal Wkly Rep. 2018;67:1115-1122.
2. CDC. Ten great public health achievements—United States, 1900-1999. MMWR Morb Mortal Wkly Rep. 1999;48:241-243.
3. Whitney CG, Zhou F, Singleton J, et al. Benefits from immunization during the Vaccines for Children Program era—United States, 1994-2013. MMWR Morb Mortal Wkly Rep. 2014;63:352-355.
4. CDC. Complications of measles. https://www.cdc.gov/measles/symptoms/complications.html. Accessed July 16, 2019.
5. Shimabukuro TT, Nguyen M, Martin D, et al. Safety monitoring in the Vaccine Adverse Event Reporting System (VAERS). Vaccine. 2015;33:4398-4405.
6. IOM (Institute of Medicine). Adverse Effects of Vaccines: Evidence and Causality. Washington, DC: The National Academies Press; 2012.
7. McCarthy NL, Gee J, Sukumaran L, et al. Vaccination and 30-day mortality risk in children, adolescents, and young adults. Pediatrics. 2016;137:1-8.
8. HRSA. Data and Statistics. https://www.hrsa.gov/sites/default/files/hrsa/vaccine-compensation/data/monthly-stats-may-2019.pdf. Accessed July 16, 2019.
9. Pickering LK, Orenstein WA, Sun W, et al. FDA licensure of and ACIP recommendations for vaccines. Vaccine. 2017;37:5027-5036.
10. Smith JC, Snider DE, Pickering LK. Immunization policy development in the United States: the role of the Advisory Committee on Immunization Practices. Ann Intern Med. 2009;150:45-49.
11. Ahmed F, Temte JL, Campos-Outcalt D, et al; for the ACIP Evidence Based Recommendations Work Group (EBRWG). Methods for developing evidence-based recommendations by the Advisory Committee on Immunization Practices (ACIP) of the U.S. Centers for Disease Control and Prevention (CDC). Vaccine. 2011;29:9171-9176.
12. Lee G, Carr W. Updated framework for development of evidence-based recommendations by the Advisory Committee on Immunization Practices. MMWR Morb Mortal Wkly Rep. 2018;67:1271-1272.
The current increase in measles cases in the United States has sharpened the focus on antivaccine activities. While the percentage of US children who are fully vaccinated remains high (≥ 94%), the number of un- or undervaccinated children has been growing1 because of nonmedical exemptions from school vaccine requirements due to concerns about vaccine safety and an underappreciation of the benefits of vaccines. Family physicians need to be conversant with several important aspects of this matter, including the magnitude of benefits provided by childhood vaccines, as well as the systems already in place for
- assessing vaccine effectiveness and safety,
- making recommendations on the use of vaccines,
- monitoring safety after vaccine approval, and
- compensating those affected by rare but serious vaccine-related adverse events (AEs).
Familiarity with these issues will allow for informed discussions with parents who are vaccine hesitant and with those who have read or heard inaccurate information.
The benefits of vaccines are indisputable
In 1999, the Centers for Disease Control and Prevention (CDC) published a list of 9 selected childhood infectious diseases and compared their incidences before and after immunization was available.2 Each of these infections causes morbidity, sequelae, and mortality at predictable rates depending on the infectious agent. The comparisons were dramatic: Measles, with a baseline annual morbidity of 503,282 cases, fell to just 89 cases; poliomyelitis decreased from 16,316 to 0; and Haemophilus influenzae type b declined from 20,000 to 54. In a 2014 analysis, the CDC stated that “among 78.6 million children born during 1994–2013, routine childhood immunization was estimated to prevent 322 million illnesses (averaging 4.1 illnesses per child) and 21 million hospitalizations (0.27 per child) over the course of their lifetimes and avert 732,000 premature deaths from vaccine-preventable illnesses” (TABLE).3
It is not unusual to hear a vaccine opponent say that childhood infectious diseases are not serious and that it is better for a child to contract the infection and let the immune system fight it naturally. Measles is often used as an example. This argument ignores some important aspects of vaccine benefits.
It is true in the United States that the average child who contracts measles will recover from it and not suffer immediate or long-term effects. However, it is also true that measles has a hospitalization rate of about 20% and a death rate of between 1/500 and 1/1000 cases.4 Mortality is much higher in developing countries. Prior to widespread use of measles vaccine, hundreds of thousands of cases of measles occurred each year. That translated into hundreds of preventable child deaths per year. An individual case does not tell the full story about the public health impact of infectious illnesses.
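The scale of this arithmetic can be checked directly. A minimal sketch, using the case count and death-rate range cited above (the calculation itself is purely illustrative):

```python
# Pre-vaccine US measles burden cited above: ~503,282 cases per year,
# with a death rate between 1/1000 and 1/500 cases.
annual_cases = 503_282

deaths_low = annual_cases / 1000   # at 1 death per 1000 cases
deaths_high = annual_cases / 500   # at 1 death per 500 cases

# Both bounds land in the hundreds of deaths per year,
# consistent with "hundreds of preventable child deaths per year."
print(f"Expected deaths per year: {deaths_low:.0f} to {deaths_high:.0f}")
```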
In addition, there are often unappreciated sequelae from child infections, such as shingles occurring years after resolution of a chickenpox infection. There are also societal consequences of child infections, such as deafness from congenital rubella and intergenerational transfer of infectious agents to family members at risk for serious consequences (influenza from a child to a grandparent). Finally, infected children pose a risk to those who cannot be vaccinated because of immune deficiencies and other medical conditions.
A multilayered US system monitors vaccine safety
Responsibility for assuring the safety of vaccines lies with the US Food and Drug Administration (FDA) Center for Biologics Evaluation and Research and with the CDC’s Immunization Safety Office (ISO). The FDA is responsible for the initial assessment of the effectiveness and safety of new vaccines and for ongoing monitoring of the manufacturing facilities where vaccines are produced. After FDA approval, safety is monitored using a multilayered system that includes the Vaccine Adverse Event Reporting System (VAERS), the Vaccine Safety Datalink (VSD) system, the Clinical Immunization Safety Assessment (CISA) Project, and periodic reviews by the National Academy of Medicine (NAM), previously the Institute of Medicine. In addition, there is a large number of studies published each year by the nation’s—and world’s—medical research community on vaccine effectiveness and safety.
VAERS (https://vaers.hhs.gov/) is a passive reporting system that allows patients, physicians, and other health care providers to record suspected vaccine-related adverse events.5 It was created in 1990 and is run by the FDA and the CDC. It is not intended to be a comprehensive or definitive list of proven vaccine-related harms. As a passive reporting system, it is subject to both over- and underreporting, and the data from it are often misinterpreted and used incorrectly by vaccine opponents—eg, wrongly declaring that VAERS reports of possible AEs are proven cases. It provides a sentinel system that is monitored for indications of possible serious AEs linked to a particular vaccine. When a suspected association is detected, it is investigated by the VSD system.
VSD is a collaboration of the CDC’s ISO and 8 geographically distributed health care organizations with complete electronic patient medical information on their members. VSD conducts studies when a question about vaccine safety arises, when new vaccines are licensed, or when there are new vaccine recommendations. A description of VSD sites, the research methods used, and a list of publications describing study results can be found at https://www.cdc.gov/vaccinesafety/ensuringsafety/monitoring/vsd/index.html#organizations. If the VSD system finds a link between serious AEs and a particular vaccine, this association is reported to the Advisory Committee on Immunization Practices (ACIP) for consideration in changing recommendations regarding that vaccine. This happens only rarely.
CISA was established in 2001 as a network of vaccine safety experts at 7 academic medical centers who collaborate with the CDC’s ISO. CISA conducts studies on specific questions related to vaccine safety and provides a consultation service to clinicians and researchers who have questions about vaccine safety. A description of the CISA sites, past publications on vaccine safety, and ongoing research priorities can be found at https://www.cdc.gov/vaccinesafety/ensuringsafety/monitoring/cisa/index.html.
NAM (https://nam.edu/) conducts periodic reviews of vaccine safety and vaccine-caused AEs. The most recent was published in 2012 and looked at possible AEs of 8 vaccines containing 12 different antigens.6 The literature search for this review found more than 12,000 articles, which speaks to the volume of scientific work on vaccine safety. These NAM reports document the rarity of severe AEs to vaccines and are used with other information to construct the table for the Vaccine Injury Compensation Program (VICP), which is described below.
Are vaccines killing children?
Vaccine opponents frequently claim that vaccines cause much more harm than is documented, including the deaths of children. A vaccine opponent made this claim in my state (Arizona) at a legislative committee hearing even though our state child mortality review committee has been investigating all child deaths for decades and has never attributed a death to a vaccine.
One study conducted using the VSD system from January 1, 2005, to December 31, 2011, identified 1100 deaths occurring within 12 months of any vaccination among 2,189,504 VSD enrollees ages 9 to 26 years.7 The investigators found that the risk of death in this age group was not increased during the 30 days after vaccination, and no deaths were causally associated with vaccination. Deaths among children do occur and, given the number of vaccines administered, some deaths will occur shortly after a vaccination. This temporal association does not prove that the death was vaccine-caused, but vaccine opponents have claimed that it does.
The vaccine injury compensation system
In 1986, the federal government established a no-fault system—the National Vaccine Injury Compensation Program (VICP)—to compensate those who suffer a serious AE from a vaccine covered by the program. This system is administered by the Health Resources and Services Administration (HRSA) in the Department of Health and Human Services (DHHS). HRSA maintains a table of proven AEs of specific vaccines, based in part on the NAM report mentioned earlier. Petitions for compensation—with proof of an AE following the administration of a vaccine that is included on the HRSA table—are accepted and remunerated if the AE lasted > 6 months or resulted in hospitalization. Petitions that allege AEs following administration of a vaccine not included on the table are nevertheless reviewed by the staff of HRSA, who can still recommend compensation based on the medical evidence. If HRSA declines the petition, the petitioner can appeal the case in the US Court of Federal Claims, which makes the final decision on a petition’s validity and, if warranted, the type and amount of compensation.
From 2006 to 2017, > 3.4 billion doses of vaccines covered by VICP were distributed in the United States.8 During this period, 6293 petitions were adjudicated by the court; 4311 were compensated.8 For every 1 million doses of vaccine distributed, 1 individual was compensated. Seventy percent of these compensations were awarded to petitioners despite a lack of clear evidence that the patient’s condition was caused by a vaccine.8 The rate of compensation for conditions proven to be caused by a vaccine was 1/3.33 million.8
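The per-dose compensation rate quoted above follows from simple division. A sketch using the figures in the text (the rounding to "1 per 1 million doses" is the article's):

```python
# Figures cited above for 2006-2017:
doses_distributed = 3_400_000_000   # > 3.4 billion VICP-covered doses
petitions_compensated = 4_311

# Doses distributed per compensated petition: about 790,000,
# which the article rounds to roughly 1 compensation per 1 million doses.
doses_per_compensation = doses_distributed / petitions_compensated
print(f"~1 compensation per {doses_per_compensation:,.0f} doses")
```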
The VICP pays for attorney fees, in some cases even if the petition is denied, but does not allow contingency fees. Since the beginning of the program, more than $4 billion has been awarded.8 The program is funded by a 75-cent tax on each vaccine antigen. Because serious AEs are so rare, the trust fund established to administer the VICP finances has a surplus of about $6 billion.
The Advisory Committee on Immunization Practices
After a vaccine is approved for use by the FDA, ACIP makes recommendations for its use in the US civilian population.9,10 ACIP, created in 1964, was chartered as a federal advisory committee to provide expert external advice to the Director of the CDC and the Secretary of DHHS on the use of vaccines.
As an official federal advisory committee governed by the Federal Advisory Committee Act, ACIP operates under strict requirements for public notification of meetings, allowing for written and oral public comment at its meetings, and timely publication of minutes. ACIP meeting minutes are posted soon after each meeting, along with draft recommendations. ACIP meeting agendas and slide presentations are available on the ACIP Web site (https://www.cdc.gov/vaccines/acip/index.html).
ACIP consists of 15 members serving overlapping 4-year terms, appointed by the Secretary of DHHS from a list of candidates proposed by the CDC. One member is a consumer representative; the other members have expertise in vaccinology, immunology, pediatrics, internal medicine, infectious diseases, preventive medicine, and public health. In the CDC, staff support for ACIP is provided by the National Center for Immunization and Respiratory Diseases, Office of Infectious Diseases.
ACIP holds 2-day meetings 3 times a year. Much of the work occurs between meetings, by work groups via phone conferences. Work groups are chaired by an ACIP member and staffed by one or more CDC programmatic, content-expert professionals. Membership of the work groups consists of at least 2 ACIP members, representatives from relevant professional clinical and public health organizations, and other individuals with specific expertise. Work groups propose recommendations to ACIP, which can adopt, revise, or reject them.
When formulating recommendations for a particular vaccine, ACIP considers the burden of disease prevented, the effectiveness and safety of the vaccine, cost effectiveness, and practical and logistical issues of implementing recommendations. ACIP also receives frequent reports from ISO regarding the safety of vaccines previously approved. Since 2011, ACIP has used a standardized, modified GRADE (Grading of Recommendations, Assessment, Development, and Evaluation) system to assess the evidence regarding effectiveness and safety of new vaccines and an evidence-to-recommendation framework to transparently explain how it arrives at recommendations.11,12
We can recommend vaccines with confidence
In the United States, we have a secure supply of safe vaccines, a transparent method of making vaccine recommendations, a robust system to monitor vaccine safety, and an efficient system to compensate those who experience a rare, serious adverse reaction to a vaccine. The US public health system has achieved a marked reduction in morbidity and mortality from childhood infectious diseases, mostly because of vaccines. Many people today have not experienced or seen children with these once-common childhood infections and may not appreciate the seriousness of childhood infectious diseases or the full value of vaccines. As family physicians, we can help address this problem and recommend vaccines to our patients with confidence.
1. Mellerson JL, Maxwell CB, Knighton CL, et al. Vaccine coverage for selected vaccines and exemption rates among children in kindergarten—United States, 2017-18 school year. MMWR Morb Mortal Wkly Rep. 2018;67:1115-1122.
2. CDC. Ten great public health achievements—United States, 1900-1999. MMWR Morb Mortal Wkly Rep. 1999;48:241-243.
3. Whitney CG, Zhou F, Singleton J, et al. Benefits from immunization during the Vaccines for Children Program era—United States, 1994-2013. MMWR Morb Mortal Wkly Rep. 2014;63:352-355.
4. CDC. Complications of measles. https://www.cdc.gov/measles/symptoms/complications.html. Accessed July 16, 2019.
5. Shimabukuro TT, Nguyen M, Martin D, et al. Safety monitoring in the Vaccine Adverse Event Reporting System (VAERS). Vaccine. 2015;33:4398-4405.
6. IOM (Institute of Medicine). Adverse Effects of Vaccines: Evidence and Causality. Washington, DC: The National Academies Press; 2012.
7. McCarthy NL, Gee J, Sukumaran L, et al. Vaccination and 30-day mortality risk in children, adolescents, and young adults. Pediatrics. 2016;137:1-8.
8. HRSA. Data and Statistics. https://www.hrsa.gov/sites/default/files/hrsa/vaccine-compensation/data/monthly-stats-may-2019.pdf. Accessed July 16, 2019.
9. Pickering LK, Orenstein WA, Sun W, et al. FDA licensure of and ACIP recommendations for vaccines. Vaccine. 2017;37:5027-5036.
10. Smith JC, Snider DE, Pickering LK. Immunization policy development in the United States: the role of the Advisory Committee on Immunization Practices. Ann Intern Med. 2009;150:45-49.
11. Ahmed F, Temte JL, Campos-Outcalt D, et al; for the ACIP Evidence Based Recommendations Work Group (EBRWG). Methods for developing evidence-based recommendations by the Advisory Committee on Immunization Practices (ACIP) of the U.S. Centers for Disease Control and Prevention (CDC). Vaccine. 2011;29:9171-9176.
12. Lee G, Carr W. Updated framework for development of evidence-based recommendations by the Advisory Committee on Immunization Practices. MMWR Morb Mortal Wkly Rep. 2018;67:1271-1272.
FDA approvals permit double-immunoassay approach to Lyme disease diagnosis
Concurrent or sequential enzyme immunoassays can now be conducted to diagnose Lyme disease, according to the U.S. Food and Drug Administration.
Four previously cleared tests are now approved by the agency for marketing with new indications as part of the revised diagnostic approach. Previously, the two-step diagnostic process consisted of an initial enzyme immunoassay followed by a Western blot test.
“With today’s action, clinicians have a new option to test for Lyme that is easier to interpret by a clinical laboratory due to the streamlined method of conducting the test. These tests may improve confidence in diagnosing a patient for a condition that requires the earliest possible treatment to ensure the best outcome for patients,” Tim Stenzel, MD, PhD, director of the Office of In Vitro Diagnostics and Radiological Health in the FDA’s Center for Devices and Radiological Health, said in a press release announcing the newly approved approach.
The modified two-tier enzyme immunoassay approach was found to be as accurate for assessing exposure to Borrelia burgdorferi as the standard immunoassay followed by Western blot test in an FDA review of data from clinical studies using the following ZEUS Scientific ELISA Test Systems: Borrelia VlsE1/pepC10 IgG/IgM; Borrelia burgdorferi IgG/IgM; Borrelia burgdorferi IgM; and Borrelia burgdorferi IgG.
The recommendations of the Centers for Disease Control and Prevention should be followed for the diagnosis of Lyme disease and for determining when laboratory tests are appropriate, the FDA statement said. In 2017, the last year for which the CDC published data, a total of 42,743 confirmed and probable cases of Lyme disease were reported, an increase of 17% from 2016.
The FDA granted clearance of the ZEUS ELISA enzyme immunoassay tests to ZEUS Scientific.
Concurrent or sequential enzyme immunoassays can now be conducted to diagnose Lyme disease, according to the U.S. Food and Drug Administration.
Four previously cleared tests are now approved by the agency for marketing with new indications as part of the revised diagnostic approach. Previously, the two-step diagnostic process consisted of an initial enzyme immunoassay followed by a Western blot test.
“With today’s action, clinicians have a new option to test for Lyme that is easier to interpret by a clinical laboratory due to the streamlined method of conducting the test. These tests may improve confidence in diagnosing a patient for a condition that requires the earliest possible treatment to ensure the best outcome for patients,” Tim Stenzel, MD, PhD, director of the Office of In Vitro Diagnostics and Radiological Health in the FDA’s Center for Devices and Radiologic Health, said in a press release announcing the newly approved approach.
The modified two-tier enzyme immunoassay approach was found to be as accurate for assessing exposure to Borrelia burgdorferi as the standard immunoassay followed by Western blot test in an FDA review of data from clinical studies using the following ZEUS Scientific ELISA Test Systems: Borrelia VlsE1/pepC10 IgG/IgM; Borrelia burgdorferi IgG/IgM; Borrelia burgdorferi IgM; and Borrelia burgdorferi IgG.
The recommendations of the Centers for Disease Control and Prevention should be followed for the diagnosis of Lyme disease and for determining when laboratory tests are appropriate, the FDA statement said. In 2017, the last year for which the CDC published data, a total of 42,743 confirmed and probable cases of Lyme disease were reported, an increase of 17% from 2016.
The FDA granted clearance of the ZEUS ELISA enzyme immunoassay tests to ZEUS Scientific.
Gram-negative bacteremia: Cultures, drugs, and duration
Are we doing it right?
Case
A 42-year-old woman with uncontrolled diabetes presents to the ED with fever, chills, dysuria, and flank pain for 3 days. On exam, she is febrile and tachycardic. Laboratory results show leukocytosis, and urinalysis is consistent with infection. CT scan shows acute pyelonephritis without complication. She is admitted to the hospital and started on ceftriaxone 2 g every 24 hours. On hospital day 2, her blood cultures show gram-negative bacteria.
Brief overview
Management of gram-negative (GN) bacteremia remains a challenging clinical situation for inpatient providers. With the push for high-value care and reductions in length of stay, recent literature has focused on reviewing current practices and attempting to standardize care. Despite this, no overarching guidelines exist to direct practice and clinicians are left to make decisions based on prior experience and expert opinion. Three key clinical questions exist when caring for a hospitalized patient with GN bacteremia: Should blood cultures be repeated? When is transition to oral antibiotics appropriate? And for what duration should antibiotics be given?
Overview of the data
When considering repeating blood cultures, it is important to understand that current literature does not support the practice for all GN bacteremias.
Canzoneri et al. retrospectively studied GN bacteremia and found that 17 repeat blood cultures had to be drawn to yield 1 positive result, which suggests that they are not necessary in all cases.1 Furthermore, repeat blood cultures increase cost of hospitalization, length of stay, and inconvenience to patients.2
However, Mushtaq et al. noted that repeating blood cultures can provide valuable information to confirm the response to treatment in patients with endovascular infection. They also found that repeated blood cultures are reasonable in the following scenarios: suspected endocarditis or central line–associated infection, concern for multidrug-resistant GN bacilli, and ongoing evidence of sepsis or patient decompensation.3
Consideration of a transition from intravenous to oral antibiotics is a key decision point in the care of GN bacteremia. Without guidelines, clinicians are left to evaluate patients on a case-by-case basis.4 Studies have suggested that the transition should be guided by the condition of the patient, the type of infection, and the culture-derived sensitivities.5 Additionally, bioavailability of antibiotics (see Table 1) is an important consideration: a recent examination of oral antibiotic failure rates demonstrated that lower-bioavailability antibiotics carry a higher risk of failure (16% vs. 2% for high-bioavailability agents).6
In their study, Kutob et al. highlighted the importance of choosing not only an antibiotic with high bioavailability, but also a dose that will support a high concentration of the antibiotic in the bloodstream.6 For example, they identify ciprofloxacin as a moderate-bioavailability medication, but note that most cases they examined used 500 mg b.i.d., whereas its concentration-dependent killing and dose-dependent bioavailability would argue for 750 mg b.i.d. or 500 mg every 8 hours.
The heterogeneity of GN bloodstream infections also creates difficulty in standardization of care. The literature suggests that infection source plays a significant role in the type of GN bacteria isolated.6,7 The best data for the transition to oral antibiotics exists with urologic sources and it remains unclear whether bacteria from other sources have higher risks of oral antibiotic failure.8
One recent study of 66 patients examined bacteremia in the setting of cholangitis and found that, once patients had stabilized, a switch from intravenous to oral antibiotics was noninferior, though randomized, prospective trials have not been performed. Notably, patients were transitioned to oral therapy only after they were found to have a fluoroquinolone-sensitive infection, allowing the study authors to use higher-bioavailability agents for the transition.9 Multiple studies have highlighted the unique care required for certain infections, such as pseudomonal infections, which most experts agree require a more conservative approach.5,6
Fluoroquinolones are the bedrock of therapy for GN bacteremia because of historic in vivo experience and in vitro findings about bioavailability and dose-dependent killing, but they are also the antibiotic class associated with the highest hospitalization rates for antibiotic-associated adverse events.8 A recent noninferiority trial comparing beta-lactams with fluoroquinolones found that beta-lactams were noninferior, though the study was limited by the small number of patients receiving beta-lactams.8 It is clear that more investigation is needed before recommendations can be made regarding ideal oral antibiotics for GN bacteremia.
The transition to oral therapy is reasonable when the following criteria are met: the patient has improved on intravenous antibiotics and source control has been achieved; the culture data have demonstrated sensitivity to the oral antibiotic of choice, with special care given to higher-risk bacteria such as Pseudomonas; the patient is able to take the oral antibiotic; and the oral antibiotic of choice has the highest bioavailability possible and is dosed to achieve maximal killing concentrations.7
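For readers who build clinical decision-support or order-set logic, the criteria above can be sketched as a simple checklist function. This is a hypothetical illustration of the published criteria only, not a validated clinical tool; all field names are assumptions for the example.

```python
# Hypothetical checklist for the IV-to-oral transition criteria in
# gram-negative bacteremia. Illustration only -- not a clinical tool.
from dataclasses import dataclass


@dataclass
class PatientStatus:
    improved_on_iv: bool        # clinical improvement on IV antibiotics
    source_controlled: bool     # source control achieved
    isolate_sensitive: bool     # culture shows sensitivity to the chosen oral agent
    higher_risk_organism: bool  # e.g., Pseudomonas, which warrants extra caution
    tolerates_oral: bool        # patient can take and absorb oral medication
    high_bioavailability: bool  # chosen agent and dose maximize bioavailability


def ready_for_oral_transition(p: PatientStatus) -> bool:
    """Return True only when every criterion is satisfied.

    Higher-risk organisms such as Pseudomonas are not auto-approved
    here; most experts favor a more conservative approach, so this
    sketch returns False and leaves those cases for specialist review.
    """
    core = (p.improved_on_iv and p.source_controlled
            and p.isolate_sensitive and p.tolerates_oral
            and p.high_bioavailability)
    return core and not p.higher_risk_organism
```

In practice such logic would sit alongside, not replace, clinician judgment, since the published criteria deliberately leave room for case-by-case discretion.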
After evaluating the appropriateness of transition to oral antibiotics, the final decision is about duration of antibiotic therapy. Current Infectious Diseases Society of America guidelines are based on expert opinion and recommend 7-14 days of therapy. As with many common infections, recent studies have focused on evaluating reduction in antibiotic durations.
Chotiprasitsakul et al. demonstrated no difference in mortality or morbidity in 385 propensity-matched pairs with treatment of Enterobacteriaceae bacteremia for 8 versus 15 days.10 A meta-analysis performed in 2011 evaluated 24 randomized, controlled trials and found shorter durations (5-7 days) had outcomes similar to prolonged durations (7-21 days).11 Recently, Yahav et al. performed a randomized controlled trial comparing 7- and 14-day regimens for uncomplicated GN bacteremia and found a 7-day course to be noninferior if patients were clinically stable by day 5 and had source control.12
It should be noted that not all studies have found that reduced durations are without harm. Nelson et al. performed a retrospective cohort analysis and found that reduced durations of antibiotics (7-10 days) increased mortality and recurrent infection when compared with a longer course (greater than 10 days).13 These contrary findings highlight the need for provider discretion in selecting a course of antibiotics as well as the need for further studies about optimal duration of antibiotics.
Application of the data
Returning to our case, on day 3, the patient’s fever had resolved and leukocytosis improved. In the absence of concern for persistent infection, repeat blood cultures were not performed. On day 4, initial blood cultures grew pan-sensitive Escherichia coli. The patient was transitioned to oral ciprofloxacin 750 mg b.i.d. to complete a 10-day course from the first dose of ceftriaxone and was discharged from the hospital.
Bottom line
Management of GN bacteremia requires individualized care based on clinical presentation, but the data presented above can be used as broad guidelines to help reduce excess blood cultures, avoid prolonged use of intravenous antibiotics, and limit the duration of antibiotic exposure.
Dr. Imber is an assistant professor in the division of hospital medicine at the University of New Mexico, Albuquerque, and director of the Internal Medicine Simulation Education and Hospitalist Procedural Certification. Dr. Burns is an assistant professor in the division of hospital medicine at the University of New Mexico. Dr. Chan is currently a chief resident in the department of internal medicine at the University of New Mexico.
References
1. Canzoneri CN et al. Follow-up blood cultures in gram-negative bacteremia: Are they needed? Clin Infect Dis. 2017;65(11):1776-9. doi: 10.1093/cid/cix648.
2. Kang CK et al. Can a routine follow-up blood culture be justified in Klebsiella pneumoniae bacteremia? A retrospective case-control study. BMC Infect Dis. 2013;13:365. doi: 10.1186/1471-2334-13-365.
3. Mushtaq A et al. Repeating blood cultures after an initial bacteremia: When and how often? Cleve Clin J Med. 2019;86(2):89-92. doi: 10.3949/ccjm.86a.18001.
4. Nimmich EB et al. Development of institutional guidelines for management of gram-negative bloodstream infections: Incorporating local evidence. Hosp Pharm. 2017;52(10):691-7. doi: 10.1177/0018578717720506.
5. Hale AJ et al. When are oral antibiotics a safe and effective choice for bacterial bloodstream infections? An evidence-based narrative review. J Hosp Med. 2018 May. doi: 10.12788/jhm.2949.
6. Kutob LF et al. Effectiveness of oral antibiotics for definitive therapy of gram-negative bloodstream infections. Int J Antimicrob Agents. 2016. doi: 10.1016/j.ijantimicag.2016.07.013.
7. Tamma PD et al. Association of 30-day mortality with oral step-down vs. continued intravenous therapy in patients hospitalized with Enterobacteriaceae bacteremia. JAMA Intern Med. 2019. doi: 10.1001/jamainternmed.2018.6226.
8. Mercuro NJ et al. Retrospective analysis comparing oral stepdown therapy for Enterobacteriaceae bloodstream infections: fluoroquinolones vs. beta-lactams. Int J Antimicrob Agents. 2017. doi: 10.1016/j.ijantimicag.2017.12.007.
9. Park TY et al. Early oral antibiotic switch compared with conventional intravenous antibiotic therapy for acute cholangitis with bacteremia. Dig Dis Sci. 2014;59:2790-6. doi: 10.1007/s10620-014-3233-0.
10. Chotiprasitsakul D et al. Comparing the outcomes of adults with Enterobacteriaceae bacteremia receiving short-course versus prolonged-course antibiotic therapy in a multicenter, propensity score-matched cohort. Clin Infect Dis. 2018;66(2):172-7. doi: 10.1093/cid/cix767.
11. Havey TC et al. Duration of antibiotic therapy for bacteremia: a systematic review and meta-analysis. Crit Care. 2011;15(6):R267. doi: 10.1186/cc10545.
12. Yahav D et al. Seven versus fourteen days of antibiotic therapy for uncomplicated gram-negative bacteremia: A noninferiority randomized controlled trial. Clin Infect Dis. 2018 Dec. doi: 10.1093/cid/ciy1054.
13. Nelson AN et al. Optimal duration of antimicrobial therapy for uncomplicated gram-negative bloodstream infections. Infection. 2017;45(5):613-20. doi: 10.1007/s15010-017-1020-5.
Are we doing it right?
Are we doing it right?
Case
A 42-year-old woman with uncontrolled diabetes presents to the ED with fever, chills, dysuria, and flank pain for 3 days. On exam, she is febrile and tachycardic. Lab results show leukocytosis and urinalysis is consistent with infection. CT scan shows acute pyelonephritis without complication. She is admitted to the hospital and started on ceftriaxone 2 g/24 hrs. On hospital day 2, her blood cultures show gram-negative bacteria.
Brief overview
Management of gram-negative (GN) bacteremia remains a challenging clinical situation for inpatient providers. With the push for high-value care and reductions in length of stay, recent literature has focused on reviewing current practices and attempting to standardize care. Despite this, no overarching guidelines exist to direct practice and clinicians are left to make decisions based on prior experience and expert opinion. Three key clinical questions exist when caring for a hospitalized patient with GN bacteremia: Should blood cultures be repeated? When is transition to oral antibiotics appropriate? And for what duration should antibiotics be given?
Overview of the data
When considering repeating blood cultures, it is important to understand that current literature does not support the practice for all GN bacteremias.
Canzoneri et al. retrospectively studied GN bacteremia and found that it took 17 repeat blood cultures being drawn to yield 1 positive result, which suggests that they are not necessary in all cases.1 Furthermore, repeat blood cultures increase cost of hospitalization, length of stay, and inconvenience to patients.2
However, Mushtaq et al. noted that repeating blood cultures can provide valuable information to confirm the response to treatment in patients with endovascular infection. Furthermore, they found that repeated blood cultures are also reasonable when the following scenarios are suspected: endocarditis or central line–associated infection, concern for multidrug resistant GN bacilli, and ongoing evidence of sepsis or patient decompensation.3
Consideration of a transition from intravenous to oral antibiotics is a key decision point in the care of GN bacteremia. Without guidelines, clinicians are left to evaluate patients on a case-by-case basis.4 Studies have suggested that the transition should be guided by the condition of the patient, the type of infection, and the culture-derived sensitivities.5 Additionally, bioavailability of antibiotics (see Table 1) is an important consideration and a recent examination of oral antibiotic failure rates demonstrated that lower bioavailability antibiotics have an increased risk of failure (2% vs. 16%).6
In their study, Kutob et al. highlighted the importance of choosing not only an antibiotic of high bioavailability, but also an antibiotic dose which will support a high concentration of the antibiotic in the bloodstream.6 For example, they identify ciprofloxacin as a moderate bioavailability medication, but note that most cases they examined utilized 500 mg b.i.d., where the concentration-dependent killing and dose-dependent bioavailability would advocate for the use of 750 mg b.i.d. or 500 mg every 8 hours.
The heterogeneity of GN bloodstream infections also creates difficulty in standardization of care. The literature suggests that infection source plays a significant role in the type of GN bacteria isolated.6,7 The best data for the transition to oral antibiotics exists with urologic sources and it remains unclear whether bacteria from other sources have higher risks of oral antibiotic failure.8
One recent study of 66 patients examined bacteremia in the setting of cholangitis and found that, once patients had stabilized, a switch from intravenous to oral antibiotics was noninferior, but randomized, prospective trials have not been performed. Notably, patients were transitioned to orals only after they were found to have a fluoroquinolone-sensitive infection, allowing the study authors to use higher-bioavailability agents for the transition to orals.9 Multiple studies have highlighted the unique care required for certain infections, such as pseudomonal infections, which most experts agree requires a more conservative approach.5,6
Fluoroquinolones are the bedrock of therapy for GN bacteremia because of historic in vivo experience and in vitro findings about bioavailability and dose-dependent killing, but they are also the antibiotic class associated with the highest hospitalization rates for antibiotic-associated adverse events.8 A recent noninferiority trial comparing the use of beta-lactams with fluoroquinolones found that beta-lactams were noninferior, though the study was flawed by the limited number of beta-lactam–using patients identified.8 It is clear that more investigation is needed before recommendations can be made regarding ideal oral antibiotics for GN bacteremia.
The transition to oral is reasonable given the following criteria: the patient has improved on intravenous antibiotics and source control has been achieved; the culture data have demonstrated sensitivity to the oral antibiotic of choice, with special care given to higher-risk bacteria such as Pseudomonas; the patient is able to take the oral antibiotic; and the oral antibiotic of choice has the highest bioavailability possible and is given at an appropriate dose to reach its highest killing and bioavailability concentrations.7
After evaluating the appropriateness of transition to oral antibiotics, the final decision is about duration of antibiotic therapy. Current Infectious Disease Society of America guidelines are based on expert opinion and recommend 7-14 days of therapy. As with many common infections, recent studies have focused on evaluating reduction in antibiotic durations.
Chotiprasitsakul et al. demonstrated no difference in mortality or morbidity in 385 propensity-matched pairs with treatment of Enterobacteriaceae bacteremia for 8 versus 15 days.10 A mixed meta-analysis performed in 2011 evaluated 24 randomized, controlled trials and found shorter durations (5-7 days) had similar outcomes to prolonged durations (7-21 days).11 Recently, Yahav et al. performed a randomized control trial comparing 7- and 14-day regimens for uncomplicated GN bacteremia and found a 7-day course to be noninferior if patients were clinically stable by day 5 and had source control.12
It should be noted that not all studies have found that reduced durations are without harm. Nelson et al. performed a retrospective cohort analysis and found that reduced durations of antibiotics (7-10 days) increased mortality and recurrent infection when compared with a longer course (greater than 10 days).13 These contrary findings highlight the need for provider discretion in selecting a course of antibiotics as well as the need for further studies about optimal duration of antibiotics.
Application of the data
Returning to our case, on day 3, the patient’s fever had resolved and leukocytosis improved. In the absence of concern for persistent infection, repeat blood cultures were not performed. On day 4 initial blood cultures showed pan-sensitive Escherichia coli. The patient was transitioned to 750 mg oral ciprofloxacin b.i.d. to complete a 10-day course from first dose of ceftriaxone and was discharged from the hospital.
Bottom line
Management of GN bacteremia requires individualized care based on clinical presentation, but the data presented above can be used as broad guidelines to help reduce excess blood cultures, avoid prolonged use of intravenous antibiotics, and limit the duration of antibiotic exposure.
Dr. Imber is an assistant professor in the division of hospital medicine at the University of New Mexico, Albuquerque, and director of the Internal Medicine Simulation Education and Hospitalist Procedural Certification. Dr. Burns is an assistant professor in the division of hospital medicine at the University of New Mexico. Dr. Chan is currently a chief resident in the department of internal medicine at the University of New Mexico.
References
1. Canzoneri CN et al. Follow-up blood cultures in gram-negative bacteremia: Are they needed? Clin Infect Dis. 2017;65(11):1776-9. doi: 10.1093/cid/cix648.
2. Kang CK et al. Can a routine follow-up blood culture be justified in Klebsiella pneumoniae bacteremia? A retrospective case-control study. BMC Infect Dis. 2013;13:365. doi: 10.1186/1471-2334-13-365.
3. Mushtaq A et al. Repeating blood cultures after an initial bacteremia: When and how often? Cleve Clin J Med. 2019;86(2):89-92. doi: 10.3949/ccjm.86a.18001.
4. Nimmich EB et al. Development of institutional guidelines for management of gram-negative bloodstream infections: Incorporating local evidence. Hosp Pharm. 2017;52(10):691-7. doi: 10.1177/0018578717720506.
5. Hale AJ et al. When are oral antibiotics a safe and effective choice for bacterial bloodstream infections? An evidence-based narrative review. J Hosp Med. 2018 May. doi: 10.12788/jhm.2949.
6. Kutob LF et al. Effectiveness of oral antibiotics for definitive therapy of gram-negative bloodstream infections. Int J Antimicrob Agents. 2016. doi: 10.1016/j.ijantimicag.2016.07.013.
7. Tamma PD et al. Association of 30-day mortality with oral step-down vs. continued intravenous therapy in patients hospitalized with Enterobacteriaceae bacteremia. JAMA Intern Med. 2019. doi: 10.1001/jamainternmed.2018.6226.
8. Mercuro NJ et al. Retrospective analysis comparing oral stepdown therapy for enterobacteriaceae bloodstream infections: fluoroquinolones vs. B-lactams. Int J Antimicrob Agents. 2017. doi: 10.1016/j.ijantimicag.2017.12.007.
9. Park TY et al. Early oral antibiotic switch compared with conventional intravenous antibiotic therapy for acute cholangitis with bacteremia. Dig Dis Sci. 2014;59:2790-6. doi: 10.1007/s10620-014-3233-0.
10. Chotiprasitsakul D et al. Comparing the outcomes of adults with Enterobacteriaceae bacteremia receiving short-course versus prolonged-course antibiotic therapy in a multicenter, propensity score-matched cohort. Clin Infect Dis. 2018;66(2):172-7. doi:10.1093/cid/cix767.
11. Havey TC et al. Duration of antibiotic therapy for bacteremia: a systematic review and meta-analysis. Crit Care. 2011;15(6):R267. doi:10.1186/cc10545.
12. Yahav D et al. Seven versus fourteen days of antibiotic therapy for uncomplicated gram-negative bacteremia: A noninferiority randomized controlled trial. Clin Infect Dis. 2018 Dec. doi:10.1093/cid/ciy1054.
13. Nelson AN et al. Optimal duration of antimicrobial therapy for uncomplicated gram-negative bloodstream infections. Infection. 2017;45(5):613-20. doi:10.1007/s15010-017-1020-5.
Case
A 42-year-old woman with uncontrolled diabetes presents to the ED with fever, chills, dysuria, and flank pain for 3 days. On exam, she is febrile and tachycardic. Lab results show leukocytosis and urinalysis is consistent with infection. CT scan shows acute pyelonephritis without complication. She is admitted to the hospital and started on ceftriaxone 2 g/24 hrs. On hospital day 2, her blood cultures show gram-negative bacteria.
Brief overview
Management of gram-negative (GN) bacteremia remains a challenging clinical situation for inpatient providers. With the push for high-value care and reductions in length of stay, recent literature has focused on reviewing current practices and attempting to standardize care. Despite this, no overarching guidelines exist to direct practice and clinicians are left to make decisions based on prior experience and expert opinion. Three key clinical questions exist when caring for a hospitalized patient with GN bacteremia: Should blood cultures be repeated? When is transition to oral antibiotics appropriate? And for what duration should antibiotics be given?
Overview of the data
When considering repeating blood cultures, it is important to understand that current literature does not support the practice for all GN bacteremias.
Canzoneri et al. retrospectively studied GN bacteremia and found that it took 17 repeat blood cultures being drawn to yield 1 positive result, which suggests that they are not necessary in all cases.1 Furthermore, repeat blood cultures increase cost of hospitalization, length of stay, and inconvenience to patients.2
However, Mushtaq et al. noted that repeating blood cultures can provide valuable information to confirm the response to treatment in patients with endovascular infection. Furthermore, they found that repeated blood cultures are also reasonable when the following scenarios are suspected: endocarditis or central line–associated infection, concern for multidrug resistant GN bacilli, and ongoing evidence of sepsis or patient decompensation.3
Consideration of a transition from intravenous to oral antibiotics is a key decision point in the care of GN bacteremia. Without guidelines, clinicians are left to evaluate patients on a case-by-case basis.4 Studies have suggested that the transition should be guided by the condition of the patient, the type of infection, and the culture-derived sensitivities.5 Additionally, bioavailability of antibiotics (see Table 1) is an important consideration and a recent examination of oral antibiotic failure rates demonstrated that lower bioavailability antibiotics have an increased risk of failure (2% vs. 16%).6
In their study, Kutob et al. highlighted the importance of choosing not only an antibiotic of high bioavailability, but also an antibiotic dose which will support a high concentration of the antibiotic in the bloodstream.6 For example, they identify ciprofloxacin as a moderate bioavailability medication, but note that most cases they examined utilized 500 mg b.i.d., where the concentration-dependent killing and dose-dependent bioavailability would advocate for the use of 750 mg b.i.d. or 500 mg every 8 hours.
The heterogeneity of GN bloodstream infections also creates difficulty in standardization of care. The literature suggests that infection source plays a significant role in the type of GN bacteria isolated.6,7 The best data for the transition to oral antibiotics exists with urologic sources and it remains unclear whether bacteria from other sources have higher risks of oral antibiotic failure.8
One recent study of 66 patients examined bacteremia in the setting of cholangitis and found that, once patients had stabilized, a switch from intravenous to oral antibiotics was noninferior, but randomized, prospective trials have not been performed. Notably, patients were transitioned to orals only after they were found to have a fluoroquinolone-sensitive infection, allowing the study authors to use higher-bioavailability agents for the transition to orals.9 Multiple studies have highlighted the unique care required for certain infections, such as pseudomonal infections, which most experts agree requires a more conservative approach.5,6
Fluoroquinolones are the bedrock of therapy for GN bacteremia because of historic in vivo experience and in vitro findings about bioavailability and dose-dependent killing, but they are also the antibiotic class associated with the highest hospitalization rates for antibiotic-associated adverse events.8 A recent noninferiority trial comparing the use of beta-lactams with fluoroquinolones found that beta-lactams were noninferior, though the study was flawed by the limited number of beta-lactam–using patients identified.8 It is clear that more investigation is needed before recommendations can be made regarding ideal oral antibiotics for GN bacteremia.
The transition to oral is reasonable given the following criteria: the patient has improved on intravenous antibiotics and source control has been achieved; the culture data have demonstrated sensitivity to the oral antibiotic of choice, with special care given to higher-risk bacteria such as Pseudomonas; the patient is able to take the oral antibiotic; and the oral antibiotic of choice has the highest bioavailability possible and is given at an appropriate dose to reach its highest killing and bioavailability concentrations.7
After evaluating the appropriateness of transition to oral antibiotics, the final decision is about duration of antibiotic therapy. Current Infectious Disease Society of America guidelines are based on expert opinion and recommend 7-14 days of therapy. As with many common infections, recent studies have focused on evaluating reduction in antibiotic durations.
Chotiprasitsakul et al. demonstrated no difference in mortality or morbidity in 385 propensity-matched pairs with treatment of Enterobacteriaceae bacteremia for 8 versus 15 days.10 A mixed meta-analysis performed in 2011 evaluated 24 randomized, controlled trials and found shorter durations (5-7 days) had similar outcomes to prolonged durations (7-21 days).11 Recently, Yahav et al. performed a randomized control trial comparing 7- and 14-day regimens for uncomplicated GN bacteremia and found a 7-day course to be noninferior if patients were clinically stable by day 5 and had source control.12
It should be noted that not all studies have found that reduced durations are without harm. Nelson et al. performed a retrospective cohort analysis and found that reduced durations of antibiotics (7-10 days) increased mortality and recurrent infection when compared with a longer course (greater than 10 days).13 These contrary findings highlight the need for provider discretion in selecting a course of antibiotics as well as the need for further studies about optimal duration of antibiotics.
Application of the data
Returning to our case, on day 3, the patient’s fever had resolved and leukocytosis improved. In the absence of concern for persistent infection, repeat blood cultures were not performed. On day 4 initial blood cultures showed pan-sensitive Escherichia coli. The patient was transitioned to 750 mg oral ciprofloxacin b.i.d. to complete a 10-day course from first dose of ceftriaxone and was discharged from the hospital.
Bottom line
Management of GN bacteremia requires individualized care based on clinical presentation, but the data presented above can be used as broad guidelines to help reduce excess blood cultures, avoid prolonged use of intravenous antibiotics, and limit the duration of antibiotic exposure.
Dr. Imber is an assistant professor in the division of hospital medicine at the University of New Mexico, Albuquerque, and director of the Internal Medicine Simulation Education and Hospitalist Procedural Certification. Dr. Burns is an assistant professor in the division of hospital medicine at the University of New Mexico. Dr. Chan is currently a chief resident in the department of internal medicine at the University of New Mexico.
References
1. Canzoneri CN et al. Follow-up blood cultures in gram-negative bacteremia: Are they needed? Clin Infect Dis. 2017;65(11):1776-9. doi: 10.1093/cid/cix648.
2. Kang CK et al. Can a routine follow-up blood culture be justified in Klebsiella pneumoniae bacteremia? A retrospective case-control study. BMC Infect Dis. 2013;13:365. doi: 10.1186/1471-2334-13-365.
3. Mushtaq A et al. Repeating blood cultures after an initial bacteremia: When and how often? Cleve Clin J Med. 2019;86(2):89-92. doi: 10.3949/ccjm.86a.18001.
4. Nimmich EB et al. Development of institutional guidelines for management of gram-negative bloodstream infections: Incorporating local evidence. Hosp Pharm. 2017;52(10):691-7. doi: 10.1177/0018578717720506.
5. Hale AJ et al. When are oral antibiotics a safe and effective choice for bacterial bloodstream infections? An evidence-based narrative review. J Hosp Med. 2018 May. doi: 10.12788/jhm.2949.
6. Kutob LF et al. Effectiveness of oral antibiotics for definitive therapy of gram-negative bloodstream infections. Int J Antimicrob Agents. 2016. doi: 10.1016/j.ijantimicag.2016.07.013.
7. Tamma PD et al. Association of 30-day mortality with oral step-down vs. continued intravenous therapy in patients hospitalized with Enterobacteriaceae bacteremia. JAMA Intern Med. 2019. doi: 10.1001/jamainternmed.2018.6226.
8. Mercuro NJ et al. Retrospective analysis comparing oral stepdown therapy for Enterobacteriaceae bloodstream infections: fluoroquinolones vs. β-lactams. Int J Antimicrob Agents. 2017. doi: 10.1016/j.ijantimicag.2017.12.007.
9. Park TY et al. Early oral antibiotic switch compared with conventional intravenous antibiotic therapy for acute cholangitis with bacteremia. Dig Dis Sci. 2014;59:2790-6. doi: 10.1007/s10620-014-3233-0.
10. Chotiprasitsakul D et al. Comparing the outcomes of adults with Enterobacteriaceae bacteremia receiving short-course versus prolonged-course antibiotic therapy in a multicenter, propensity score-matched cohort. Clin Infect Dis. 2018;66(2):172-7. doi:10.1093/cid/cix767.
11. Havey TC et al. Duration of antibiotic therapy for bacteremia: a systematic review and meta-analysis. Crit Care. 2011;15(6):R267. doi:10.1186/cc10545.
12. Yahav D et al. Seven versus fourteen days of antibiotic therapy for uncomplicated gram-negative bacteremia: A noninferiority randomized controlled trial. Clin Infect Dis. 2018 Dec. doi:10.1093/cid/ciy1054.
13. Nelson AN et al. Optimal duration of antimicrobial therapy for uncomplicated gram-negative bloodstream infections. Infection. 2017;45(5):613-20. doi:10.1007/s15010-017-1020-5.
Prescriptions for cough, cold medicine dropping for children
Prescriptions for cough and cold medicines for children are dropping, with evidence suggesting replacement by off-label antihistamines, according to an analysis of two national databases.
Compared with older children, declines in both opioid and nonopioid cold and cough medicine (CCM) use “appeared to accelerate in children younger than 2 years … and among children younger than 6 years for opioid-containing CCM” after the Food and Drug Administration’s 2008 public health advisory on use of OTC forms of CCM, Daniel B. Horton, MD, of the Robert Wood Johnson Medical School, New Brunswick, N.J., and his associates wrote in JAMA Pediatrics.
Meanwhile, recommendations for single-agent antihistamines rose – for some age groups significantly – over the 14-year study period, which was divided into two eras: 2002-2008 and 2009-2015.
When the two eras were compared, the downward trends in CCM use among children under 2 years of age (nonopioid CCM) and under 4 years (opioid CCM) approached but did not reach statistical significance (both P = .05). Adjusted odds ratios for the other age groups were further from significance. For antihistamines, the upward trend between the two eras was significant for children aged under 2 years, 2-3 years, and 6-11 years, Dr. Horton and associates reported.
The two youngest groups, under 2 years and 2-3 years, were combined for the opioid CCM analyses to avoid a sample size under 30, which would have yielded unreliable estimates. The investigators used data from the National Ambulatory Medical Care Survey and the National Hospital Ambulatory Medical Care Survey, with the sample representing 3.1 billion pediatric visits from 2002 to 2015.
Dr. Horton is supported by an award from the National Institute of Arthritis and Musculoskeletal and Skin Diseases. The investigators reported no disclosures relevant to this study.
SOURCE: Horton DB et al. JAMA Pediatr. 2019 Jul 29. doi: 10.1001/jamapediatrics.2019.2252.
FROM JAMA PEDIATRICS
Novel translocation inhibitor shows efficacy in treatment-naive HIV-1–infected adults
The first-in-class antiretroviral therapy islatravir (Merck) was well tolerated and had promising efficacy in a phase 2B study including treatment-naive adults with HIV-1 infection, supporting plans to initiate a phase 3 trial, an investigator said at the International AIDS Society Conference on HIV Science.
The proportion of patients achieving viral suppression at week 48 with combinations including the nucleoside reverse transcriptase translocation inhibitor (NRTTI) was comparable to that achieved with a standard triple regimen, said investigator Jean-Michel Molina, MD, professor of infectious diseases at the University of Paris Diderot and head of the infectious diseases department at Saint-Louis Hospital in Paris.
The treatment was effective not only as part of a three-drug combination of islatravir, doravirine, and lamivudine over 24 weeks, but also over an additional 24 weeks in patients who achieved viral suppression and were switched to dual therapy with islatravir and doravirine, according to Dr. Molina.
“These are promising data that will encourage the company to move to a phase 3 trial to see how these results can be confirmed in a larger study set, and also to assess the potency of the dual combination for maintenance therapy in the future, providing also novel options for people with a drug that has a high genetic barrier to resistance and efficacy that seems to be quite interesting,” Dr. Molina said in an IAS press conference in Mexico City.
This drug has very potent activity not only against wild-type HIV-1 viruses, but also multiresistant viruses, according to Dr. Molina.
“It has a high inhibitory quotient at a very low dose, so you give people a tiny amount of drug – in the range of 1 milligram per day, instead of a few hundred milligrams with other, regular drugs,” he said.
Another attribute of islatravir is its long half-life of approximately 120 hours, allowing not only for once-daily dosing, but potentially for evaluation as once-weekly or once-monthly dosing in the future, he said, adding that a subdermal islatravir-eluting implant under investigation for preexposure prophylaxis has potential as a once-yearly option.
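To put the roughly 120-hour half-life in perspective, a back-of-the-envelope calculation shows how much of a dose would remain at common dosing intervals. This assumes simple one-compartment first-order elimination, which is an idealization for illustration, not the drug's actual pharmacokinetic model.

```python
HALF_LIFE_H = 120.0  # islatravir half-life reported as ~120 hours

def fraction_remaining(hours: float, half_life_h: float = HALF_LIFE_H) -> float:
    """Fraction of a dose remaining after `hours`, assuming
    first-order (exponential) elimination."""
    return 0.5 ** (hours / half_life_h)

for label, hours in [("24 h (daily)", 24), ("168 h (weekly)", 168)]:
    print(f"{label}: {fraction_remaining(hours):.2f}")
# 24 h (daily): 0.87
# 168 h (weekly): 0.38
```

Even a week after dosing, over a third of the dose remains under this simple model, which is consistent with the interest in weekly or longer dosing intervals.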
The international, multicenter, 121-patient clinical trial that Dr. Molina described included adults with HIV-1 infection naive to antiretroviral therapy randomized to islatravir (in one of three doses) plus doravirine and lamivudine, or to the combination of doravirine, lamivudine, and tenofovir (Delstrigo, Merck).
After at least 24 weeks of treatment, subjects in the islatravir treatment groups were transitioned to the two-drug combination of islatravir and doravirine if they had HIV-1 RNA levels less than 50 copies/mL and did not meet any protocol-defined criteria for virologic failure.
Those participants in the islatravir arms who received 48 weeks of treatment had “very good response” and safety that was comparable to the control arm, according to Dr. Molina.
At 48 weeks, the proportions of patients with HIV-1 RNA less than 50 copies/mL were 89.7%, 90%, and 77.4% for regimens containing islatravir 0.25 mg, 0.75 mg, and 2.25 mg, respectively, and 83.9% for those receiving the standard triple therapy, according to the reported data.
All patients with protocol-defined virologic failure (greater than or equal to 50 copies/mL) in the study actually had very low viral load, below 80 copies/mL, Dr. Molina said.
The study was supported by Merck. Dr. Molina has been on the Merck advisory board and speaker’s bureau.
SOURCE: Molina J-M et al. IAS 2019, Abstract WEAB0402LB.
FROM IAS 2019
Antiretroviral-eluting implant could provide HIV prophylaxis for a year or more
An implant that elutes an investigational antiretroviral agent provided drug release that should be sufficient for HIV prophylaxis for 12 months or more, according to results of a phase 1 clinical trial just presented here at the International AIDS Society Conference on HIV Science.
The islatravir-eluting arm implant was safe and generally well tolerated, with drug concentrations that remained above the target level needed for protection throughout the randomized, placebo-controlled study, said investigator Randolph P. Matthews, MD, PhD, senior principal scientist at Merck, Kenilworth, N.J.
“Based on this study, the islatravir-eluting implant appears to be a potentially important option for preexposure prophylaxis (PrEP) as an agent that could be effective with yearly dosing,” Dr. Matthews said in an IAS press conference.
This drug-eluting implant, inserted subdermally in the skin of the upper arm, could represent a “meaningful option” for many individuals at high risk of HIV infection, particularly those who have adherence challenges, said Dr. Matthews.
“Many at-risk individuals face adherence challenges with the existing daily oral PrEP therapy,” he added. “A high degree of adherence is required for it to be effective, and daily adherence is challenging for many, particularly for women.”
Islatravir, formerly known as MK-8591, is a nucleoside reverse transcriptase translocation inhibitor (NRTTI) being evaluated in clinical trials not only for PrEP, but also for treatment of HIV-1 infection in combination with other antiretrovirals.
In preclinical trials, islatravir demonstrated high potency, a high barrier to resistance, and a long half-life, according to Dr. Matthews.
The phase 1, single-site, double-blind study included a total of 16 healthy adult volunteers who received implants of islatravir at one of two doses (54 mg and 62 mg) or placebo for 12 weeks.
Both active doses of islatravir led to concentrations above the target level at 12 weeks, and based on data modeling, the higher-dose implant would still be above the target level for at least a year, Dr. Matthews said in the press conference.
The projected duration above the target ranged from 12 to 16 months for the 62-mg dose of islatravir, and from 8 to 10 months for the 54-mg dose, according to the reported data.
All drug-related adverse events were mild or moderate in severity, and none of the volunteers discontinued the study because of an adverse event, Dr. Matthews said.
Taken together, these data support the continued progression of the implant clinical development program, said Dr. Matthews, who is an employee of Merck, which sponsored the study.
SOURCE: Matthews RP et al. IAS 2019, Abstract TUAC0401LB.
FROM IAS 2019