Ultrasound Guidance for Lumbar Puncture: A Consideration, Not an Obligation


Recognizing the increasingly important role of point-of-care ultrasound (POCUS) in advancing clinical care, the Society of Hospital Medicine (SHM) has published a valuable series of position statements to guide hospitalists and administrators on the safe and effective use of POCUS.1 In this issue of the Journal of Hospital Medicine, Soni et al. present a series of consensus-based recommendations on ultrasound guidance for lumbar puncture (LP).2 Among these are the recommendations that ultrasound “should be used” to map the lumbar spine and to select an appropriate puncture site to reduce insertion attempts, reduce needle redirections, and increase overall procedural success.

At first glance, the recommendations appear definitive. Not immediately obvious, however, is the authors’ clarification that “This position statement does not mandate that hospitalists use ultrasound guidance for LP, nor does it establish ultrasound guidance as the standard of care for LP.” Even with this caveat, the nuance may not be apparent to readers who review only the executive summary of the guidelines or who skip the context provided in the background of the position statement.

The directive language of this position statement may be the result of unmerited amplification. The SHM POCUS Task Force employed the RAND/UCLA Appropriateness Method to quantify the degree of consensus and to assign the strength of each recommendation,3 reaching “very good” consensus for each of the recommendations espoused in its position statement. Procedurally, this means that ≥80% of the 27 voting members rated each published recommendation statement as “appropriate”. Using wording assigned a priori by the committee to each level of consensus, “appropriate” was then magnified into the declaration “should be used”. In this manner, the strength of the recommendations in this position statement does not necessarily reflect the experts’ convictions about ultrasound-guided LP, nor the strength of the supporting evidence.

In the case of ultrasound-guided LP, we might choose descriptors other than “appropriate” or “should be used”. The evidence base for ultrasound guidance for LP, though growing, may be an insufficient foundation for a position statement and is certainly insufficient to establish a new standard of care for hospitalists. Although the SHM POCUS Task Force completed a thoughtful literature review, no systematic approach (eg, GRADE methodology4) was used to rate the quality of the evidence. Furthermore, the literature reviewed was drawn predominantly from anesthesia and emergency medicine sources and is not readily generalizable to hospitalists. Notably, these studies examined all neuraxial procedures (most commonly epidural and spinal anesthesia), which employ different techniques and tools than LP and are performed by clinicians with vastly different procedural training backgrounds than most hospitalists. Altogether, this creates the potential for a gap between the true quality of the evidence and the strength of the recommendation.

At a high level, although the technique for ultrasound mapping of the lumbar spine may be similar across neuraxial procedures, ultrasound has been less well studied specifically for LP. When considering LP alone, the available literature is inadequate to recommend uniform ultrasound guidance. A 2019 meta-analysis by Gottlieb et al. included 12 studies focusing only on LP, totaling 957 patients.5 The pooled results showed some advantage for ultrasound guidance, with a success rate of 90% using ultrasound versus 81.4% with a landmark-based approach, and an odds ratio of 2.22 favoring ultrasound guidance (95% CI: 1.03-4.77). However, when the analysis was restricted to adult patients, the advantage of POCUS diminished: 91.4% success in the ultrasound group, 87.7% in the landmark group, and a nonsignificant odds ratio of 2.10 (95% CI: 0.66-7.44).
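As an aside for readers relating the reported odds ratios to the raw success rates: a crude (unweighted) odds ratio can be computed directly from two proportions, although it will not exactly match the published pooled estimates, which weight the individual studies. A minimal Python sketch, for illustration only:

```python
def crude_odds_ratio(p_treated: float, p_control: float) -> float:
    """Crude (unweighted) odds ratio for success in the treated group
    (ultrasound guidance) versus the control group (landmark approach)."""
    odds_treated = p_treated / (1 - p_treated)
    odds_control = p_control / (1 - p_control)
    return odds_treated / odds_control

# Aggregate success rates reported in the meta-analysis (all patients,
# then adults only). The published pooled ORs (2.22 and 2.10) weight
# individual studies, so they differ from these crude values.
print(round(crude_odds_ratio(0.900, 0.814), 2))  # 2.06
print(round(crude_odds_ratio(0.914, 0.877), 2))  # 1.49
```

The gap between the crude and pooled values is a reminder that meta-analytic odds ratios are not simple transformations of the aggregate success percentages quoted above.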

Unequivocally, POCUS has established itself as a transformative technology for the guidance of invasive bedside procedures, bringing increased procedural success, improved safety, and decreased complication rates.6 For some procedures, particularly central venous catheterization, ultrasound guidance is a clear standard of care.7,8 For LP, the greatest benefit has been observed in patients with anticipated procedural challenges, most commonly obese patients in whom landmarks are not easily palpable.9 Moreover, the harms that ultrasound guidance seeks to prevent differ substantially across procedures. The primary risk of deferring ultrasound guidance for LP is most often a failed procedure, whereas for other common ultrasound-guided procedures the potential harms include significant vascular injury, pneumothorax, or bowel perforation. These differences in relative harm make risk-benefit assessments harder to quantify and studies harder to carry out.

Sonographic guidance for LP has a role in clinical practice and should always be considered. At present, however, no other specialty, including anesthesia, emergency medicine, neurology, and interventional radiology, has issued guidelines recommending routine ultrasound guidance for LP.10-15 A conservative interpretation of the POCUS Task Force’s findings, therefore, is to consider ultrasound guidance for LP in patients in whom landmark identification is particularly challenging, but not to treat it as a standard requirement for accreditation, training, or practice at this time. Saying “more studies are required” can be a cop-out in some cases, but in this situation the familiar refrain does apply.

We have great respect for the work of the SHM POCUS Task Force in advancing the use of POCUS in hospital medicine. Though ultrasound is not currently mandated as a care standard for the performance of LP, we all can agree that POCUS does confer advantages for this procedure, particularly in a well-selected patient population. To continue to provide care of the highest quality, hospitalists must be encouraged to elevate their practice with POCUS and be supported with the equipment, training, credentialing, and quality assurance structures necessary to integrate bedside ultrasound safely and effectively into their diagnostic and procedural practice.


Disclosures

No conflicts of interest to disclose.

Funding

None.


References

1. Soni NJ, Schnobrich D, Matthews BK, et al. Point-of-care ultrasound for hospitalists: a position statement of the Society of Hospital Medicine. J Hosp Med. 2019;14(10):591-601. https://doi.org/10.12788/jhm.3079.
2. Soni NJ, Franco-Sadud R, Dobaidze K, et al. Recommendations on the use of ultrasound guidance for adult lumbar puncture: a position statement of the Society of Hospital Medicine. J Hosp Med. 2018;13(2):126-135. https://doi.org/10.12788/jhm.2940.
3. Fitch K, Bernstein SJ, Aguilar MD, et al. The RAND/UCLA Appropriateness Method User’s Manual. Santa Monica, CA: RAND Corporation; 2001.
4. Guyatt GH, Oxman AD, Vist GE, et al. GRADE: an emerging consensus on rating quality of evidence and strength of recommendations. BMJ. 2008;336(7650):924-926.
5. Gottlieb M, Holladay D, Peksa GD. Ultrasound-assisted lumbar punctures: a systematic review and meta-analysis. Acad Emerg Med. 2019;26(1):85-96. https://doi.org/10.1111/acem.13558.
6. Moore CL, Copel JA. Point-of-care ultrasonography. N Engl J Med. 2011;364(8):749-757. https://doi.org/10.1056/NEJMra0909487.
7. Shojania K, Duncan B, McDonald K, Wachter RM. Making Health Care Safer: A Critical Analysis of Patient Safety Practices. Rockville, MD: Agency for Healthcare Research and Quality; 2001. Evidence Report/Technology Assessment No. 43; AHRQ publication 01-E058.
8. Brass P, Hellmich M, Kolodziej L, Schick G, Smith AF. Ultrasound guidance versus anatomical landmarks for internal jugular vein catheterization. Cochrane Database Syst Rev. 2015;(1):CD006962. https://doi.org/10.1002/14651858.CD006962.pub2.
9. Peterson MA, Pisupati D, Heyming TW, Abele JA, Lewis RJ. Ultrasound for routine lumbar puncture. Acad Emerg Med. 2014;21(2):130-136. https://doi.org/10.1111/acem.12305.
10. American College of Emergency Physicians. Ultrasound guidelines: emergency, point-of-care, and clinical ultrasound guidelines in medicine. Ann Emerg Med. 2017;69(5):e27-e54. https://doi.org/10.1016/j.annemergmed.2016.08.457.
11. Neal JM, Brull R, Horn JL, et al. The Second American Society of Regional Anesthesia and Pain Medicine Evidence-Based Medicine Assessment of Ultrasound-Guided Regional Anesthesia: executive summary. Reg Anesth Pain Med. 2016;41(2):181-194. https://doi.org/10.1097/AAP.0000000000000331.
12. Practice guidelines for obstetric anesthesia: an updated report by the American Society of Anesthesiologists Task Force on Obstetric Anesthesia and the Society for Obstetric Anesthesia and Perinatology. Anesthesiology. 2016;124(2):270-300. https://doi.org/10.1097/ALN.0000000000000935.
13. Engelborghs S, Niemantsverdriet E, Struyfs H, et al. Consensus guidelines for lumbar puncture in patients with neurological diseases. Alzheimers Dement (Amst). 2017;8:111-126. https://doi.org/10.1016/j.dadm.2017.04.007.
14. American College of Radiology. ACR-SPR-SRU Practice Parameter for Performing and Interpreting Diagnostic Ultrasound Examinations. 2017. Available at https://www.acr.org/-/media/ACR/Files/Practice-Parameters/us-perf-interpret.pdf. Accessed April 15, 2019.
15. American College of Radiology. ACR-AIUM-SPR-SRU Practice Parameter for the Performance of an Ultrasound Examination of the Neonatal and Infant Spine. 2016. Available at https://www.acr.org/-/media/ACR/Files/Practice-Parameters/US-NeonatalSpine.pdf. Accessed April 15, 2019.

Issue
Journal of Hospital Medicine 14(10)
Page Number
636-637. Published online first June 10, 2019


Article Source
© 2019 Society of Hospital Medicine
Correspondence
Tiffany C Fong, MD; E-mail: [email protected]; Telephone: 410-955-8708.

Improving Respiratory Rate Accuracy in the Hospital: A Quality Improvement Initiative


Respiratory rate (RR) is an essential vital sign that is routinely measured in hospitalized adults and is a strong predictor of adverse events.1,2 Accordingly, RR is a key component of several widely used risk prediction scores, including the systemic inflammatory response syndrome (SIRS) criteria.3

Despite its clinical utility, RR is frequently measured inaccurately.4-7 One reason is that RR measurement, unlike that of the other vital signs, is not automated. The gold-standard technique, visual assessment of a resting patient, is perceived as time-consuming, so clinical staff frequently approximate RR through brief observation instead.8-11

Given its clinical importance and widespread inaccuracy, we conducted a quality improvement (QI) initiative to improve RR accuracy.

METHODS

Design and Setting

We conducted an interdisciplinary QI initiative using the plan–do–study–act (PDSA) methodology from July 2017 to February 2018. The initiative was set in a single 28-bed adult medical inpatient unit of a large, urban, safety-net hospital serving general internal medicine and hematology/oncology patients. Routine vital sign measurements on this unit occur at four- or six-hour intervals per physician orders and are performed by patient-care assistants (PCAs), who are nonregistered nursing support staff. PCAs use a vital signs cart equipped with automated tools to measure all vital signs except RR, which is assessed manually. PCAs are trained in vital sign measurement during a two-day onboarding orientation and four to six weeks of on-the-job training by experienced PCAs, and they are directly supervised by nursing operations managers. Prior to our QI initiative, there were no formal continuing education programs for PCAs and no performance audits of their clinical duties.

Intervention

We developed the intervention to address several important barriers and workflow inefficiencies identified through direct observation of PCA workflow and through engagement of stakeholders, including PCAs, nursing operations management, nursing leadership, and hospital administration (PDSA cycles 1-7 in Table). Our modified PCA vital sign workflow incorporated RR measurement during the approximately 30 seconds needed to complete automated blood pressure measurement, as previously described.12 Nursing administration purchased three stopwatches ($5 US each) to attach to the vital signs carts. One investigator (NK) participated in two monthly one-hour meetings, and three investigators (NK, KB, and SD) participated in 19 daily 15-minute huddles, to conduct stakeholder engagement and to educate and retrain PCAs on proper technique (total of 6.75 hours).

Evaluation

The primary aim of this QI initiative was to improve RR accuracy, which was evaluated using two distinct but complementary analyses: the prospective comparison of PCA-recorded RRs with gold-standard recorded RRs and the retrospective comparison of RRs recorded in electronic health records (EHR) on the intervention unit versus two control units. The secondary aims were to examine time to complete vital sign measurement and to assess whether the intervention was associated with a reduction in the incidence of SIRS specifically due to tachypnea.


Respiratory Rate Accuracy

PCA-recorded RRs were considered accurate if they were within ±2 breaths/min of a gold-standard RR measured by a trained study member (NK or KB). We performed gold-standard RR measurements for 100 observations each pre- and postintervention, within 30 minutes of the PCA measurement, to minimize Hawthorne bias.
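As a minimal sketch, the accuracy criterion above can be expressed as follows; the function names and the sample measurement pairs are illustrative, not study data:

```python
# Hypothetical sketch of the accuracy criterion: a PCA-recorded RR counts as
# accurate when it falls within ±2 breaths/min of the paired gold-standard RR.

def rr_accurate(pca_rr: int, gold_rr: int, tolerance: int = 2) -> bool:
    """True if the PCA measurement is within ±tolerance of the gold standard."""
    return abs(pca_rr - gold_rr) <= tolerance

def accuracy_rate(pairs) -> float:
    """Proportion of (pca_rr, gold_rr) pairs meeting the accuracy criterion."""
    return sum(rr_accurate(p, g) for p, g in pairs) / len(pairs)

pairs = [(18, 12), (16, 15), (20, 19), (14, 14)]  # illustrative observations
print(accuracy_rate(pairs))  # 0.75: three of four pairs within ±2 breaths/min
```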

We assessed the variability of RRs recorded in the EHR for all patients on the intervention unit as a proxy for accuracy, an approach we have employed previously.7 On the basis of prior research, we hypothesized that improving the accuracy of RR measurement would increase the variability and normality of the RR distribution.13 The EHR cohort included consecutive hospitalizations of patients admitted to either the intervention unit or one of two nonintervention general medicine inpatient units that served as concurrent controls. We grouped hospitalizations into a preintervention phase (March 1, 2017-July 22, 2017), a planning phase (July 23, 2017-December 3, 2017), and a postintervention phase (December 21, 2017-February 28, 2018). Hospitalizations during the teaching phase (December 3, 2017-December 21, 2017) were excluded, as were vital signs obtained in the emergency department or in a location different from the patient’s admission unit. We qualitatively assessed RR distributions using histograms, as we have done previously.7
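The intuition behind the variability proxy can be shown with a toy example using Python's standard statistics module; the recorded values below are hypothetical, not study data. If staff default to recording 18-20 breaths/min, the recorded distribution is artificially narrow, so genuine measurement should widen it:

```python
# Toy illustration of the variability proxy for RR accuracy.
import statistics

pre = [18, 18, 20, 18, 18, 20, 18, 20]   # clustered "default" recordings
post = [12, 14, 18, 16, 22, 13, 17, 20]  # wider spread of true measurements

# More accurate measurement should increase the spread of recorded RRs.
print(statistics.stdev(post) > statistics.stdev(pre))  # True
```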

We examined the distributions of RRs recorded in the EHR before and after intervention by individual PCAs on the intervention floor to assess for fidelity and adherence in the PCA uptake of the intervention.

Time

We compared the time to complete vital sign measurement among convenience samples of 50 unique observations pre- and postintervention using the Wilcoxon rank sum test.

SIRS Incidence

Because we hypothesized that improved RR accuracy would reduce falsely elevated RRs but have no impact on the other three SIRS criteria, we assessed changes in tachypnea-specific SIRS incidence, defined a priori as the presence of exactly two concurrent SIRS criteria, one of which was an elevated RR.3 We examined changes using a difference-in-differences approach with three different units of analysis (per vital sign measurement, per hospital-day, and per hospitalization); see the footnote of Appendix Table 1 for methodological details. All analyses were conducted using Stata 12.0 (StataCorp, College Station, Texas).
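A minimal sketch of the a priori tachypnea-specific SIRS definition follows. The thresholds reflect the ACCP/SCCM consensus criteria (reference 3), simplified to the RR component of the tachypnea criterion; the function names and input format are illustrative assumptions, not the study's actual code:

```python
# Hedged sketch: tachypnea-specific SIRS is defined as exactly two concurrent
# SIRS criteria, one of which is an elevated RR (>20 breaths/min).

def sirs_flags(temp_c: float, hr: int, rr: int, wbc: float) -> list:
    """Return the four SIRS criterion flags for one vital sign set."""
    return [
        temp_c > 38.0 or temp_c < 36.0,  # temperature criterion
        hr > 90,                         # heart rate criterion
        rr > 20,                         # respiratory rate criterion (tachypnea)
        wbc > 12.0 or wbc < 4.0,         # white blood cell count (x10^9/L)
    ]

def tachypnea_specific_sirs(temp_c, hr, rr, wbc) -> bool:
    """True when exactly two criteria are met and tachypnea is one of them."""
    flags = sirs_flags(temp_c, hr, rr, wbc)
    return sum(flags) == 2 and flags[2]

print(tachypnea_specific_sirs(37.0, 95, 22, 8.0))  # True: HR + RR criteria only
print(tachypnea_specific_sirs(38.5, 95, 22, 8.0))  # False: three criteria met
```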

RESULTS

Respiratory Rate Accuracy

Prior to the intervention, the median PCA-recorded RR was 18 (IQR 18-20) versus 12 (IQR 12-18) for the gold-standard RR (Appendix Figure 1), with only 36% of PCA measurements considered accurate. After the intervention, the median PCA-recorded RR was 14 (IQR 15-20) versus 14 (IQR 14-20) for the gold-standard RR, with an RR accuracy of 58% (P < .001).

For our analyses of RR distribution using EHR data, we included 143,447 unique RRs (Appendix Table 2). After the intervention, the normality of the RR distribution on the intervention unit increased, whereas the RR distributions on the control units remained qualitatively similar pre- and postintervention (Appendix Figure 2).

Although the variability of PCA-recorded RRs increased postintervention overall, notable differences existed among the 11 individual PCAs (Figure). Some PCAs (numbers 2, 7, and 10) shifted their narrow RR interquartile ranges lower by several breaths/min, whereas most other PCAs had both a reduced median RR and a widened interquartile range.


Time

Before the intervention, the median time to complete a vital sign set was 2:36 (IQR 2:04-3:20). After the intervention, this decreased to 1:55 (IQR 1:40-2:22; P < .001), a median reduction of 41 seconds per vital sign set.

SIRS Incidence

The intervention was associated with a 3.3% absolute reduction (95% CI, –6.4% to –0.005%) in tachypnea-specific SIRS incidence per hospital-day and a 7.8% absolute reduction (95% CI, –13.5% to –2.2%) per hospitalization (Appendix Table 1). We also observed a modest reduction in overall SIRS incidence after the intervention (2.9% lower per vital sign check, 4.6% lower per hospital-day, and 3.2% lower per hospitalization), although these reductions were not statistically significant.
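The difference-in-differences approach described in the Methods can be sketched as follows; the incidence rates below are hypothetical numbers chosen for illustration, not the study's data:

```python
# Minimal sketch of difference-in-differences: the change on the intervention
# unit beyond the concurrent change on the control units. Rates are
# hypothetical percentages per hospital-day.
pre_intervention, post_intervention = 15.0, 10.5  # intervention unit
pre_control, post_control = 14.0, 12.8            # control units

did = (post_intervention - pre_intervention) - (post_control - pre_control)
print(round(did, 1))  # -3.3
```

Subtracting the control-unit trend guards against attributing secular changes (e.g., seasonal case mix) to the intervention.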

DISCUSSION

Our QI initiative improved absolute RR accuracy by 22 percentage points (from 36% to 58%), saved PCAs a median of 41 seconds per vital sign set, and decreased the absolute proportion of hospitalizations with tachypnea-specific SIRS by 7.8 percentage points. Our intervention is a novel, interdisciplinary, low-cost, low-effort, low-tech approach that addressed known challenges to accurate RR measurement,8,9,11 as well as the key barriers identified in our initial PDSA cycles, by adding a time-keeping device to vital sign carts and standardizing a more efficient PCA vital sign workflow. Lastly, the intervention is potentially scalable because stakeholder engagement, education, and retraining of the entire PCA staff for the unit required only 6.75 hours.

While our primary goal was to improve RR accuracy, our QI initiative also improved vital sign efficiency. Extrapolating our findings to an eight-hour PCA shift caring for eight patients who require vital sign checks every four hours, we estimated that our intervention would save approximately 16 minutes and 24 seconds per PCA shift. This newfound time could be repurposed for other patient-care tasks or spent ensuring the accuracy of other vital signs, given that accurate monitoring may be neglected because of time constraints.11 Additionally, the improvement in RR accuracy reduced falsely elevated RRs and thus lowered SIRS incidence specifically due to tachypnea. Given that EHR-based sepsis alerts are often based on SIRS criteria, improved RR accuracy may also reduce alarm fatigue by lowering the rate of false-positive alerts.14
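The extrapolated time savings above follow from simple arithmetic, sketched here under the assumption that each patient receives three vital sign sets across the shift (at hours 0, 4, and 8):

```python
# Back-of-the-envelope check of the extrapolated per-shift time savings.
patients = 8
sets_per_patient = 3        # vital sign checks at hours 0, 4, and 8 (assumed)
seconds_saved_per_set = 41  # median reduction observed in this initiative

total = patients * sets_per_patient * seconds_saved_per_set  # 984 seconds
print(f"{total // 60}:{total % 60:02d} saved per shift")  # 16:24 saved per shift
```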

This initiative is not without limitations. First, generalizability to other hospitals, and even to other units within the same hospital, is uncertain. However, because this initiative was conducted in a safety-net hospital, we anticipate similar, if not greater, success in better-resourced hospitals. Second, the long-term durability of our intervention is unclear, although EHR RR variability remained steady for two months after the intervention (data not shown).

To ensure long-term sustainability and further improve RR accuracy, future PDSA cycles could include electing a PCA “vital signs champion” to reiterate the importance of RRs in clinical decision-making and to ensure adherence to the modified workflow. Nursing champions act as persuasive change agents who disseminate and implement healthcare change,15 and the same may be true of PCA champions. Additionally, future PDSA cycles could obviate the need for labor-intensive manual audits by leveraging EHR-based auditing to target education and retraining toward PCAs with minimal RR variability, thereby optimizing workflow adherence.

In conclusion, through a multipronged QI initiative we improved RR accuracy, increased the efficiency of vital sign measurement, and decreased SIRS incidence specifically due to tachypnea by reducing the number of falsely elevated RRs. This novel, low-cost, low-effort, low-tech approach can readily be implemented and disseminated in hospital inpatient settings.


Acknowledgments

The authors would like to acknowledge the meaningful contributions of Mr. Sudarshaan Pathak, RN, Ms. Shirly Koduvathu, RN, and Ms. Judy Herrington, MSN, RN, to this multidisciplinary initiative. We thank Mr. Christopher McKintosh, RN, for his support in data acquisition. Lastly, the authors would like to acknowledge all of the patient-care assistants involved in this QI initiative.

Disclosures

Dr. Makam reports grants from NIA/NIH, during the conduct of the study. All other authors have nothing to disclose.

Funding

This work is supported in part by the Agency for Healthcare Research and Quality-funded UT Southwestern Center for Patient-Centered Outcomes Research (R24HS022418). OKN is funded by the National Heart, Lung, and Blood Institute (K23HL133441), and ANM is funded by the National Institute on Aging (K23AG052603).

References

1. Fieselmann JF, Hendryx MS, Helms CM, Wakefield DS. Respiratory rate predicts cardiopulmonary arrest for internal medicine inpatients. J Gen Intern Med. 1993;8(7):354-360. https://doi.org/10.1007/BF02600071.
2. Hodgetts TJ, Kenward G, Vlachonikolis IG, Payne S, Castle N. The identification of risk factors for cardiac arrest and formulation of activation criteria to alert a medical emergency team. Resuscitation. 2002;54(2):125-131. https://doi.org/10.1016/S0300-9572(02)00100-4.
3. Bone RC, Sibbald WJ, Sprung CL. The ACCP-SCCM consensus conference on sepsis and organ failure. Chest. 1992;101(6):1481-1483.
4. Lovett PB, Buchwald JM, Sturmann K, Bijur P. The vexatious vital: neither clinical measurements by nurses nor an electronic monitor provides accurate measurements of respiratory rate in triage. Ann Emerg Med. 2005;45(1):68-76. https://doi.org/10.1016/j.annemergmed.2004.06.016.
5. Chen J, Hillman K, Bellomo R, et al. The impact of introducing medical emergency team system on the documentations of vital signs. Resuscitation. 2009;80(1):35-43. https://doi.org/10.1016/j.resuscitation.2008.10.009.
6. Leuvan CH, Mitchell I. Missed opportunities? An observational study of vital sign measurements. Crit Care Resusc. 2008;10(2):111-115.
7. Badawy J, Nguyen OK, Clark C, Halm EA, Makam AN. Is everyone really breathing 20 times a minute? Assessing epidemiology and variation in recorded respiratory rate in hospitalised adults. BMJ Qual Saf. 2017;26(10):832-836. https://doi.org/10.1136/bmjqs-2017-006671.
8. Chua WL, Mackey S, Ng EK, Liaw SY. Front line nurses’ experiences with deteriorating ward patients: a qualitative study. Int Nurs Rev. 2013;60(4):501-509. https://doi.org/10.1111/inr.12061.
9. De Meester K, Van Bogaert P, Clarke SP, Bossaert L. In-hospital mortality after serious adverse events on medical and surgical nursing units: a mixed methods study. J Clin Nurs. 2013;22(15-16):2308-2317. https://doi.org/10.1111/j.1365-2702.2012.04154.x.
10. Cheng AC, Black JF, Buising KL. Respiratory rate: the neglected vital sign. Med J Aust. 2008;189(9):531. https://doi.org/10.5694/j.1326-5377.2008.tb02163.x.
11. Mok W, Wang W, Cooper S, Ang EN, Liaw SY. Attitudes towards vital signs monitoring in the detection of clinical deterioration: scale development and survey of ward nurses. Int J Qual Health Care. 2015;27(3):207-213. https://doi.org/10.1093/intqhc/mzv019.
12. Keshvani N, Berger K, Nguyen OK, Makam AN. Roadmap for improving the accuracy of respiratory rate measurements. BMJ Qual Saf. 2018;27(8):e5. https://doi.org/10.1136/bmjqs-2017-007516.
13. Semler MW, Stover DG, Copland AP, et al. Flash mob research: a single-day, multicenter, resident-directed study of respiratory rate. Chest. 2013;143(6):1740-1744. https://doi.org/10.1378/chest.12-1837.
14. Makam AN, Nguyen OK, Auerbach AD. Diagnostic accuracy and effectiveness of automated electronic sepsis alert systems: a systematic review. J Hosp Med. 2015;10(6):396-402. https://doi.org/10.1002/jhm.2347.
15. Ploeg J, Skelly J, Rowan M, et al. The role of nursing best practice champions in diffusing practice guidelines: a mixed methods study. Worldviews Evid Based Nurs. 2010;7(4):238-251. https://doi.org/10.1111/j.1741-6787.2010.00202.x.

Journal of Hospital Medicine 14(11):673-677. Published online first June 10, 2019.
Related Articles

Respiratory rate (RR) is an essential vital sign that is routinely measured for hospitalized adults. It is a strong predictor of adverse events.1,2 Therefore, RR is a key component of several widely used risk prediction scores, including the systemic inflammatory response syndrome (SIRS).3

Despite its clinical utility, RR is inaccurately measured.4-7 One reason for the inaccurate measurement of RR is that RR measurement, in contrast to that of other vital signs, is not automated. The gold-standard technique for measuring RR is the visual assessment of a resting patient. Thus, RR measurement is perceived as time-consuming. Clinical staff instead frequently approximate RR through brief observation.8-11

Given its clinical importance and widespread inaccuracy, we conducted a quality improvement (QI) initiative to improve RR accuracy.

METHODS

Design and Setting

We conducted an interdisciplinary QI initiative by using the plan–do–study–act (PDSA) methodology from July 2017 to February 2018. The initiative was set in a single adult 28-bed medical inpatient unit of a large, urban, safety-net hospital consisting of general internal medicine and hematology/oncology patients. Routine vital sign measurements on this unit occur at four- or six-hour intervals per physician orders and are performed by patient-care assistants (PCAs) who are nonregistered nursing support staff. PCAs use a vital signs cart equipped with automated tools to measure vital signs except for RR, which is manually assessed. PCAs are trained on vital sign measurements during a two-day onboarding orientation and four to six weeks of on-the-job training by experienced PCAs. PCAs are directly supervised by nursing operations managers. Formal continuing education programs for PCAs or performance audits of their clinical duties did not exist prior to our QI initiative.

Intervention

Intervention development addressing several important barriers and workflow inefficiencies was based on the direct observation of PCA workflow and information gathering by engaging stakeholders, including PCAs, nursing operations management, nursing leadership, and hospital administration (PDSA cycles 1-7 in Table). Our modified PCA vital sign workflow incorporated RR measurement during the approximate 30 seconds needed to complete automated blood pressure measurement as previously described.12 Nursing administration purchased three stopwatches (each $5 US) to attach to vital signs carts. One investigator (NK) participated in two monthly one-hour meetings, and three investigators (NK, KB, and SD) participated in 19 daily 15-minute huddles to conduct stakeholder engagement and educate and retrain PCAs on proper technique (total of 6.75 hours).

Evaluation

The primary aim of this QI initiative was to improve RR accuracy, which was evaluated using two distinct but complementary analyses: the prospective comparison of PCA-recorded RRs with gold-standard recorded RRs and the retrospective comparison of RRs recorded in electronic health records (EHR) on the intervention unit versus two control units. The secondary aims were to examine time to complete vital sign measurement and to assess whether the intervention was associated with a reduction in the incidence of SIRS specifically due to tachypnea.

 

 

Respiratory Rate Accuracy

PCA-recorded RRs were considered accurate if the RR was within ±2 breaths of a gold-standard RR measurement performed by a trained study member (NK or KB). We conducted gold-standard RR measurements for 100 observations pre- and postintervention within 30 minutes of PCA measurement to avoid Hawthorne bias.

We assessed the variability of recorded RRs in the EHR for all patients in the intervention unit as a proxy for accuracy. We hypothesized on the basis of prior research that improving the accuracy of RR measurement would increase the variability and normality of distribution in RRs.13 This is an approach that we have employed previously.7 The EHR cohort included consecutive hospitalizations by patients who were admitted to either the intervention unit or to one of two nonintervention general medicine inpatient units that served as concurrent controls. We grouped hospitalizations into a preintervention phase from March 1, 2017-July 22, 2017, a planning phase from July 23, 2017-December 3, 2017, and a postintervention phase from December 21, 2017-February 28, 2018. Hospitalizations during the two-week teaching phase from December 3, 2017-December 21, 2017 were excluded. We excluded vital signs obtained in the emergency department or in a location different from the patient’s admission unit. We qualitatively assessed RR distribution using histograms as we have done previously.7

We examined the distributions of RRs recorded in the EHR before and after intervention by individual PCAs on the intervention floor to assess for fidelity and adherence in the PCA uptake of the intervention.

Time

We compared the time to complete vital sign measurement among convenience samples of 50 unique observations pre- and postintervention using the Wilcoxon rank sum test.

SIRS Incidence

Since we hypothesized that improved RR accuracy would reduce falsely elevated RRs but have no impact on the other three SIRS criteria, we assessed changes in tachypnea-specific SIRS incidence, which was defined a priori as the presence of exactly two concurrent SIRS criteria, one of which was an elevated RR.3 We examined changes using a difference-in-differences approach with three different units of analysis (per vital sign measurement, hospital-day, and hospitalization; see footnote for Appendix Table 1 for methodological details. All analyses were conducted using STATA 12.0 (StataCorp, College Station, Texas).

RESULTS

Respiratory Rate Accuracy

Prior to the intervention, the median PCA RR was 18 (IQR 18-20) versus 12 (IQR 12-18) for the gold-standard RR (Appendix Figure 1), with only 36% of PCA measurements considered accurate. After the intervention, the median PCA-recorded RR was 14 (IQR 15-20) versus 14 (IQR 14-20) for the gold-standard RR and a RR accuracy of 58% (P < .001).

For our analyses on RR distribution using EHR data, we included 143,447 unique RRs (Appendix Table 2). After the intervention, the normality of the distribution of RRs on the intervention unit had increased, whereas those of RRs on the control units remained qualitatively similar pre- and postintervention (Appendix Figure 2).

Notable differences existed among the 11 individual PCAs (Figure) despite observing increased variability in PCA-recorded RRs postintervention. Some PCAs (numbers 2, 7, and 10) shifted their narrow RR interquartile range lower by several breaths/minute, whereas most other PCAs had a reduced median RR and widened interquartile range.

 

 

Time

Before the intervention, the median time to complete vital sign measurements was 2:36 (IQR 2:04-3:20). After the intervention, the time to complete vital signs decreased to 1:55 (IQR, 1:40-2:22; P < .001), which was 41 less seconds on average per vital sign set.

SIRS Incidence

The intervention was associated with a 3.3% reduction (95% CI, –6.4% to –0.005%) in tachypnea-specific SIRS incidence per hospital-day and a 7.8% reduction (95% CI, –13.5% to –2.2%) per hospitalization (Appendix Table 1). We also observed a modest reduction in overall SIRS incidence after the intervention (2.9% less per vital sign check, 4.6% less per hospital-day, and 3.2% less per hospitalization), although these reductions were not statistically significant.

DISCUSSION

Our QI initiative improved the absolute RR accuracy by 22%, saved PCAs 41 seconds on average per vital sign measurement, and decreased the absolute proportion of hospitalizations with tachypnea-specific SIRS by 7.8%. Our intervention is a novel, interdisciplinary, low-cost, low-effort, low-tech approach that addressed known challenges to accurate RR measurement,8,9,11 as well as the key barriers identified in our initial PDSA cycles. Our approach includes adding a time-keeping device to vital sign carts and standardizing a PCA vital sign workflow with increased efficiency. Lastly, this intervention is potentially scalable because stakeholder engagement, education, and retraining of the entire PCA staff for the unit required only 6.75 hours.

While our primary goal was to improve RR accuracy, our QI initiative also improved vital sign efficiency. By extrapolating our findings to an eight-hour PCA shift caring for eight patients who require vital sign checks every four hours, we estimated that our intervention would save approximately 16:24 minutes per PCA shift. This newfound time could be repurposed for other patient-care tasks or could be spent ensuring the accuracy of other vital signs given that accurate monitoring may be neglected because of time constraints.11 Additionally, the improvement in RR accuracy reduced falsely elevated RRs and thus lowered SIRS incidence specifically due to tachypnea. Given that EHR-based sepsis alerts are often based on SIRS criteria, improved RR accuracy may also improve alarm fatigue by reducing the rate of false-positive alerts.14

This initiative is not without limitations. Generalizability to other hospitals and even other units within the same hospital is uncertain. However, because this initiative was conducted within a safety-net hospital, we anticipate at least similar, if not increased, success in better-resourced hospitals. Second, the long-term durability of our intervention is unclear, although EHR RR variability remained steady for two months after our intervention (data not shown).

To ensure long-term sustainability and further improve RR accuracy, future PDSA cycles could include electing a PCA “vital signs champion” to reiterate the importance of RRs in clinical decision-making and ensure adherence to the modified workflow. Nursing champions act as persuasive change agents that disseminate and implement healthcare change,15 which may also be true of PCA champions. Additionally, future PDSA cycles can obviate the need for labor-intensive manual audits by leveraging EHR-based auditing to target education and retraining interventions to PCAs with minimal RR variability to optimize workflow adherence.

In conclusion, through a multipronged QI initiative we improved RR accuracy, increased the efficiency of vital sign measurement, and decreased SIRS incidence specifically due to tachypnea by reducing the number of falsely elevated RRs. This novel, low-cost, low-effort, low-tech approach can readily be implemented and disseminated in hospital inpatient settings.

 

 

Acknowledgments

The authors would like to acknowledge the meaningful contributions of Mr. Sudarshaan Pathak, RN, Ms. Shirly Koduvathu, RN, and Ms. Judy Herrington MSN, RN in this multidisciplinary initiative. We thank Mr. Christopher McKintosh, RN for his support in data acquisition. Lastly, the authors would like to acknowledge all of the patient-care assistants involved in this QI initiative.

Disclosures

Dr. Makam reports grants from NIA/NIH, during the conduct of the study. All other authors have nothing to disclose.

Funding

This work is supported in part by the Agency for Healthcare Research and Quality-funded UT Southwestern Center for Patient-Centered Outcomes Research (R24HS022418). OKN is funded by the National Heart, Lung, and Blood Institute (K23HL133441), and ANM is funded by the National Institute on Aging (K23AG052603).

 

Respiratory rate (RR) is an essential vital sign that is routinely measured for hospitalized adults. It is a strong predictor of adverse events.1,2 Therefore, RR is a key component of several widely used risk prediction scores, including the systemic inflammatory response syndrome (SIRS).3

Despite its clinical utility, RR is inaccurately measured.4-7 One reason for the inaccurate measurement of RR is that RR measurement, in contrast to that of other vital signs, is not automated. The gold-standard technique for measuring RR is the visual assessment of a resting patient. Thus, RR measurement is perceived as time-consuming. Clinical staff instead frequently approximate RR through brief observation.8-11

Given its clinical importance and widespread inaccuracy, we conducted a quality improvement (QI) initiative to improve RR accuracy.

METHODS

Design and Setting

We conducted an interdisciplinary QI initiative by using the plan–do–study–act (PDSA) methodology from July 2017 to February 2018. The initiative was set in a single adult 28-bed medical inpatient unit of a large, urban, safety-net hospital consisting of general internal medicine and hematology/oncology patients. Routine vital sign measurements on this unit occur at four- or six-hour intervals per physician orders and are performed by patient-care assistants (PCAs) who are nonregistered nursing support staff. PCAs use a vital signs cart equipped with automated tools to measure vital signs except for RR, which is manually assessed. PCAs are trained on vital sign measurements during a two-day onboarding orientation and four to six weeks of on-the-job training by experienced PCAs. PCAs are directly supervised by nursing operations managers. Formal continuing education programs for PCAs or performance audits of their clinical duties did not exist prior to our QI initiative.

Intervention

Intervention development addressing several important barriers and workflow inefficiencies was based on the direct observation of PCA workflow and information gathering by engaging stakeholders, including PCAs, nursing operations management, nursing leadership, and hospital administration (PDSA cycles 1-7 in Table). Our modified PCA vital sign workflow incorporated RR measurement during the approximate 30 seconds needed to complete automated blood pressure measurement as previously described.12 Nursing administration purchased three stopwatches (each $5 US) to attach to vital signs carts. One investigator (NK) participated in two monthly one-hour meetings, and three investigators (NK, KB, and SD) participated in 19 daily 15-minute huddles to conduct stakeholder engagement and educate and retrain PCAs on proper technique (total of 6.75 hours).

Evaluation

The primary aim of this QI initiative was to improve RR accuracy, which was evaluated using two distinct but complementary analyses: the prospective comparison of PCA-recorded RRs with gold-standard recorded RRs and the retrospective comparison of RRs recorded in electronic health records (EHR) on the intervention unit versus two control units. The secondary aims were to examine time to complete vital sign measurement and to assess whether the intervention was associated with a reduction in the incidence of SIRS specifically due to tachypnea.

 

 

Respiratory Rate Accuracy

PCA-recorded RRs were considered accurate if the RR was within ±2 breaths of a gold-standard RR measurement performed by a trained study member (NK or KB). We conducted gold-standard RR measurements for 100 observations pre- and postintervention within 30 minutes of PCA measurement to avoid Hawthorne bias.

We assessed the variability of recorded RRs in the EHR for all patients in the intervention unit as a proxy for accuracy. We hypothesized on the basis of prior research that improving the accuracy of RR measurement would increase the variability and normality of distribution in RRs.13 This is an approach that we have employed previously.7 The EHR cohort included consecutive hospitalizations by patients who were admitted to either the intervention unit or to one of two nonintervention general medicine inpatient units that served as concurrent controls. We grouped hospitalizations into a preintervention phase from March 1, 2017-July 22, 2017, a planning phase from July 23, 2017-December 3, 2017, and a postintervention phase from December 21, 2017-February 28, 2018. Hospitalizations during the two-week teaching phase from December 3, 2017-December 21, 2017 were excluded. We excluded vital signs obtained in the emergency department or in a location different from the patient’s admission unit. We qualitatively assessed RR distribution using histograms as we have done previously.7

We examined the distributions of RRs recorded in the EHR before and after intervention by individual PCAs on the intervention floor to assess for fidelity and adherence in the PCA uptake of the intervention.

Time

We compared the time to complete vital sign measurement among convenience samples of 50 unique observations pre- and postintervention using the Wilcoxon rank sum test.

SIRS Incidence

Since we hypothesized that improved RR accuracy would reduce falsely elevated RRs but have no impact on the other three SIRS criteria, we assessed changes in tachypnea-specific SIRS incidence, which was defined a priori as the presence of exactly two concurrent SIRS criteria, one of which was an elevated RR.3 We examined changes using a difference-in-differences approach with three different units of analysis (per vital sign measurement, hospital-day, and hospitalization; see footnote for Appendix Table 1 for methodological details. All analyses were conducted using STATA 12.0 (StataCorp, College Station, Texas).

RESULTS

Respiratory Rate Accuracy

Prior to the intervention, the median PCA RR was 18 (IQR 18-20) versus 12 (IQR 12-18) for the gold-standard RR (Appendix Figure 1), with only 36% of PCA measurements considered accurate. After the intervention, the median PCA-recorded RR was 14 (IQR 15-20) versus 14 (IQR 14-20) for the gold-standard RR and a RR accuracy of 58% (P < .001).

For our analyses on RR distribution using EHR data, we included 143,447 unique RRs (Appendix Table 2). After the intervention, the normality of the distribution of RRs on the intervention unit had increased, whereas those of RRs on the control units remained qualitatively similar pre- and postintervention (Appendix Figure 2).

Notable differences existed among the 11 individual PCAs (Figure) despite observing increased variability in PCA-recorded RRs postintervention. Some PCAs (numbers 2, 7, and 10) shifted their narrow RR interquartile range lower by several breaths/minute, whereas most other PCAs had a reduced median RR and widened interquartile range.

 

 

Time

Before the intervention, the median time to complete vital sign measurements was 2:36 (IQR 2:04-3:20). After the intervention, the time to complete vital signs decreased to 1:55 (IQR, 1:40-2:22; P < .001), which was 41 less seconds on average per vital sign set.

SIRS Incidence

The intervention was associated with a 3.3% reduction (95% CI, –6.4% to –0.005%) in tachypnea-specific SIRS incidence per hospital-day and a 7.8% reduction (95% CI, –13.5% to –2.2%) per hospitalization (Appendix Table 1). We also observed a modest reduction in overall SIRS incidence after the intervention (2.9% less per vital sign check, 4.6% less per hospital-day, and 3.2% less per hospitalization), although these reductions were not statistically significant.

DISCUSSION

Our QI initiative improved the absolute RR accuracy by 22%, saved PCAs 41 seconds on average per vital sign measurement, and decreased the absolute proportion of hospitalizations with tachypnea-specific SIRS by 7.8%. Our intervention is a novel, interdisciplinary, low-cost, low-effort, low-tech approach that addressed known challenges to accurate RR measurement,8,9,11 as well as the key barriers identified in our initial PDSA cycles. Our approach includes adding a time-keeping device to vital sign carts and standardizing a PCA vital sign workflow with increased efficiency. Lastly, this intervention is potentially scalable because stakeholder engagement, education, and retraining of the entire PCA staff for the unit required only 6.75 hours.

While our primary goal was to improve RR accuracy, our QI initiative also improved vital sign efficiency. By extrapolating our findings to an eight-hour PCA shift caring for eight patients who require vital sign checks every four hours, we estimated that our intervention would save approximately 16:24 minutes per PCA shift. This newfound time could be repurposed for other patient-care tasks or could be spent ensuring the accuracy of other vital signs given that accurate monitoring may be neglected because of time constraints.11 Additionally, the improvement in RR accuracy reduced falsely elevated RRs and thus lowered SIRS incidence specifically due to tachypnea. Given that EHR-based sepsis alerts are often based on SIRS criteria, improved RR accuracy may also improve alarm fatigue by reducing the rate of false-positive alerts.14

This initiative is not without limitations. First, generalizability to other hospitals, and even to other units within the same hospital, is uncertain; however, because this initiative was conducted within a safety-net hospital, we anticipate at least similar, if not greater, success in better-resourced hospitals. Second, the long-term durability of our intervention is unclear, although EHR RR variability remained steady for two months after our intervention (data not shown).

To ensure long-term sustainability and further improve RR accuracy, future PDSA cycles could include electing a PCA “vital signs champion” to reiterate the importance of RRs in clinical decision-making and to ensure adherence to the modified workflow. Nursing champions act as persuasive change agents who disseminate and implement healthcare change,15 a role that PCA champions may fill as well. Additionally, future PDSA cycles could obviate the need for labor-intensive manual audits by leveraging EHR-based auditing to target education and retraining toward PCAs with minimal RR variability, thereby optimizing workflow adherence.

In conclusion, through a multipronged QI initiative we improved RR accuracy, increased the efficiency of vital sign measurement, and decreased SIRS incidence specifically due to tachypnea by reducing the number of falsely elevated RRs. This novel, low-cost, low-effort, low-tech approach can readily be implemented and disseminated in hospital inpatient settings.

 

 

Acknowledgments

The authors would like to acknowledge the meaningful contributions of Mr. Sudarshaan Pathak, RN, Ms. Shirly Koduvathu, RN, and Ms. Judy Herrington, MSN, RN, to this multidisciplinary initiative. We thank Mr. Christopher McKintosh, RN, for his support in data acquisition. Lastly, the authors would like to acknowledge all of the patient-care assistants involved in this QI initiative.

Disclosures

Dr. Makam reports grants from NIA/NIH, during the conduct of the study. All other authors have nothing to disclose.

Funding

This work is supported in part by the Agency for Healthcare Research and Quality-funded UT Southwestern Center for Patient-Centered Outcomes Research (R24HS022418). OKN is funded by the National Heart, Lung, and Blood Institute (K23HL133441), and ANM is funded by the National Institute on Aging (K23AG052603).

 

References

1. Fieselmann JF, Hendryx MS, Helms CM, Wakefield DS. Respiratory rate predicts cardiopulmonary arrest for internal medicine inpatients. J Gen Intern Med. 1993;8(7):354-360. https://doi.org/10.1007/BF02600071.
2. Hodgetts TJ, Kenward G, Vlachonikolis IG, Payne S, Castle N. The identification of risk factors for cardiac arrest and formulation of activation criteria to alert a medical emergency team. Resuscitation. 2002;54(2):125-131. https://doi.org/10.1016/S0300-9572(02)00100-4.
3. Bone RC, Sibbald WJ, Sprung CL. The ACCP-SCCM consensus conference on sepsis and organ failure. Chest. 1992;101(6):1481-1483.
4. Lovett PB, Buchwald JM, Sturmann K, Bijur P. The vexatious vital: neither clinical measurements by nurses nor an electronic monitor provides accurate measurements of respiratory rate in triage. Ann Emerg Med. 2005;45(1):68-76. https://doi.org/10.1016/j.annemergmed.2004.06.016.
5. Chen J, Hillman K, Bellomo R, et al. The impact of introducing medical emergency team system on the documentations of vital signs. Resuscitation. 2009;80(1):35-43. https://doi.org/10.1016/j.resuscitation.2008.10.009.
6. Leuvan CH, Mitchell I. Missed opportunities? An observational study of vital sign measurements. Crit Care Resusc. 2008;10(2):111-115.
7. Badawy J, Nguyen OK, Clark C, Halm EA, Makam AN. Is everyone really breathing 20 times a minute? Assessing epidemiology and variation in recorded respiratory rate in hospitalised adults. BMJ Qual Saf. 2017;26(10):832-836. https://doi.org/10.1136/bmjqs-2017-006671.
8. Chua WL, Mackey S, Ng EK, Liaw SY. Front line nurses’ experiences with deteriorating ward patients: a qualitative study. Int Nurs Rev. 2013;60(4):501-509. https://doi.org/10.1111/inr.12061.
9. De Meester K, Van Bogaert P, Clarke SP, Bossaert L. In-hospital mortality after serious adverse events on medical and surgical nursing units: a mixed methods study. J Clin Nurs. 2013;22(15-16):2308-2317. https://doi.org/10.1111/j.1365-2702.2012.04154.x.
10. Cheng AC, Black JF, Buising KL. Respiratory rate: the neglected vital sign. Med J Aust. 2008;189(9):531. https://doi.org/10.5694/j.1326-5377.2008.tb02163.x.
11. Mok W, Wang W, Cooper S, Ang EN, Liaw SY. Attitudes towards vital signs monitoring in the detection of clinical deterioration: scale development and survey of ward nurses. Int J Qual Health Care. 2015;27(3):207-213. https://doi.org/10.1093/intqhc/mzv019.
12. Keshvani N, Berger K, Nguyen OK, Makam AN. Roadmap for improving the accuracy of respiratory rate measurements. BMJ Qual Saf. 2018;27(8):e5. https://doi.org/10.1136/bmjqs-2017-007516.
13. Semler MW, Stover DG, Copland AP, et al. Flash mob research: a single-day, multicenter, resident-directed study of respiratory rate. Chest. 2013;143(6):1740-1744. https://doi.org/10.1378/chest.12-1837.
14. Makam AN, Nguyen OK, Auerbach AD. Diagnostic accuracy and effectiveness of automated electronic sepsis alert systems: a systematic review. J Hosp Med. 2015;10(6):396-402. https://doi.org/10.1002/jhm.2347.
15. Ploeg J, Skelly J, Rowan M, et al. The role of nursing best practice champions in diffusing practice guidelines: a mixed methods study. Worldviews Evid Based Nurs. 2010;7(4):238-251. https://doi.org/10.1111/j.1741-6787.2010.00202.x.


Issue
Journal of Hospital Medicine 14(11)
Page Number
673-677. Published online first June 10, 2019

© 2019 Society of Hospital Medicine

Correspondence
Neil Keshvani, MD; E-mail: [email protected]; Telephone: 214-648-2287; Twitter: @NeilKeshvani.

Adverse Events Experienced by Patients Hospitalized without Definite Medical Acuity: A Retrospective Cohort Study


Evidence exists that physicians consider what may be called “social” or “nonmedical” factors (lack of social support or barriers to access) in hospital admission decision-making and that patients are hospitalized even in the absence of a level of medical acuity warranting admission.1-3 Although hospitalization is associated with the risk of adverse events (AEs),4 whether this risk is related to the medical acuity of admission remains unclear. Our study sought to quantify the AEs experienced by patients hospitalized without definite medical acuity compared with those experienced by patients hospitalized with a definite medically appropriate indication for admission.

METHODS

Setting and Database Used for Analysis

This study was conducted at an urban, safety-net, public teaching hospital. At our site, calls for medical admissions are always answered by a hospital medicine attending physician (“triage physician”) who works collaboratively with the referring physician to facilitate appropriate disposition. Many of these discussions occur via telephone, but the triage physician may also assess the patient directly if needed. This study involved 24 triage physicians who directly assessed the patient in 65% of the cases.

At the time of each admission call, the triage physician logs the following information into a central triage database: date and time of call, patient location, reason for admission, assessment of appropriateness for medical floor, contributing factors to admission decision-making, and patient disposition.

Admission Appropriateness Group Designation

To be considered for inclusion in this study, calls must have originated from the emergency department and resulted in admission to the general medicine floor on either a resident teaching or hospitalist service from February 1, 2018 to June 1, 2018. This time frame was selected to avoid the start of a new academic cycle in late June that may confound AE rates.

The designation of appropriateness was determined by the triage physician’s logged response to triage database questions at the time of the admission call. Of the 748 admissions meeting inclusion criteria, 513 (68.6%) were considered definitely appropriate on the basis of the triage physician’s response to the question “Based ONLY on the medical reason for hospitalization, in your opinion, how appropriate is this admission to the medicine floor service?” Furthermore, 169 (22.6%) were considered without definite medical acuity on the basis of the triage physician’s indication that “severity of medical problems alone may not require inpatient hospitalization” (Appendix Figure 1).

Study Design

Following a retrospective cohort study design, we systematically sampled 150 admissions from those “admitted without definite medical acuity” to create the exposure group and 150 from the “definitely medically appropriate” admissions to create the nonexposure group. Our sampling method involved selecting every third record until reaching the target sample size. This method and group sizes were determined prior to beginning data collection. Given the expected incidence of 33% AEs in the unexposed group (consistent with previous reports of AEs using the trigger tool5), we anticipated that a total sample size of 300 would be appropriate to capture a relative risk of at least 1.5 with 80% power and 95% confidence level.
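The sample size reasoning can be checked with the standard two-proportion (arcsine) approximation. This recomputation is a sketch under stated assumptions (33% baseline incidence, relative risk 1.5, two-sided alpha 0.05, 80% power); the authors' exact method is not reported, so the resulting figure is indicative only and need not match the study's total of 300.

```python
import math

def n_per_group(p1, p2, alpha_z=1.959964, power_z=0.841621):
    """Approximate per-group sample size for detecting p1 vs p2
    using Cohen's arcsine effect size h. The default z values are
    the standard normal quantiles for two-sided 95% confidence
    and 80% power."""
    h = abs(2 * math.asin(math.sqrt(p1)) - 2 * math.asin(math.sqrt(p2)))
    return math.ceil(((alpha_z + power_z) / h) ** 2)

# 33% AE incidence in the unexposed group, relative risk of 1.5:
print(n_per_group(0.33, 0.33 * 1.5))  # ~70 per group under this approximation
```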

 

 

Chart review was performed to capture patient demographics, admission characteristics, and hospitalization outcomes. We captured the emergency severity index (ESI),6 a validated, reliable triage assessment score assigned by our emergency department, as a measurement of acute illness, and calculated the Charlson comorbidity index (CCI)7 as a measurement of chronic comorbidity.

Identification of Adverse Events

We measured AEs by using the Institute for Healthcare Improvement Global Trigger Tool,8,9 which is estimated to identify up to 10 times more AEs than other methods, such as voluntary reporting.5 This protocol includes 28 triggers in the Cares and Medication Modules that serve as indicators that an AE may have occurred. The presence of a trigger is not necessarily an AE but a clue for further analysis. Two investigators (AS and CS) independently and systematically searched for the presence of triggers within each patient chart. Trigger identification prompted in-depth analysis to confirm the occurrence of an AE and to characterize its severity by using the National Coordinating Council for Medication Error Reporting and Prevention categorization.10 An AE was coded when independent reviewers identified evidence of a preventable or nonpreventable “noxious and unintended event occurring in association with medical care.”9 By definition, any AEs identified were patient harms. Findings were reviewed weekly to ensure agreement, and discrepancies were adjudicated by a third investigator (MB).

All study data were collected by using REDCap electronic data capture tools hosted at the University of Washington.11 The University of Washington Institutional Review Board granted approval for this study.

Study Outcome and Statistical Analysis

The primary outcome was AEs per group with results calculated in three ways: AEs per 1,000 patient-days, AEs per 100 admissions, and percent of admissions with an AE. The risk ratio (RR) for the percent of admissions with an AE and the incidence rate ratio (IRR) for AEs per 1,000 patient-days were calculated for the comparison of significance.

Other data were analyzed by using Pearson’s chi-square test for categorical data, Student’s t test for normally distributed quantitative data, and the Wilcoxon rank-sum (Mann–Whitney) test for length of stay (due to skew). Analyses were conducted using Stata (version 15.1; StataCorp, College Station, TX).

This work follows standards for reporting observational studies as outlined in the STROBE statement.12

RESULTS

Patient Demographics

Both groups were predominantly white/non-Hispanic, male, and English-speaking (Table 1). More patients without definite medical acuity were covered by public insurance (78.9% vs 69.8%, P = .010) and discharged to homelessness (34.8% vs 22.6%, P = .041).

Measures of Illness

Patients considered definitely medically appropriate had lower ESI scores, indicative of more acute presentation, than those without definite medical acuity (2.73 [95% CI 2.64-2.81] vs 2.87 [95% CI 2.78-2.95], P = .026). There was no difference in CCI scores (Table 1).

Reason for Admission and Outcomes

Admissions considered definitely medically appropriate more frequently had an identified diagnosis/syndrome (66% vs 53%) or objective measurement (8.7% vs 2.7%) listed as the reason for admission, whereas patients admitted without definite medical acuity more frequently had undifferentiated symptoms (34.7% vs 24%) or other/disposition (6% vs 1.3%) listed. The most common factors that triage physicians cited as contributing to the decision to admit patients without definite medical acuity included homelessness (34%), lack of outpatient social support (32%), and substance use disorder (25%). More details are available in Appendix Tables 1 and 2.

 

 

Admissions without definite medical acuity were longer than those with definite medical acuity (6.6 vs 6.0 days, P = .038), but there was no difference in emergency department readmissions within 48 hours or hospital readmissions within 30 days (Table 1).

Adverse Events

We identified 76 AEs in 41 admissions without definite medical acuity (range 0-10 AEs per admission) and 63 AEs in 44 definitely medically appropriate admissions (range 0-4 AEs per admission). The percentage of admissions with an AE (27.3% vs 29.3%; RR 0.93, 95% CI 0.65-1.34, P = .70) and the rate of AEs per 1,000 patient-days (76.8 vs 70.4; IRR 1.09, 95% CI 0.77-1.55, P = .61) did not show statistically significant differences. The distribution of AE severity was similar between the two groups (Table 2). Most identified AEs caused temporary harm to the patient and were rated at severity levels E or F. Severe AEs, including at least one level I (patient death), occurred in both groups. The complete listing of positive triggers leading to adverse event identification by group and severity is available in Appendix Table 3.
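The reported risk ratio and its confidence interval follow from the admission counts given above (41 of 150 vs 44 of 150 admissions with at least one AE) via the standard log-scale variance for a risk ratio; a minimal sketch, not the study's actual analysis code:

```python
import math

def risk_ratio_ci(a, n1, b, n2, z=1.959964):
    """Risk ratio of a/n1 vs b/n2 with a 95% CI computed on the log scale."""
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)  # SE of log(RR)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# 41 of 150 admissions without definite medical acuity had an AE,
# vs 44 of 150 definitely medically appropriate admissions:
rr, lo, hi = risk_ratio_ci(41, 150, 44, 150)
print(f"RR {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # RR 0.93 (95% CI 0.65-1.34)
```

This reproduces the RR of 0.93 (95% CI 0.65-1.34) reported in the text.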

DISCUSSION

By using a robust, standardized method, we found that patients admitted without definite medical acuity experienced inpatient AEs at a rate similar to that of patients admitted for definitely medically appropriate reasons. While the groups were relatively similar overall in terms of demographics and chronic comorbidity, we found evidence of social vulnerability in the group admitted without definite medical acuity in the form of increased rates of homelessness, triage physician concern regarding the lack of outpatient social support, and disposition-related reasons for admission. That both groups suffered harm―including patient death―while admitted to the hospital is striking, in particular for those patients who were admitted because of the lack of suitable outpatient options.

The potential limitations to the generalizability of this work include the single-site, safety-net setting and the use of individual physician determination of admission appropriateness. The proportion of admissions without definite medical acuity reported here is similar to that reported in previously published admission decision-making studies,2,3 and the rate of AEs observed is similar to rates measured in other studies using the trigger tool methodology.5,13 These similarities suggest some commonality across settings. Our study treats the triage physician’s assessment as the marker of difference defining the two groups; this assessment is inherently subjective but reflects real-world, holistic decision-making. Notably, the triage physician assessment was corroborated by corresponding differences in the ESI score, an acute triage assessment completed by a clinician outside of our team.

This study adds foundational knowledge to the risk/benefit discussion surrounding the decision to admit. Physician admission decisions are likely influenced by concern for the safety of vulnerable patients. Our results suggest that considering the risk of hospitalization itself in this decision-making remains important.

References

1. Mushlin AI, Appel FA. Extramedical factors in the decision to hospitalize medical patients. Am J Public Health. 1976;66(2):170-172. https://doi.org/10.2105/AJPH.66.2.170.
2. Lewis Hunter AE, Spatz ES, Bernstein SL, Rosenthal MS. Factors influencing hospital admission of noncritically ill patients presenting to the emergency department: a cross-sectional study. J Gen Intern Med. 2016;31(1):37-44. https://doi.org/10.1007/s11606-015-3438-8.
3. Pope I, Burn H, Ismail SA, Harris T, McCoy D. A qualitative study exploring the factors influencing admission to hospital from the emergency department. BMJ Open. 2017;7(8):e011543. https://doi.org/10.1136/bmjopen-2016-011543.
4. Levinson DR. Adverse Events in Hospitals: National Incidence among Medicare Beneficiaries. 2010. https://oig.hhs.gov/oei/reports/oei-06-09-00090.pdf. Accessed May 20, 2019.
5. Classen DC, Resar R, Griffin F, et al. ‘Global trigger tool’ shows that adverse events in hospitals may be ten times greater than previously measured. Health Aff (Millwood). 2011;30(4):581-589. https://doi.org/10.1377/hlthaff.2011.0190.
6. Wuerz RC, Milne LW, Eitel DR, Travers D, Gilboy N. Reliability and validity of a new five-level triage instrument. Acad Emerg Med. 2000;7(3):236-242. https://doi.org/10.1111/j.1553-2712.2000.tb01066.x.
7. Charlson ME, Pompei P, Ales KL, MacKenzie CR. A new method of classifying prognostic comorbidity in longitudinal studies: development and validation. J Chron Dis. 1987;40:373-383. https://doi.org/10.1016/0021-9681(87)90171-8.
8. Resar RK, Rozich JD, Classen D. Methodology and rationale for the measurement of harm with trigger tools. Qual Saf Health Care. 2003;12(2):ii39-ii45. https://doi.org/10.1136/qhc.12.suppl_2.ii39.
9. Griffen FA, Resar RK. IHI Global Trigger Tool for Measuring Adverse Events (Second Edition). Cambridge, Massachusetts: Institute for Healthcare Improvement; 2009.
10. National Coordinating Council for Medication Error Reporting and Prevention (NCC MERP) Index for Categorizing Errors. https://www.nccmerp.org/types-medication-errors. Accessed May 20, 2019.
11. Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (REDCap)--a metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform. 2009;42(2):377-381. https://doi.org/10.1016/j.jbi.2008.08.010.
12. von Elm E, Altman DG, Egger M, Pocock SJ, Gøtzsche PC, Vandenbroucke JP. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies. Ann Intern Med. 2007;147(8):573-577.
13. Kennerly DA, Kudyakov R, da Graca B, et al. Characterization of adverse events detected in a large health care delivery system using an enhanced global trigger tool over a five-year interval. Health Serv Res. 2014;49(5):1407-1425. https://doi.org/10.1111/1475-6773.12163.

Author and Disclosure Information

1University of Washington School of Medicine, Seattle, Washington; 2Department of Medicine, Harborview Medical Center, University of Washington, Seattle, Washington.

Disclosures

No financial disclosures or funding sources to report.

Issue
Journal of Hospital Medicine 15(1)
Page Number
42-45. Published online first June 10, 2019

Evidence exists that physicians consider what may be called “social” or “nonmedical” factors (lack of social support or barriers to access) in hospital admission decision-making and that patients are hospitalized even in the absence of a level of medical acuity warranting admission.1-3 Although hospitalization is associated with the risk of adverse events (AEs),4 whether this risk is related to the medical acuity of admission remains unclear. Our study sought to quantify the AEs experienced by patients hospitalized without definite medical acuity compared with those experienced by patients hospitalized with a definite medically appropriate indication for admission.

METHODS

Setting and Database Used for Analysis

This study was conducted at an urban, safety-net, public teaching hospital. At our site, calls for medical admissions are always answered by a hospital medicine attending physician (“triage physician”) who works collaboratively with the referring physician to facilitate appropriate disposition. Many of these discussions occur via telephone, but the triage physician may also assess the patient directly if needed. This study involved 24 triage physicians who directly assessed the patient in 65% of the cases.

At the time of each admission call, the triage physician logs the following information into a central triage database: date and time of call, patient location, reason for admission, assessment of appropriateness for medical floor, contributing factors to admission decision-making, and patient disposition.

Admission Appropriateness Group Designation

To be considered for inclusion in this study, calls must have originated from the emergency department and resulted in admission to the general medicine floor on either a resident teaching or hospitalist service from February 1, 2018 to June 1, 2018. This time frame was selected to avoid the start of a new academic cycle in late June that may confound AE rates.

The designation of appropriateness was determined by the triage physician’s logged responses in the triage database at the time of the admission call. Of the 748 admissions meeting inclusion criteria, 513 (68.6%) were considered definitely appropriate on the basis of the triage physician’s response to the question “Based ONLY on the medical reason for hospitalization, in your opinion, how appropriate is this admission to the medicine floor service?” An additional 169 (22.6%) were considered to be without definite medical acuity on the basis of the triage physician’s indication that “severity of medical problems alone may not require inpatient hospitalization” (Appendix Figure 1).

Study Design

Using a retrospective cohort design, we systematically sampled 150 admissions from those “admitted without definite medical acuity” to create the exposure group and 150 from the “definitely medically appropriate” admissions to create the nonexposure group. Our sampling method involved selecting every third record until reaching the target sample size; the method and group sizes were determined before data collection began. Given an expected AE incidence of 33% in the unexposed group (consistent with previous reports of AEs using the trigger tool5), we anticipated that a total sample size of 300 would be sufficient to detect a relative risk of at least 1.5 with 80% power at a 95% confidence level.
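As a rough check, the stated sample size is consistent with the standard normal-approximation formula for comparing two independent proportions. The sketch below is illustrative only; the authors' exact power calculation may have differed.

```python
import math
from statistics import NormalDist

def n_per_group(p1, rr, power=0.80, alpha=0.05):
    """Normal-approximation sample size per group for detecting a
    relative risk `rr` against a baseline proportion `p1`."""
    p2 = min(p1 * rr, 1.0)
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    p_bar = (p1 + p2) / 2
    num = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

# Baseline AE incidence 33%, target relative risk 1.5
n = n_per_group(0.33, 1.5)   # roughly 140 per group, under the 300 total
```

With these inputs the formula yields just under 150 per group, in line with the study's two groups of 150.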

Chart review was performed to capture patient demographics, admission characteristics, and hospitalization outcomes. We captured the Emergency Severity Index (ESI),6 a validated, reliable triage score assigned by our emergency department, as a measure of acute illness, and calculated the Charlson Comorbidity Index (CCI)7 as a measure of chronic comorbidity.
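The CCI sums fixed weights over a patient's comorbid conditions. A minimal sketch, using a subset of the published Charlson weights (condition keys are illustrative names, not the authors' coding scheme, and age adjustment is omitted):

```python
# Subset of Charlson comorbidity weights (the full index covers 19
# conditions; some variants also add points for age, omitted here).
CHARLSON_WEIGHTS = {
    "myocardial_infarction": 1,
    "congestive_heart_failure": 1,
    "copd": 1,
    "diabetes_uncomplicated": 1,
    "moderate_severe_renal_disease": 2,
    "any_malignancy": 2,
    "moderate_severe_liver_disease": 3,
    "metastatic_solid_tumor": 6,
    "aids": 6,
}

def charlson_score(conditions):
    """Sum the weight of each documented comorbid condition."""
    return sum(CHARLSON_WEIGHTS[c] for c in conditions)

score = charlson_score(["congestive_heart_failure", "copd",
                        "metastatic_solid_tumor"])   # 1 + 1 + 6 = 8
```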

Identification of Adverse Events

We measured AEs by using the Institute for Healthcare Improvement Global Trigger Tool,8,9 which is estimated to identify up to 10 times more AEs than other methods, such as voluntary reporting.5 This protocol includes 28 triggers in the Cares and Medication Modules that serve as indicators that an AE may have occurred; the presence of a trigger is not itself an AE but a clue prompting further analysis. Two investigators (AS and CS) independently and systematically searched each patient chart for triggers. Trigger identification prompted in-depth review to confirm the occurrence of an AE and to characterize its severity using the National Coordinating Council for Medication Error Reporting and Prevention categorization.10 An AE was coded when independent reviewers identified evidence of a preventable or nonpreventable “noxious and unintended event occurring in association with medical care.”9 By definition, any AEs identified were patient harms. Findings were reviewed weekly to ensure agreement, and discrepancies were adjudicated by a third investigator (MB).

All study data were collected by using REDCap electronic data capture tools hosted at the University of Washington.11 The University of Washington Institutional Review Board granted approval for this study.

Study Outcome and Statistical Analysis

The primary outcome was AEs per group, calculated in three ways: AEs per 1,000 patient-days, AEs per 100 admissions, and the percentage of admissions with an AE. The risk ratio (RR) for the percentage of admissions with an AE and the incidence rate ratio (IRR) for AEs per 1,000 patient-days were calculated to test for between-group differences.
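These effect measures can be approximately reproduced from the counts reported in the Results (41/150 vs 44/150 admissions with an AE; 76 vs 63 total AEs). This is a sketch: patient-days are approximated from the mean lengths of stay (6.6 and 6.0 days), so the IRR and its interval may differ slightly from the published values.

```python
import math

def risk_ratio_ci(a, n1, b, n2):
    """Risk ratio (a/n1)/(b/n2) with a 95% log-normal confidence interval."""
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# 41/150 admissions with >=1 AE (without definite acuity) vs 44/150
rr, lo, hi = risk_ratio_ci(41, 150, 44, 150)   # ~0.93 (0.65-1.34)

def rate_ratio_ci(c1, t1, c2, t2):
    """Incidence rate ratio with a 95% CI based on Poisson counts."""
    irr = (c1 / t1) / (c2 / t2)
    se = math.sqrt(1 / c1 + 1 / c2)
    return (irr,
            math.exp(math.log(irr) - 1.96 * se),
            math.exp(math.log(irr) + 1.96 * se))

# 76 AEs over ~150 x 6.6 patient-days vs 63 AEs over ~150 x 6.0 patient-days
irr, ilo, ihi = rate_ratio_ci(76, 150 * 6.6, 63, 150 * 6.0)
```

The risk ratio and its interval match the published 0.93 (95% CI 0.65-1.34); the IRR comes out near 1.10 with approximated denominators, versus the published 1.09.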

Other data were analyzed using Pearson’s chi-square test for categorical data, the Student t test for normally distributed quantitative data, and the Wilcoxon rank-sum (Mann–Whitney) test for length of stay (due to skew). Analyses were conducted using Stata (version 15.1; StataCorp, College Station, TX).

This work follows standards for reporting observational studies as outlined in the STROBE statement.12

RESULTS

Patient Demographics

Both groups were predominantly white/non-Hispanic, male, and English-speaking (Table 1). More patients without definite medical acuity were covered by public insurance (78.9% vs 69.8%, P = .010) and discharged to homelessness (34.8% vs 22.6%, P = .041).

Measures of Illness

Patients considered definitely medically appropriate had lower ESI scores, indicative of more acute presentation, than those without definite medical acuity (2.73 [95% CI 2.64-2.81] vs 2.87 [95% CI 2.78-2.95], P = .026). There was no difference in CCI scores (Table 1).

Reason for Admission and Outcomes

Admissions considered definitely medically appropriate more frequently had an identified diagnosis/syndrome (66% vs 53%) or objective measurement (8.7% vs 2.7%) listed as the reason for admission, whereas admissions without definite medical acuity more frequently listed undifferentiated symptoms (34.7% vs 24%) or other/disposition reasons (6% vs 1.3%). The most common factors that triage physicians cited as contributing to the decision to admit patients without definite medical acuity were homelessness (34%), lack of outpatient social support (32%), and substance use disorder (25%). More details are available in Appendix Tables 1 and 2.

Admissions without definite medical acuity were longer than those with definite medical acuity (6.6 vs 6.0 days, P = .038), but there was no difference in emergency department readmissions within 48 hours or hospital readmissions within 30 days (Table 1).

Adverse Events

We identified 76 AEs in 41 of the admissions without definite medical acuity (range 0-10 AEs per admission) and 63 AEs in 44 of the definitely medically appropriate admissions (range 0-4 AEs per admission). Neither the percentage of admissions with at least one AE (27.3% vs 29.3%; RR 0.93, 95% CI 0.65-1.34, P = .70) nor the rate of AEs per 1,000 patient-days (76.8 vs 70.4; IRR 1.09, 95% CI 0.77-1.55, P = .61) differed significantly. The distribution of AE severity was similar between the two groups (Table 2). Most identified AEs caused temporary harm to the patient and were rated at severity levels E or F. Severe AEs, including at least one level I event (patient death), occurred in both groups. The complete listing of positive triggers leading to AE identification by group and severity is available in Appendix Table 3.

DISCUSSION

By using a robust, standardized method, we found that patients admitted without definite medical acuity experienced inpatient AEs at a rate similar to that of patients admitted for definitely medically appropriate reasons. While the groups were relatively similar overall in terms of demographics and chronic comorbidity, we found evidence of social vulnerability in the group admitted without definite medical acuity in the form of increased rates of homelessness, triage physician concern regarding the lack of outpatient social support, and disposition-related reasons for admission. That both groups suffered harm―including patient death―while admitted to the hospital is striking, particularly for those patients who were admitted because of the lack of suitable outpatient options.

Potential limitations to the generalizability of this work include the single-site, safety-net setting and the use of individual physician determination of admission appropriateness. The proportion of admissions without definite medical acuity reported here is similar to that reported in previously published admission decision-making studies,2,3 and the rate of AEs observed is similar to rates measured in other studies using the trigger tool methodology,5,13 suggesting some commonality across settings. Our study treats the triage physician’s assessment as the defining marker between the two groups; this assessment is inherently subjective but reflects real-world, holistic decision-making. Notably, the triage physician assessment was corroborated by corresponding differences in the ESI score, an acute triage assessment completed by a clinician outside of our team.

This study adds foundational knowledge to the risk/benefit discussion surrounding the decision to admit. Physician admission decisions are likely influenced by concern for the safety of vulnerable patients. Our results suggest that considering the risk of hospitalization itself in this decision-making remains important.

References

1. Mushlin AI, Appel FA. Extramedical factors in the decision to hospitalize medical patients. Am J Public Health. 1976;66(2):170-172. https://doi.org/10.2105/AJPH.66.2.170.
2. Lewis Hunter AE, Spatz ES, Bernstein SL, Rosenthal MS. Factors influencing hospital admission of noncritically ill patients presenting to the emergency department: a cross-sectional study. J Gen Intern Med. 2016;31(1):37-44. https://doi.org/10.1007/s11606-015-3438-8.
3. Pope I, Burn H, Ismail SA, Harris T, McCoy D. A qualitative study exploring the factors influencing admission to hospital from the emergency department. BMJ Open. 2017;7(8):e011543. https://doi.org/10.1136/bmjopen-2016-011543.
4. Levinson DR. Adverse Events in Hospitals: National Incidence among Medicare Beneficiaries. 2010. https://oig.hhs.gov/oei/reports/oei-06-09-00090.pdf. Accessed May 20, 2019.
5. Classen DC, Resar R, Griffin F, et al. ‘Global trigger tool’ shows that adverse events in hospitals may be ten times greater than previously measured. Health Aff (Millwood). 2011;30(4):581-589. https://doi.org/10.1377/hlthaff.2011.0190.
6. Wuerz RC, Milne LW, Eitel DR, Travers D, Gilboy N. Reliability and validity of a new five-level triage instrument. Acad Emerg Med. 2000;7(3):236-242. https://doi.org/10.1111/j.1553-2712.2000.tb01066.x.
7. Charlson ME, Pompei P, Ales KL, MacKenzie CR. A new method of classifying prognostic comorbidity in longitudinal studies: development and validation. J Chron Dis. 1987;40:373-383. https://doi.org/10.1016/0021-9681(87)90171-8.
8. Resar RK, Rozich JD, Classen D. Methodology and rationale for the measurement of harm with trigger tools. Qual Saf Health Care. 2003;12(2):ii39-ii45. https://doi.org/10.1136/qhc.12.suppl_2.ii39.
9. Griffen FA, Resar RK. IHI Global Trigger Tool for Measuring Adverse Events (Second Edition). Cambridge, Massachusetts: Institute for Healthcare Improvement; 2009.
10. National Coordinating Council for Medication Error Reporting and Prevention (NCC MERP) Index for Categorizing Errors. https://www.nccmerp.org/types-medication-errors. Accessed May 20, 2019.
11. Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (REDCap)--a metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform. 2009;42(2):377-381. https://doi.org/10.1016/j.jbi.2008.08.010.
12. von Elm E, Altman DG, Egger M, Pocock SJ, Gøtzsche PC, Vandenbroucke JP. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies. Ann Intern Med. 2007;147(8):573-577.
13. Kennerly DA, Kudyakov R, da Graca B, et al. Characterization of adverse events detected in a large health care delivery system using an enhanced global trigger tool over a five-year interval. Health Serv Res. 2014;49(5):1407-1425. https://doi.org/10.1111/1475-6773.12163.

Issue
Journal of Hospital Medicine 15(1)
Page Number
42-45. Published online first June 10, 2019
Article Source
© 2020 Society of Hospital Medicine
Correspondence Location
Maralyssa Bann, MD; E-mail: [email protected]; Telephone: 206-744-4529; Twitter: @mbann_md.

Breathing New Life into Vital Sign Measurement


As you review the electronic health record before rounds in the morning, you notice a red exclamation mark in the chart of a patient who was admitted two days ago for an acute chronic obstructive pulmonary disease (COPD) exacerbation. The patient’s respiratory rate (RR) this morning is recorded at 24 breaths per minute (bpm). His RR last evening was 16 bpm and he remains on two liters per minute of supplemental oxygen. No one has notified you that he is getting worse, but you stop by the room to confirm that he is clinically stable.

During rounds, the resident states “The respiratory rate is recorded as 24 bpm, which is high, but I never trust the respiratory rate.” You silently agree and confirm your mistrust of the recorded RR.

Elevated RR has been associated with numerous poor outcomes, including mortality after myocardial infarction1 and death and readmission after acute COPD exacerbation.2 Furthermore, RR is used in models to predict mortality and intensive care unit admission,3 as well as in models to identify and predict mortality from sepsis.4 Recorded RRs are frequently inaccurate,5 and medical staff lack confidence in recorded RR values.6 Based on this evidence, you feel justified in your mistrust of recorded RR values. You might even believe that until a high-tech RR monitoring system is invented and implemented at your hospital, human error will forever prevent you from knowing your patients’ true RRs.

However, there is hope. In this issue of the Journal of Hospital Medicine, Keshvani et al.7 describe a successful quality improvement project where they employed plan–do–study–act methodology in a single inpatient unit to improve the accuracy of recorded RR. Before their project, only 36% of RR measurements were accurate, and there was considerable heterogeneity in the RR measurement technique. To address this problem, an interdisciplinary team of patient care assistants (PCAs), nurses, physicians, and hospital administration developed a plan to identify barriers, improve workflow, and educate stakeholders in RR recording.

The authors created a low-cost, “low-tech” intervention that consisted of training and educating PCAs on the correct technique and the importance of RR measurement, modifying workflow to incorporate RR measurement into a 30-second period of automated blood pressure measurement, and adding stopwatches to the vital sign carts. The RR measurements obtained by PCAs were compared with the RR measurements obtained by trained team members to assess for accuracy. PCA-obtained RR measurements were also compared with two control units, both before and after the intervention. Secondary outcomes included time to complete vital sign measurements and the incidence of systemic inflammatory response syndrome (SIRS) specifically due to tachypnea. The authors hypothesized that improved RR accuracy would reduce the number of falsely elevated RRs and could reduce the rate of SIRS.

The intervention improved the accuracy of PCA-obtained RRs from 36% to 58% and decreased the median RR from 18 to 14 breaths per minute. The implementation also resulted in a more normal distribution of RR in the intervention unit compared with the control unit. Interestingly, this intervention did not increase the time spent in obtaining vital signs—in fact, the time to complete vital signs decreased from a median of 2:26 to 1:55 minutes. In addition, tachypnea-specific SIRS incidence was reduced by 7.8% per hospitalization. An important implication of this finding is that reducing the false-positive rate of SIRS could possibly decrease unnecessary testing, medical interventions, and alert fatigue.
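SIRS is conventionally defined as meeting at least two of four criteria: temperature >38°C or <36°C, heart rate >90, respiratory rate >20 (or PaCO2 <32 mm Hg), and white blood cell count >12,000 or <4,000/mm³. A minimal sketch (with the respiratory criterion simplified to RR alone) illustrates how a falsely elevated recorded RR can single-handedly tip a patient into SIRS:

```python
def sirs_count(temp_c, hr, rr, wbc_k):
    """Count SIRS criteria met; >=2 defines SIRS.
    Simplified: the respiratory criterion here uses RR only,
    omitting the alternative PaCO2 < 32 mm Hg branch."""
    return sum([
        temp_c > 38.0 or temp_c < 36.0,   # temperature criterion
        hr > 90,                          # heart rate criterion
        rr > 20,                          # respiratory rate criterion
        wbc_k > 12.0 or wbc_k < 4.0,      # WBC criterion (x1000/mm^3)
    ])

# Hypothetical patient: HR 92 plus a falsely recorded RR of 24 -> 2 criteria (SIRS)
as_recorded = sirs_count(37.0, 92, 24, 8.0)
# Same patient with an accurately measured RR of 14 -> 1 criterion (no SIRS)
as_measured = sirs_count(37.0, 92, 14, 8.0)
```

This is why lowering the median recorded RR from 18 to 14 breaths per minute can plausibly reduce tachypnea-specific SIRS flags without any change in the patients themselves.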

This project shows that meaningful interventions need not be expensive or technologically sophisticated to have very real clinical effects. It would be easy for a system to advocate for funding to purchase advanced monitors that purport to remove human error rather than first trying to improve human performance. Certainly, there is a role for advanced technologies—but improvement need not wait for, or be completely predicated on, them. The first barrier often raised when evaluating a potential improvement initiative is that “we don’t have time for that.” This project demonstrates that innovations to improve care can also benefit the care team and improve workflow. Certainly, this project is not definitive and should be replicated elsewhere, but it is an important first step.

In an era where technology is expanding rapidly and the pace of innovation is breathtaking, we have an obligation to ensure that we are getting the basics right. Further, we must not take core tasks—such as vital signs, physical examination, and medication reconciliation—for granted, nor should we accept that they are as they will be. We discuss and debate the merits of advanced imaging, artificial intelligence, and machine learning—which are certainly exciting advances—but we must occasionally pause, breathe, and examine our practice to make sure that we do not overlook things that are truly vital to our patients’ care.

Disclosures

The authors have nothing to disclose.


References

1. Barthel P, Wensel R, Bauer A, et al. Respiratory rate predicts outcome after acute myocardial infarction: a prospective cohort study. Eur Heart J. 2013;34(22):1644-1650. https://doi.org/10.1093/eurheartj/ehs420.
2. Flattet Y, Garin N, Serratrice J, Arnaud P, Stirnemann J, Carballo S. Determining prognosis in acute exacerbation of COPD. Int J Chron Obstruct Pulmon Dis. 2017;12:467-475. https://doi.org/10.2147/COPD.S122382.
3. Subbe CP, Kruger M, Rutherford P, Gemmel L. Validation of a modified early warning score in medical admissions. QJM. 2001;94(10):521-526. https://doi.org/10.1093/qjmed/94.10.521.
4. Seymour CW, Liu VX, Iwashyna TJ, et al. Assessment of clinical criteria for sepsis: for the third international consensus definitions for sepsis and septic shock (sepsis-3). JAMA. 2016;315(8):762-774. https://doi.org/10.1001/jama.2016.0288.
5. Badawy J, Nguyen OK, Clark C, Halm EA, Makam AN. Is everyone really breathing 20 times a minute? Assessing epidemiology and variation in recorded respiratory rate in hospitalised adults. BMJ Qual Saf. 2017;26(10):832-836. https://doi.org/10.1136/bmjqs-2017-006671.
6. Philip K, Richardson R, Cohen M. Staff perceptions of respiratory rate measurement in a general hospital. Br J Nurs. 2013;22(10):570-574. https://doi.org/10.12968/bjon.2013.22.10.570.
7. Keshvani N, Berger K, Gupta A, DePaola S, Nguyen O, Makam A. Improving respiratory rate accuracy in the hospital: a quality improvement initiative [published online ahead of print June 10, 2019]. J Hosp Med. 2019;14(11):673-677. https://doi.org/10.12788/jhm.3232.

Issue
Journal of Hospital Medicine 14(11)
Page Number
719-720. Published online first June 10, 2019

As you review the electronic health record before rounds in the morning, you notice a red exclamation mark in the chart of a patient who was admitted two days ago for an acute chronic obstructive pulmonary disease (COPD) exacerbation. The patient’s respiratory rate (RR) this morning is recorded at 24 breaths per minute (bpm). His RR last evening was 16 bpm and he remains on two liters per minute of supplemental oxygen. No one has notified you that he is getting worse, but you stop by the room to confirm that he is clinically stable.

During rounds, the resident states “The respiratory rate is recorded as 24 bpm, which is high, but I never trust the respiratory rate.” You silently agree and confirm your mistrust of the recorded RR.

Elevated RR has been associated with numerous poor outcomes, including mortality after myocardial infarction1 and death and readmission after acute COPD exacerbation.2 Furthermore, RR is used in models to predict mortality and intensive care unit admission,3 as well as in models to identify and predict mortality from sepsis.4 Recorded RRs are frequency inaccurate,5 and medical staff lack confidence in recorded RR values.6 Based on this evidence, you feel justified in your mistrust of recorded RR values. You might even believe that until a high-tech RR monitoring system is invented and implemented at your hospital, human error will forever prevent you from knowing your patients’ true RRs.

However, there is hope. In this issue of the Journal of Hospital Medicine, Keshvani et al.7 describe a successful quality improvement project where they employed plan–do–study–act methodology in a single inpatient unit to improve the accuracy of recorded RR. Before their project, only 36% of RR measurements were accurate, and there was considerable heterogeneity in the RR measurement technique. To address this problem, an interdisciplinary team of patient care assistants (PCAs), nurses, physicians, and hospital administration developed a plan to identify barriers, improve workflow, and educate stakeholders in RR recording.

The authors created a low-cost, “low-tech” intervention that consisted of training and educating PCAs on the correct technique and the importance of RR measurement, modifying workflow to incorporate RR measurement into a 30-second period of automated blood pressure measurement, and adding stopwatches to the vital sign carts. The RR measurements obtained by PCAs were compared with the RR measurements obtained by trained team members to assess for accuracy. PCA-obtained RR measurements were also compared with two control units, both before and after the intervention. Secondary outcomes included time to complete vital sign measurements and the incidence of systemic inflammatory response syndrome (SIRS) specifically due to tachypnea. The authors hypothesized that improved RR accuracy would reduce the number of falsely elevated RRs and could reduce the rate of SIRS.

The intervention improved the accuracy of PCA-obtained RRs from 36% to 58% and decreased the median RR from 18 to 14 breaths per minute. The implementation also resulted in a more normal distribution of RR in the intervention unit compared with the control unit. Interestingly, this intervention did not increase the time spent in obtaining vital signs—in fact, the time to complete vital signs decreased from a median of 2:26 to 1:55 minutes. In addition, tachypnea-specific SIRS incidence was reduced by 7.8% per hospitalization. An important implication of this finding is that reducing the false-positive rate of SIRS could possibly decrease unnecessary testing, medical interventions, and alert fatigue.

This project shows that meaningful interventions need not be expensive or overly technologic to have very real clinical effects. It would be very easy for a system to advocate for funding to purchase advanced monitors that purport to remove human error from the situation rather than trying first to improve human performance. Certainly, there is a role for advanced technologies—but improvement need not wait for, or be completely predicated on, these new technologies. The first barrier often expressed when evaluating a potential improvement initiative is that “we don’t have time for that”. This project demonstrates that innovations to improve care can also benefit the care team and improve workflow. Certainly, this project is not definitive and should be replicated elsewhere, but it is an important first step.

In an era where technology is expanding rapidly and the pace of innovation is breathtaking, we have an obligation to ensure that we are getting the basics right. Further, we must not take core tasks—such as vital signs, physical examination, and medication reconciliation—for granted, nor should we accept that they are as they will be. We discuss and debate the merits of advanced imaging, artificial intelligence, and machine learning­—which are certainly exciting advances—but we must occasionally pause, breathe, and examine our practice to make sure that we do not overlook things that are truly vital to our patients’ care.

 

 

Disclosures

The authors have nothing to disclose.

 

As you review the electronic health record before rounds in the morning, you notice a red exclamation mark in the chart of a patient who was admitted two days ago for an acute chronic obstructive pulmonary disease (COPD) exacerbation. The patient’s respiratory rate (RR) this morning is recorded at 24 breaths per minute (bpm). His RR last evening was 16 bpm and he remains on two liters per minute of supplemental oxygen. No one has notified you that he is getting worse, but you stop by the room to confirm that he is clinically stable.

During rounds, the resident states “The respiratory rate is recorded as 24 bpm, which is high, but I never trust the respiratory rate.” You silently agree and confirm your mistrust of the recorded RR.

Elevated RR has been associated with numerous poor outcomes, including mortality after myocardial infarction1 and death and readmission after acute COPD exacerbation.2 Furthermore, RR is used in models to predict mortality and intensive care unit admission,3 as well as in models to identify and predict mortality from sepsis.4 Recorded RRs are frequency inaccurate,5 and medical staff lack confidence in recorded RR values.6 Based on this evidence, you feel justified in your mistrust of recorded RR values. You might even believe that until a high-tech RR monitoring system is invented and implemented at your hospital, human error will forever prevent you from knowing your patients’ true RRs.

However, there is hope. In this issue of the Journal of Hospital Medicine, Keshvani et al.7 describe a successful quality improvement project where they employed plan–do–study–act methodology in a single inpatient unit to improve the accuracy of recorded RR. Before their project, only 36% of RR measurements were accurate, and there was considerable heterogeneity in the RR measurement technique. To address this problem, an interdisciplinary team of patient care assistants (PCAs), nurses, physicians, and hospital administration developed a plan to identify barriers, improve workflow, and educate stakeholders in RR recording.

The authors created a low-cost, “low-tech” intervention that consisted of training and educating PCAs on the correct technique and the importance of RR measurement, modifying workflow to incorporate RR measurement into a 30-second period of automated blood pressure measurement, and adding stopwatches to the vital sign carts. The RR measurements obtained by PCAs were compared with the RR measurements obtained by trained team members to assess for accuracy. PCA-obtained RR measurements were also compared with two control units, both before and after the intervention. Secondary outcomes included time to complete vital sign measurements and the incidence of systemic inflammatory response syndrome (SIRS) specifically due to tachypnea. The authors hypothesized that improved RR accuracy would reduce the number of falsely elevated RRs and could reduce the rate of SIRS.

The intervention improved the accuracy of PCA-obtained RRs from 36% to 58% and decreased the median RR from 18 to 14 breaths per minute. The implementation also resulted in a more normal distribution of RR in the intervention unit compared with the control unit. Interestingly, this intervention did not increase the time spent in obtaining vital signs—in fact, the time to complete vital signs decreased from a median of 2:26 to 1:55 minutes. In addition, tachypnea-specific SIRS incidence was reduced by 7.8% per hospitalization. An important implication of this finding is that reducing the false-positive rate of SIRS could possibly decrease unnecessary testing, medical interventions, and alert fatigue.
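The link between RR accuracy and SIRS is arithmetic: by the conventional definition, SIRS requires at least 2 of 4 criteria, and a respiratory rate above 20 bpm is one of them, so a falsely elevated recorded RR can single-handedly tip a borderline patient over the threshold. A minimal sketch of that logic (the thresholds below are the simplified conventional SIRS criteria, not values drawn from the study, and the function name is ours):

```python
# Simplified conventional SIRS criteria: SIRS is met when >= 2 of 4 are positive.
# Thresholds are the textbook definitions, not study-specific values.
def sirs_criteria(temp_c, heart_rate, resp_rate, wbc_k):
    """Return (number of positive criteria, whether tachypnea is positive)."""
    criteria = [
        temp_c > 38.0 or temp_c < 36.0,   # temperature
        heart_rate > 90,                  # tachycardia
        resp_rate > 20,                   # tachypnea
        wbc_k > 12.0 or wbc_k < 4.0,      # white blood cells (x10^3/uL)
    ]
    return sum(criteria), criteria[2]

# A patient with HR 95 and a recorded RR of 24 meets SIRS on 2 criteria;
# re-measuring the same patient at RR 14 drops the count below the threshold.
count_high, _ = sirs_criteria(37.0, 95, 24, 8.0)
count_low, _ = sirs_criteria(37.0, 95, 14, 8.0)
print(count_high >= 2, count_low >= 2)  # True False
```

In this toy scenario, correcting a single overestimated RR is the difference between flagging and not flagging SIRS, which is the mechanism behind the reported reduction in tachypnea-specific SIRS incidence.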

This project shows that meaningful interventions need not be expensive or technologically sophisticated to have very real clinical effects. It would be easy for a health system to advocate funding for advanced monitors that purport to remove human error rather than first trying to improve human performance. Certainly, there is a role for advanced technologies, but improvement need not wait for, or be completely predicated on, them. The first barrier often raised when evaluating a potential improvement initiative is that “we don’t have time for that.” This project demonstrates that innovations to improve care can also benefit the care team and improve workflow. Certainly, this project is not definitive and should be replicated elsewhere, but it is an important first step.

In an era where technology is expanding rapidly and the pace of innovation is breathtaking, we have an obligation to ensure that we are getting the basics right. Further, we must not take core tasks—such as vital signs, physical examination, and medication reconciliation—for granted, nor should we accept that they are as they will be. We discuss and debate the merits of advanced imaging, artificial intelligence, and machine learning—which are certainly exciting advances—but we must occasionally pause, breathe, and examine our practice to make sure that we do not overlook things that are truly vital to our patients’ care.

 

 

Disclosures

The authors have nothing to disclose.

 

References

1. Barthel P, Wensel R, Bauer A, et al. Respiratory rate predicts outcome after acute myocardial infarction: a prospective cohort study. Eur Heart J. 2013;34(22):1644-1650. https://doi.org/10.1093/eurheartj/ehs420.
2. Flattet Y, Garin N, Serratrice J, Arnaud P, Stirnemann J, Carballo S. Determining prognosis in acute exacerbation of COPD. Int J Chron Obstruct Pulmon Dis. 2017;12:467-475. https://doi.org/10.2147/COPD.S122382.
3. Subbe CP, Kruger M, Rutherford P, Gemmel L. Validation of a modified early warning score in medical admissions. QJM. 2001;94(10):521-526. https://doi.org/10.1093/qjmed/94.10.521.
4. Seymour CW, Liu VX, Iwashyna TJ, et al. Assessment of clinical criteria for sepsis: for the third international consensus definitions for sepsis and septic shock (sepsis-3). JAMA. 2016;315(8):762-774. https://doi.org/10.1001/jama.2016.0288.
5. Badawy J, Nguyen OK, Clark C, Halm EA, Makam AN. Is everyone really breathing 20 times a minute? Assessing epidemiology and variation in recorded respiratory rate in hospitalised adults. BMJ Qual Saf. 2017;26(10):832-836. https://doi.org/10.1136/bmjqs-2017-006671.
6. Philip K, Richardson R, Cohen M. Staff perceptions of respiratory rate measurement in a general hospital. Br J Nurs. 2013;22(10):570-574. https://doi.org/10.12968/bjon.2013.22.10.570.
7. Keshvani N, Berger K, Gupta A, DePaola S, Nguyen O, Makam A. Improving respiratory rate accuracy in the hospital: a quality improvement initiative [published online ahead of print June 10, 2019]. J Hosp Med. 2019;14(11):673-677. https://doi.org/10.12788/jhm.3232.


Issue
Journal of Hospital Medicine 14(11)
Page Number
719-720. Published online first June 10, 2019
© 2019 Society of Hospital Medicine

Correspondence Location
Timothy Capecchi, MD; E-mail: [email protected]; Telephone: (612) 625-2343.

Varicella vaccine delivers doubled benefit to children

As memory of disease fades, vaccine questioning emerges

Pediatric herpes zoster declined by 72% in the years following introduction of routine varicella vaccination, with the rates in vaccinated children 78% lower than those in unvaccinated children.


The benefit became largely apparent after children received the second vaccination in the recommended series, and persisted throughout childhood, Sheila Weinmann, PhD, of Kaiser Permanente Northern California, Oakland, and colleagues said.*

The analysis included 6.37 million children in the Kaiser Permanente database, 50% of whom were vaccinated for all or some of the study period stretching from 2003 to 2014. Overall, the crude lab-confirmed herpes zoster (HZ) incidence rate was 74/100,000 person-years. When stratified by vaccine status, the crude rate of HZ among vaccinated children was 78% lower than among unvaccinated children (38 vs. 170 cases per 100,000 person years).
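The 78% figure follows directly from the two stratified crude rates; a quick arithmetic check (rates as reported, per 100,000 person-years):

```python
# Crude lab-confirmed HZ incidence per 100,000 person-years, as reported.
rate_vaccinated = 38
rate_unvaccinated = 170

# Relative reduction in the vaccinated stratum vs. the unvaccinated stratum.
relative_reduction = 1 - rate_vaccinated / rate_unvaccinated
print(round(relative_reduction * 100))  # 78
```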

Herpes zoster was more common among girls than boys and up to six times more common in immunosuppressed children than in nonimmunosuppressed children.

The authors also found that unvaccinated children benefited from the high rate of vaccination around them. Although the HZ rate was always lower among vaccinated children, the rate among unvaccinated children fell sharply after 2007.

“The trend of decreasing HZ incidence among children who were unvaccinated is likely due to a lack of primary VZV [varicella-zoster virus] infection resulting from herd immunity in a highly vaccinated population,” Dr. Weinmann and her associates said.

There was some variability among age groups, especially among the youngest who were not fully vaccinated.

“In the group aged 1-2 years, the confirmation-adjusted HZ rate among children who were vaccinated was 70% higher than among those who were unvaccinated,” the authors said. In the older groups, however, “HZ rates were significantly higher in children who were unvaccinated than in those who were vaccinated,” the researchers noted.

The highest incidence was among vaccinated 1-year-olds, who had a 140% higher risk of HZ than did unvaccinated 1-year-olds. But this risk elevation disappeared by age 2 years. For everyone else, aged 2-17 years, the rate of HZ remained significantly lower in vaccinated children.

“Among the small number of children vaccinated at 11 months of age (for whom the vaccine is not recommended), the HZ incidence rate was significantly higher than in children vaccinated at 1 year of age and older. Similarly, children who contract wild-type varicella infection at younger than 1 year of age also have a higher risk of HZ (relative risk, 13.5). The immature adaptive T-cell response in children less than 1 year of age appears less able to contain VZV as a latent infection, compared with older children.

“Our findings for 11-month-olds who were vaccinated should be interpreted with caution because this population included only three cases of HZ and could have included children participating in a prelicensure study with a vaccine formulation different from Varivax,” Dr. Weinmann and her associates said.

Dr. Weinmann and her associates reported no relevant financial disclosures. The study was supported by the Centers for Disease Control and Prevention.

SOURCE: Weinmann S et al. Pediatrics. 2019 Jun 10. doi: 10.1542/peds.2018-2917.

* This article was updated 6/14/2019


The finding of a 78% lower incidence of zoster in varicella-vaccinated children is nothing short of “remarkable,” Anne A. Gershon, MD, wrote in an accompanying editorial.

But the benefit could be in jeopardy, as parents question the safety and effectiveness of all vaccines, she wrote.

“That the varicella vaccine prevents not only varicella but zoster as well is an exciting dual benefit from the varicella vaccine, further improving the health of children by immunization,” Dr. Gershon said. “Additional studies will be necessary to show the mechanism for the protection against zoster (viral, immunologic, or both), how long this benefit lasts, and whether additional doses of some form of VZV [varicella-zoster virus] vaccine will be more useful.”

But, she suggested, in a time when cases of clinical varicella are dwindling, so is public awareness of the vaccine’s benefit. Clinical varicella is worse for adults than it is for children.

“Efforts to immunize all children against chickenpox must continue to be made to protect our population from wild-type VZV. Fortunately, antiviral therapy is also available for individuals who are unvaccinated and develop varicella or zoster, but immunization is, as usual, preferable,” Dr. Gershon concluded.
 

Dr. Gershon, a pediatric infectious disease specialist, is a professor of pediatrics at Columbia University, New York. She wrote a commentary to accompany the article by Weinmann et al. (Pediatrics. 2019 Jun 10. doi: 10.1542/peds.2018-3561). Dr. Gershon had no relevant financial disclosures. The commentary was funded by the National Institutes of Health.



Vitals

 

Key clinical point: Varicella vaccine is preventing pediatric zoster among children aged 2-17 years.

Major finding: Varicella-vaccinated children have a 78% lower incidence of pediatric zoster than do unvaccinated children.

Study details: The population-based cohort study included more than 6.3 million children.

Disclosures: Dr. Weinmann and her associates reported no relevant financial disclosures. The study was supported by the Centers for Disease Control and Prevention.

Source: Weinmann S et al. Pediatrics. 2019. doi: 10.1542/peds.2018-2917.


Less Is More When It Comes to Ketorolac for Pain


Practice Changer

A 46-year-old man with no significant medical history presents to the emergency department (ED) with right flank pain and nausea. CT reveals a 5-mm ureteral stone with no obstruction or hydronephrosis. You are planning to start him on IV ketorolac for pain. What is the most appropriate dose?

Ketorolac tromethamine is a highly effective NSAID. As a nonopiate analgesic, it is often the first choice for the treatment of acute pain in the flank, abdomen, musculoskeletal system, or head.2 While it is not associated with euphoria, withdrawal effects, or respiratory depression (like its opiate analgesic counterparts), ketorolac carries an FDA black-box warning for gastrointestinal, cardiovascular, renal, and bleeding risks.3

NSAIDs are known to have a “ceiling dose” at which maximum analgesic benefit is achieved; higher doses will not provide further pain relief. Higher doses of ketorolac may be used when the anti-inflammatory effects of NSAIDs are desired, but they are likely to cause more adverse effects.4 Available data describe the ceiling dose of ketorolac as 10 mg across dosage forms—yet the majority of research and most health care providers in current practice use higher doses (20 to 60 mg).4,5 The FDA-approved labeling provides for a maximum dose of 60 mg/d.3

In one recent study, ketorolac was prescribed above its ceiling dose in at least 97% of patients who received IV doses and at least 96% of those who received intramuscular (IM) doses in a US ED.6 If 10 mg of ketorolac is an effective analgesic dose, current practice exceeds the label recommendation to use the lowest effective dose. This study sought to determine the comparative efficacy of 3 different doses of IV ketorolac for acute pain management in an ED.

STUDY SUMMARY

10 mg of ketorolac is enough for pain

This randomized double-blind trial evaluated the effectiveness of ketorolac in 240 adult patients (ages 18 to 65) presenting to an ED with acute flank, abdominal, musculoskeletal, or headache pain.1 Acute pain was defined as onset within the past 30 days.

Patients were randomly assigned to receive either 10, 15, or 30 mg of IV ketorolac in 10 mL of normal saline. A pharmacist prepared the medication in identical syringes, which were delivered in a blinded manner to the nurses caring for the patients. Pain (measured using a 0-to-10 scale), vital signs, and adverse effects were assessed at baseline and at 15, 30, 60, 90, and 120 minutes. If patients were still in pain at 30 minutes, IV morphine (0.1 mg/kg) was offered. The primary outcome was a numerical pain score at 30 minutes after ketorolac administration; secondary outcomes included the occurrence of adverse events and the use of rescue medication (morphine).
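The rescue-analgesia protocol was weight-based (0.1 mg/kg IV morphine); as a hypothetical illustration (the patient weight and function name are ours, not from the study):

```python
# Weight-based IV morphine rescue dose per the study protocol's 0.1 mg/kg.
def morphine_rescue_dose_mg(weight_kg, dose_mg_per_kg=0.1):
    """Return the rescue dose in milligrams for a given patient weight."""
    return weight_kg * dose_mg_per_kg

# For a hypothetical 70-kg patient:
print(morphine_rescue_dose_mg(70))  # 7.0
```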

The treatment groups were similar in terms of demographics and baseline vital signs. Mean age ranged from 39 to 42 years across the groups. Across the 3 groups, 36% to 40% of patients had abdominal pain, 26% to 39% had flank pain, 20% to 26% had musculoskeletal pain, and 1% to 11% had headache pain. Patients had experienced pain for an average of 1.5 to 3.5 days.


Baseline pain scores were similar for all 3 groups (7.5-7.8 on a 10-point scale). In the intention-to-treat analysis, all 3 doses of ketorolac decreased pain significantly at 30 minutes, with no difference between the groups: mean pain scores postintervention were 5.1 for the 10- and 15-mg groups and 4.8 for the 30-mg group. There was no difference between the groups at any other time intervals. There was also no difference between groups in the number of patients who needed rescue medication at 30 minutes (4 patients in the 10-mg group, 3 patients in the 15-mg group, and 4 patients in the 30-mg group). In addition, adverse events (eg, dizziness, nausea, headache, itching, flushing) did not differ between the groups.

WHAT’S NEW

10 mg is just as effective as 30 mg

This trial confirms that a low dose of IV ketorolac is just as effective as higher doses for acute pain control.

CAVEATS

2-hour limit; no look at long-term effects

It isn’t known whether the higher dose would have provided greater pain relief beyond the 120 minutes evaluated in this trial, or if alternative dosage forms (oral or IM) would result in different outcomes. This study was not designed to compare serious long-term adverse effects such as bleeding, renal impairment, or cardiovascular events. Additionally, this study was not powered to look at specific therapeutic indications or anti-inflammatory response.

 

CHALLENGES TO IMPLEMENTATION

10-mg single-dose vial not readily available

Ketorolac tromethamine for injection is available in the United States in 15-, 30-, and 60-mg single-dose vials. Because a 10-mg dose is not available as a single-dose vial, it would need to be specially prepared (as it was in this study). However, this study should reassure providers that using the lowest available dose (eg, 15 mg IV if that is what is available) will relieve acute pain as well as higher doses do.

ACKNOWLEDGEMENT

The PURLs Surveillance System was supported in part by Grant Number UL1RR024999 from the National Center For Research Resources, a Clinical Translational Science Award to the University of Chicago. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Center For Research Resources or the National Institutes of Health.

Copyright © 2019. The Family Physicians Inquiries Network. All rights reserved.

Reprinted with permission from the Family Physicians Inquiries Network and The Journal of Family Practice (2019;68[1]:41-42).

References

1. Motov S, Yasavolian M, Likourezos A, et al. Comparison of intravenous ketorolac at three single-dose regimens for treating acute pain in the emergency department: a randomized controlled trial. Ann Emerg Med. 2017; 70:177-184.
2. Buckley MM, Brogden RN. Ketorolac: a review of its pharmacodynamic and pharmacokinetic properties, and therapeutic potential. Drugs. 1990;39: 86-109.
3. Ketorolac tromethamine [package insert]. Bedford, OH: Bedford Laboratories; 2009.
4. Catapano MS. The analgesic efficacy of ketorolac for acute pain. J Emerg Med. 1996;14:67-75.
5. García Rodríguez LA, Cattaruzzi C, Troncon MG, et al. Risk of hospitalization for upper gastrointestinal tract bleeding associated with ketorolac, other nonsteroidal anti-inflammatory drugs, calcium antagonists, and other antihypertensive drugs. Arch Intern Med. 1998;158:33-39.
6. Soleyman-Zomalan E, Motov S, Likourezos A, et al. Patterns of ketorolac dosing by emergency physicians. World J Emerg Med. 2017;8:43-46.

Author and Disclosure Information

Corey Lyon and Liza W. Claus are with the University of Colorado Family Medicine Residency, Denver.

Issue
Clinician Reviews - 29(6)
Page Number
3e-4e

Practice Changer

A 46-year-old man with no significant medical history presents to the emergency department (ED) with right flank pain and nausea. CT reveals a 5-mm ureteral stone with no obstruction or hydronephrosis. You are planning to start him on IV ketorolac for pain. What is the most appropriate dose?

Ketorolac tromethamine is a highly effective NSAID. As a nonopiate analgesic, it is often the first choice for the treatment of acute pain in the flank, abdomen, musculoskeletal system, or head.2 While it is not associated with euphoria, withdrawal effects, or respiratory depression (like its opiate analgesic counterparts), ketorolac carries an FDA black-box warning for gastrointestinal, cardiovascular, renal, and bleeding risks.3

NSAIDs are known to have a “ceiling dose” at which maximum analgesic benefit is achieved; higher doses will not provide further pain relief. Higher doses of ketorolac may be used when the anti-inflammatory effects of NSAIDs are desired, but they are likely to cause more adverse effects.4 Available data describe the ceiling dose of ketorolac as 10 mg across dosage forms—yet the majority of research and most health care providers in current practice use higher doses (20 to 60 mg).4,5 The FDA-approved labeling provides for a maximum dose of 60 mg/d.3

In one recent study, ketorolac was prescribed above its ceiling dose in at least 97% of patients who received IV doses and at least 96% of those who received intramuscular (IM) doses in a US ED.6 If 10 mg of ketorolac is an effective analgesic dose, current practice exceeds the label recommendation to use the lowest effective dose. This study sought to determine the comparative efficacy of 3 different doses of IV ketorolac for acute pain management in an ED.

STUDY SUMMARY

10 mg of ketorolac is enough for pain

This randomized double-blind trial evaluated the effectiveness of ketorolac in 240 adult patients (ages 18 to 65) presenting to an ED with acute flank, abdominal, musculoskeletal, or headache pain.1 Acute pain was defined as onset within the past 30 days.

Patients were randomly assigned to receive either 10, 15, or 30 mg of IV ketorolac in 10 mL of normal saline. A pharmacist prepared the medication in identical syringes, which were delivered in a blinded manner to the nurses caring for the patients. Pain (measured using a 0-to-10 scale), vital signs, and adverse effects were assessed at baseline and at 15, 30, 60, 90, and 120 minutes. If patients were still in pain at 30 minutes, IV morphine (0.1 mg/kg) was offered. The primary outcome was a numerical pain score at 30 minutes after ketorolac administration; secondary outcomes included the occurrence of adverse events and the use of rescue medication (morphine).

The treatment groups were similar in terms of demographics and baseline vital signs. Mean age was 39 to 42 years. Across the 3 groups, 36% to 40% of patients had abdominal pain, 26% to 39% had flank pain, 20% to 26% had musculoskeletal pain, and 1% to 11% had headache pain. Patients had experienced pain for an average of 1.5 to 3.5 days.

Baseline pain scores were similar for all 3 groups (7.5-7.8 on a 10-point scale). In the intention-to-treat analysis, all 3 doses of ketorolac decreased pain significantly at 30 minutes, with no difference between the groups: mean pain scores postintervention were 5.1 for both the 10- and 15-mg groups and 4.8 for the 30-mg group. There was no difference between the groups at any other time interval. There was also no difference between groups in the number of patients who needed rescue medication at 30 minutes (4 patients in the 10-mg group, 3 in the 15-mg group, and 4 in the 30-mg group). In addition, adverse events (eg, dizziness, nausea, headache, itching, flushing) did not differ between the groups.

WHAT’S NEW

10 mg is just as effective as 30 mg

This trial confirms that a low dose of IV ketorolac is just as effective as higher doses for acute pain control.

CAVEATS

2-hour limit; no look at long-term effects

It isn’t known whether the higher dose would have provided greater pain relief beyond the 120 minutes evaluated in this trial, or if alternative dosage forms (oral or IM) would result in different outcomes. This study was not designed to compare serious long-term adverse effects such as bleeding, renal impairment, or cardiovascular events. Additionally, this study was not powered to look at specific therapeutic indications or anti-inflammatory response.

 

CHALLENGES TO IMPLEMENTATION

10-mg single-dose vial not readily available

Ketorolac tromethamine for injection is available in the United States in 15-, 30-, and 60-mg single-dose vials. Because a 10-mg dose is not available in a single-dose vial, it would need to be specially prepared (as it was in this study). However, this study should reassure providers that using the lowest available dose (eg, 15 mg IV) relieves acute pain as effectively as higher doses do. CR

ACKNOWLEDGEMENT

The PURLs Surveillance System was supported in part by Grant Number UL1RR024999 from the National Center For Research Resources, a Clinical Translational Science Award to the University of Chicago. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Center For Research Resources or the National Institutes of Health.

Copyright © 2019. The Family Physicians Inquiries Network. All rights reserved.

Reprinted with permission from the Family Physicians Inquiries Network and The Journal of Family Practice (2019;68[1]:41-42).


References

1. Motov S, Yasavolian M, Likourezos A, et al. Comparison of intravenous ketorolac at three single-dose regimens for treating acute pain in the emergency department: a randomized controlled trial. Ann Emerg Med. 2017; 70:177-184.
2. Buckley MM, Brogden RN. Ketorolac: a review of its pharmacodynamic and pharmacokinetic properties, and therapeutic potential. Drugs. 1990;39: 86-109.
3. Ketorolac tromethamine [package insert]. Bedford, OH: Bedford Laboratories; 2009.
4. Catapano MS. The analgesic efficacy of ketorolac for acute pain. J Emerg Med. 1996;14:67-75.
5. García Rodríguez LA, Cattaruzzi C, Troncon MG, et al. Risk of hospitalization for upper gastrointestinal tract bleeding associated with ketorolac, other nonsteroidal anti-inflammatory drugs, calcium antagonists, and other antihypertensive drugs. Arch Intern Med. 1998;158:33-39.
6. Soleyman-Zomalan E, Motov S, Likourezos A, et al. Patterns of ketorolac dosing by emergency physicians. World J Emerg Med. 2017;8:43-46.


Display Headline
Less Is More When It Comes to Ketorolac for Pain

Estimated prevalence of OSA in the Americas stands at 170 million


The estimated prevalence of obstructive sleep apnea in North and South America stands at 170 million, results from a novel epidemiologic analysis showed.

Dr. Atul Malhotra

“I would not have thought that there are 170 million people in the Americas with clinically important sleep apnea based on our conservative estimates,” the study’s first author, Atul Malhotra, MD, said in an interview in advance of the annual meeting of the Associated Professional Sleep Societies. “Even if we restrict the conversation to moderate to severe sleep apnea, we still see 81 million people afflicted in the Americas alone. We have recently estimated almost 1 billion patients afflicted with OSA worldwide.”

In an effort to estimate the Americas’ prevalence of adult OSA using existing data from epidemiologic studies, Dr. Malhotra, director of sleep medicine at the University of California, San Diego, senior author Adam V. Benjafield, PhD, and their colleagues contacted authors of important analyses on the topic following an exhaustive review of the literature. For countries where no measurement had been made, they used publicly available data to obtain estimates of age, sex, race, and body mass index. Next, they developed an algorithm to match countries without prevalence estimates with countries from which OSA epidemiologic studies exist. “The situation was complicated given the variable age of the existing studies, the differences in technology used (e.g., nasal pressure vs. thermistor), the changing scoring criteria, and other sources of variability,” the researchers wrote in their abstract.
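The country-matching step described above can be sketched as a simple nearest-neighbor search over demographic profiles. The sketch below is purely illustrative: the country names, the choice of variables (mean age, percent male, mean BMI), and all numbers are assumptions for demonstration, not the investigators' actual data or algorithm.

```python
# Illustrative sketch only: pair each country lacking an OSA prevalence
# study with the studied country whose demographic profile is closest
# in Euclidean distance. All names and figures are hypothetical.
import math

# (mean age, % male, mean BMI) -- hypothetical values
studied = {
    "CountryA": (38.0, 49.5, 28.9),
    "CountryB": (33.0, 49.0, 26.5),
}
unstudied = {
    "CountryC": (31.0, 49.8, 27.2),
}

def closest_match(profile, reference):
    """Return the studied country whose demographic profile has the
    smallest Euclidean distance to the given profile."""
    return min(reference, key=lambda name: math.dist(profile, reference[name]))

# Each unstudied country borrows the prevalence estimate of its match.
matches = {country: closest_match(p, studied) for country, p in unstudied.items()}
```

In this toy example, CountryC is demographically closer to CountryB than to CountryA, so it would inherit CountryB's prevalence estimate.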

Dr. Malhotra reported on data from 38 of 40 countries in the Americas. Drawing from American Academy of Sleep Medicine 2012 criteria and using what they characterized as a “somewhat conservative” approach, the researchers estimated the prevalence of adult OSA in the Americas to be 170 million, or 37% of the population. In addition, they estimate that 81 million adults, or 18% of the population, suffer from moderate to severe OSA based on an apnea hypopnea index of 15 or more per hour. The countries with the greatest burden of OSA are the United States (54 million), Brazil (49 million), and Colombia (11 million).

“The findings will hopefully help to raise awareness about the disease but also encourage a strategic conversation regarding how best to address this large burden,” Dr. Malhotra said. “We are unaware of prior efforts to estimate OSA prevalence on a large scale.”

He acknowledged certain limitations of the study, including that the methods, equipment, definitions, and criteria used in existing studies in the medical literature varied widely. “We did our best to harmonize these methods across studies but obviously we can’t change the equipment that was used in previous studies,” he said. “Thus, we view our findings as an estimate requiring further efforts to corroborate.”

The research stemmed from an academic/industry partnership with ResMed, which provided a donation to the UCSD Sleep Medicine Center. Dr. Malhotra reported having no financial disclosures. Dr. Benjafield is an employee of ResMed, a medical equipment company that specializes in sleep-related breathing devices.

SOURCE: Malhotra A et al. SLEEP 2019, Abstract 0477.

 

 

Article Source

REPORTING FROM SLEEP 2019

Vitals

Key clinical point: The large burden of OSA in the Americas has not been widely appreciated.

Major finding: The estimated prevalence of adult OSA in the Americas is 170 million, or 37% of the population.

Study details: An analysis of epidemiologic studies that included data on 38 countries in the Americas.

Disclosures: The research stemmed from an academic/industry partnership with ResMed, a medical equipment company that specializes in sleep-related breathing devices, which provided a donation to the UCSD Sleep Medicine Center. Dr. Malhotra reported having no financial disclosures. Dr. Benjafield is an employee of ResMed.

Source: Malhotra A et al. SLEEP 2019, Abstract 0477.


Immunotherapy drug teplizumab may stall onset of type 1 diabetes

Striking results, but questions still to be answered

The monoclonal antibody teplizumab may delay the onset of type 1 diabetes in individuals at high risk, according to research presented at the annual scientific sessions of the American Diabetes Association.

In this study, 76 first-degree relatives of individuals with type 1 diabetes – who did not themselves have the disease but were considered at high risk because of antibodies and abnormal glucose tolerance tests – were randomized to a single two-week outpatient course of intravenous teplizumab or saline placebo. The patients, of whom 72% were 18 years of age or younger, were followed for a median of 745 days and had twice-yearly oral glucose tolerance testing.

Overall, 43% of the 44 patients who received teplizumab were diagnosed with type 1 diabetes during the course of the study, compared with 72% of the 32 who received the placebo. The treatment was associated with a 59% reduction in the hazard ratio for type 1 diabetes, even after adjustment for age, the results of a second oral glucose tolerance test before randomization, and the presence of anti-GAD65 antibodies.

The median time to diagnosis was 48.4 months in the teplizumab group and 24.4 months in the placebo group. The greatest effect was seen in the first year after randomization, during which only 7% of the teplizumab group were diagnosed with type 1 diabetes, compared with 44% of the placebo group. The findings were published simultaneously in the New England Journal of Medicine.

“The delay of progression to type 1 diabetes is of clinical importance, particularly for children, in whom the diagnosis is associated with adverse outcomes, and given the challenges of daily management of the condition,” said Dr. Kevan C. Herold, professor of immunobiology and medicine at Yale University, New Haven, Conn., and coauthors.

There were significantly more adverse events in the teplizumab group, compared with placebo, with three-quarters of the 20 grade 3 adverse events being lymphopenia during the first 30 days. In all but one participant, however, the lymphopenia resolved by day 45. Participants receiving teplizumab also reported a higher incidence of dermatologic adverse events, such as a spontaneously resolving rash experienced by just over one-third of the group.

The researchers also looked for evidence of T-cell unresponsiveness, which has been previously seen in patients with new-onset type 1 diabetes who received treatment with teplizumab. They noted an increase in a particular type of CD8+ T cell associated with T-cell unresponsiveness at months 3 and 6 in participants treated with teplizumab.

Teplizumab is an Fc receptor-nonbinding monoclonal antibody that has been shown to reduce the loss of beta-cell function in patients with type 1 diabetes (Diabetes. 2013 Nov;62(11):3766-74).

The study was supported by the National Institutes of Health, the Juvenile Diabetes Research Foundation, and the American Diabetes Association, with the study drug and additional site monitoring provided by MacroGenics. Eight authors declared grants, personal fees, and other support from private industry, with one also declaring income and stock options from MacroGenics.

SOURCE: Herold K et al. NEJM. 2019 Jun 9. doi: 10.1056/NEJMoa1902226*

*Correction, 6/9/2019: An earlier version of this story misstated the doi number for the journal article. The number is 10.1056/NEJMoa1902226.


While the results of this trial are striking, there are several caveats that are important to note. The trial did show a significant delay in the onset of type 1 diabetes – with the greatest preventive benefit in the first year of the trial – but these results do not necessarily mean that immune modulation represents a potential cure.

They do, however, provide indirect evidence of the pathogenesis of beta-cell destruction and the potential for newer biologic agents to alter the course of this.

The study also was small and involved only a 2-week course of the treatment. As such, there are still questions to be answered about the duration of treatment, longer-term side effects, subgroups of patients who may respond differently to treatment, and the longer clinical course of those who do respond to treatment.

Julie R. Ingelfinger, MD, is deputy editor of the New England Journal of Medicine, and Clifford J. Rosen, MD, is from the Maine Medical Center Research Institute and is associate editor of the journal. Their comments are adapted from an accompanying editorial (NEJM 2019, Jun 9. doi: 10.1056/NEJMe1907458). No conflicts of interest were declared.

Meeting/Event
Publications
Topics
Sections
Meeting/Event
Meeting/Event
Body

While the results of this trial are striking, there are several caveats that are important to note. The trial did show a significant delay in the onset of type 1 diabetes – with the greatest preventive benefit in the first year of the trial – but these results do not necessarily mean that immune modulation represents a potential cure.

They do, however, provide indirect evidence of the pathogenesis of beta-cell destruction and the potential for newer biologic agents to alter the course of this.

The study also was small and involved only a 2-week course of the treatment. As such, there are still questions to be answered about the duration of treatment, longer-term side effects, sub-groups of patients who may respond differently to treatment, and the longer clinical course of those who do respond to treatment.

Julie R. Ingelfinger, MD, is deputy editor of the New England Journal of Medicine, and Clifford J. Rosen, MD, is from the Maine Medical Center Research Institute and is associate editor of the journal. Their comments are adapted from an accompanying editorial (NEJM 2019, Jun 9. doi: 10.1056/NEJMe1907458). No conflicts of interest were declared.

Body

While the results of this trial are striking, there are several caveats that are important to note. The trial did show a significant delay in the onset of type 1 diabetes – with the greatest preventive benefit in the first year of the trial – but these results do not necessarily mean that immune modulation represents a potential cure.

They do, however, provide indirect evidence of the pathogenesis of beta-cell destruction and the potential for newer biologic agents to alter the course of this.

The study also was small and involved only a 2-week course of the treatment. As such, there are still questions to be answered about the duration of treatment, longer-term side effects, sub-groups of patients who may respond differently to treatment, and the longer clinical course of those who do respond to treatment.

Julie R. Ingelfinger, MD, is deputy editor of the New England Journal of Medicine, and Clifford J. Rosen, MD, is from the Maine Medical Center Research Institute and is associate editor of the journal. Their comments are adapted from an accompanying editorial (NEJM 2019, Jun 9. doi: 10.1056/NEJMe1907458). No conflicts of interest were declared.

Title
Striking results, but questions still to be answered
Striking results, but questions still to be answered

The monoclonal antibody teplizumab may delay the onset of type 1 diabetes in individuals at high risk, according to research presented at the annual scientific sessions of the American Diabetes Association.

In this study, 76 first-degree relatives of individuals with type 1 diabetes – who did not themselves have the disease but were considered at high risk because of antibodies and abnormal glucose tolerance tests – were randomized to a single two-week outpatient course of intravenous teplizumab or saline placebo. The patients, of whom 72% were 18 years of age or younger, were followed for a median of 745 days and had twice-yearly oral glucose tolerance testing.

Overall, 43% of the 44 patients who received teplizumab were diagnosed with type 1 diabetes during the course of the study, compared with 72% of the 32 who received the placebo. The treatment was associated with a 59% reduction in the hazard ratio for type 1 diabetes, even after adjusting for age, the results of a second oral glucose-tolerance testing before randomization, or the presence of anti-GAD65 antibodies.

The median time to diagnosis was 48.4 months in the teplizumab group and 24.4 months in the placebo group. The greatest effect was seen in the first year after randomization, during which only 7% of the teplizumab group were diagnosed with type 1 diabetes, compared with 44% of the placebo group. The findings were published simultaneously in the New England Journal of Medicine.

“The delay of progression to type 1 diabetes is of clinical importance, particularly for children, in whom the diagnosis is associated with adverse outcomes, and given the challenges of daily management of the condition,” said Dr. Kevan C. Herold, professor of immunobiology and medicine at Yale University, New Haven, Conn., and coauthors.

There were significantly more adverse events in the teplizumab group, compared with placebo, with three-quarters of the 20 grade 3 adverse events being lymphopenia during the first 30 days. In all but one participant, however, the lymphopenia resolved by day 45. Participants receiving teplizumab also reported a higher incidence of dermatologic adverse events, such as a spontaneously resolving rash that was experienced by just over one-third of the group.

The researchers also looked for evidence of T-cell unresponsiveness, which has been previously seen in patients with new-onset type 1 diabetes who received treatment with teplizumab. They noted an increase in a particular type of CD8+ T cell associated with T-cell unresponsiveness at months 3 and 6 in participants treated with teplizumab.

Teplizumab is an Fc receptor-nonbinding monoclonal antibody that has been shown to reduce the loss of beta-cell function in patients with type 1 diabetes (Diabetes. 2013 Nov;62(11):3766-74).

The study was supported by the National Institutes of Health, the Juvenile Diabetes Research Foundation, and the American Diabetes Association, with the study drug and additional site monitoring provided by MacroGenics. Eight authors declared grants, personal fees, and other support from private industry, with one also declaring income and stock options from MacroGenics.

SOURCE: Herold K et al. NEJM. 2019 Jun 9. doi: 10.1056/NEJMoa1902226*

*Correction, 6/9/2019: An earlier version of this story misstated the doi number for the journal article. The number is 10.1056/NEJMoa1902226.

REPORTING FROM ADA 2019

Vitals

Key clinical point: Teplizumab may delay the onset of type 1 diabetes in individuals at risk.

Major finding: Teplizumab treatment was associated with a 59% lower hazard ratio for the diagnosis of type 1 diabetes.

Study details: Phase 2, randomized, double-blind, placebo-controlled trial in 76 participants.

Disclosures: The study was supported by the National Institutes of Health, the Juvenile Diabetes Research Foundation, and the American Diabetes Association, with the study drug and additional site monitoring provided by MacroGenics. Eight authors declared grants, personal fees, and other support from private industry, with one also declaring income and stock options from MacroGenics.

Source: Herold K et al. NEJM. 2019 Jun 9. doi: 10.1056/NEJMoa1902226.

Survival exceeds 90% in transplant for SCD

Article Type
Changed

FORT LAUDERDALE, FLA. — A multicenter pilot study of a prophylactic regimen for both matched sibling donor and unrelated donor bone marrow transplantation in adults with severe sickle cell disease has found similar overall and event-free survival rates between the two approaches, exceeding 90% and 85%, respectively, at one year, according to preliminary results presented at the annual meeting of the Foundation for Sickle Cell Disease Research.

The results have led to a Phase 2 single-arm, multicenter trial, known as STRIDE, to evaluate a reduced-toxicity preparative regimen consisting of busulfan (13.2 mg/kg), fludarabine (175 mg/m²), and antithymocyte globulin (ATG, 6 mg/kg), with cyclosporine or tacrolimus and methotrexate for graft-vs-host disease (GVHD) prophylaxis, in adults with sickle cell disease (SCD), said Lakshmanan Krishnamurti, MD, of Children’s Healthcare of Atlanta/Emory University. “The data are similar with 91% overall survival and 86% event-free survival,” he said.

The pilot study, published recently (Am J Hematol. 2019;94:446-54), indicated the effectiveness of non-myeloablative conditioning in SCD patients undergoing matched-sibling bone marrow transplant (BMT), with the higher-intensity busulfan/fludarabine/ATG regimen having proven effective in unrelated donor BMT for other conditions, Dr. Krishnamurti said.

The pilot study also found a three-year event-free survival (EFS) of 82%, along with statistically significant improvements in pain and health-related quality of life.

STRIDE is the first comparative study of BMT vs. standard of care in severe SCD, Dr. Krishnamurti added. The primary endpoint is overall survival at two years after biologic assignment, with longer-term outcomes including survival at three to 10 years post-hematopoietic stem cell transplantation (HSCT), and impact of BMT on sickle-related events, organ function, health-related quality of life and chronic pain.  

The pilot study included 22 patients between the ages of 17 and 36 who had BMT at eight centers. Seventeen patients received marrow from a sibling-matched donor and five patients received marrow from an unrelated donor. 

Dr. Krishnamurti referenced a recent study out of France that showed chimerism levels after transplant may be a determining physiological factor for outcomes (Haematologica. doi:10.3324/haematol.2018.213207). “So if chimerism is stable, somewhere in the 25% to 50% or better range, and hemoglobin levels are improved, this decreases hemolysis,” he said. “This is very important in understanding how to manage these patients.”

That study showed that rates of chronic GVHD up to 10 years post-transplant have steadily improved over the past three decades in patients with SCD who’ve had BMT, Dr. Krishnamurti noted. “But chronic GVHD is higher in patients age 16 to 30 vs. patients 15 and younger,” he said, “so that’s the reason to consider transplantation sooner in patients who have a matched sibling donor.”

The French study shows that BMT with sibling-matched donors has excellent outcomes in young children, Dr. Krishnamurti said. “Outcomes for adults with transplantation is becoming similar to that in children,” he added. “Age is an important predictor of outcomes, and the risk for progressive morbidity, impaired quality of life, and risk of mortality still exists in adults with sickle cell disease.”

The bottom line, he said, is that patients and caregivers must be given the opportunity to consider transplantation as an option at younger ages.

Dr. Krishnamurti did not disclose any financial relationships.

SOURCE: Krishnamurti L et al. FSCDR 2019

REPORTING FROM FSCDR 2019

Postpartum LARC uptake increased with separate payment

Article Type
Changed

The introduction of a separate payment for the immediate postpartum provision of long-acting reversible contraception was associated with increased use and a slowdown in the number of short-interval births in patients covered by South Carolina’s Medicaid program.

Immediate postpartum long-acting reversible contraception (IPP-LARC) is recommended to reduce the incidence of short pregnancy intervals – pregnancies within 6-24 months of each other. The global payment for hospital labor and delivery, however, may act as a disincentive to providing IPP-LARC, according to Maria W. Steenland of Brown University, Providence, R.I., and co-authors.

They looked at inpatient Medicaid claims data for 242,825 childbirth hospitalizations in South Carolina from 2010-2017; during that time the state Medicaid program began to provide an additional payment for IPP-LARC.

At the start of the study, just 0.07% of women received IPP-LARC. After the change in reimbursement policy in March 2012, there was a steady increase in its use of 0.07 percentage points per month among adults and 0.1 percentage points per month among adolescents. By December 2017, 5.65% of adults and 10.48% of adolescents received IPP-LARC (JAMA. 2019; doi: 10.1001/jama.2019.6854).
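The monthly slopes can be roughly reconciled with the December 2017 figures. A sketch, assuming a strictly linear trend from March 2012 through December 2017 and taking that span as roughly 69 months (both assumptions are mine, not the article’s):

```python
# Rough arithmetic check of the reported uptake trends.
# Assumptions (not from the article): the trend is linear over the whole
# period, and March 2012 -> December 2017 is treated as 69 months.
months = 69
baseline = 0.07          # % receiving IPP-LARC at the start of the study
adult_slope = 0.07       # percentage points per month, adults
adolescent_slope = 0.10  # percentage points per month, adolescents

adult_projected = baseline + adult_slope * months          # ~4.9%
adolescent_projected = baseline + adolescent_slope * months  # ~7.0%
print(round(adult_projected, 2), round(adolescent_projected, 2))
```

The linear projections (about 4.9% and 7.0%) land in the neighborhood of the reported December 2017 figures (5.65% and 10.48%); the gaps suggest uptake did not grow at a perfectly constant monthly rate over the full period.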

There was a corresponding, significant change in the trend of short-interval births among adolescents. Before the policy change, adolescent short-interval births had been increasing, but by March 2016 – 4 years after the payment change – the adolescent short-interval birth rate was 5.28 percentage points lower than what was expected had the increasing trend continued.

There was no significant change in the trend for short-interval births among adults.

“These findings suggest that IPP-LARC reimbursement could increase immediate postpartum contraceptive options and help adolescents avoid short-interval births,” the authors wrote, noting that as of February 2018, 36 other states’ Medicaid programs had begun separately reimbursing for IPP-LARC.

They also raised the possibility that there may have been confounding due to other events that occurred at the same time as the policy changes.

The study was supported by the Eric M. Mindich Research Fund, and one author was supported by the National Institutes of Health. No conflicts of interest were declared.

SOURCE: Steenland M et al. JAMA. 2019. doi: 10.1001/jama.2019.6854.

FROM JAMA