Calculating Risk for Poor Outcomes After Transcatheter Aortic Valve Replacement

From Saint Luke’s Mid America Heart Institute/University of Missouri–Kansas City, Kansas City, MO.

Abstract

  • Objective: To outline the tools available to help understand the risk of transcatheter aortic valve replacement (TAVR) and the gaps in knowledge regarding TAVR risk estimation.
  • Methods: Review of the literature.
  • Results: Two models developed and validated by the American College of Cardiology can be used to estimate the risk of short-term mortality: a 6-variable in-hospital model designed for clinical use and a 41-variable 30-day model designed primarily for site comparisons and quality improvement. Importantly, neither model should be used to inform the choice of TAVR versus surgical aortic valve replacement. Regarding long-term outcomes, a risk model to estimate the risk of dying or having a persistently poor quality of life at 1 year after TAVR has been developed and validated. The factors that most significantly increase a patient’s risk for poor outcomes are very poor functional status prior to TAVR, requiring home oxygen, chronic renal insufficiency, atrial fibrillation, dependencies in activities of daily living, and dementia. If a patient has 2 to 3 or more major risk factors for a poor outcome, this risk and the uncertainty about the degree of recovery expected after TAVR should be discussed with the patient (and family).
  • Conclusion: It is important to understand the patient factors that most strongly drive risk of poor outcomes after TAVR and use this information to set appropriate expectations for recovery.

Keywords: aortic valve stenosis; risk factors; postoperative complications; TAVR.

Among patients with severe aortic stenosis, trans­catheter aortic valve replacement (TAVR) has emerged as a less invasive option for aortic valve replacement. This procedure offers substantial reductions in mortality and improvement in quality of life compared with medical therapy1,2 and at least similar long-term outcomes compared with surgical aortic valve replacement (SAVR).3-9

As with any emerging technology, selecting the appropriate patients for TAVR—a procedure with high initial costs10—has been an area of active investigation. As TAVR was first introduced in patients who were considered inoperable, initial efforts focused on trying to identify the patients who did not improve functionally or live longer following TAVR. Termed Cohort C patients, these patients were thought to have too many comorbidities, be too sick, and have too little reserve to recover from TAVR, and in the early trials, represented a substantial minority of the patients. For example, in pivotal clinical trials of patients at high or extreme surgical risk, approximately 1 in 4 patients who were treated with TAVR were dead at 1 year.1,3,11 Furthermore, a number of patients who received TAVR were alive at 1 year but continued to have significant heart failure symptoms and functional limitations.2,4 Practitioners,12,13 regulators,14 and third-party payers15 have recommended that TAVR should not be offered to patients in whom valve replacement would not be expected to positively impact either their survival or quality of life, but how best to identify these patients has been less clear.

More recently, as the use of TAVR has moved down the risk spectrum, patient selection for TAVR has shifted to understanding which patients should be preferentially treated with TAVR versus SAVR. While patients often prefer a less invasive treatment option with faster recovery—which is what TAVR offers—there are lingering questions about valve longevity, need for a pacemaker (and the associated long-term implications), and the ability to treat other cardiovascular conditions (eg, Maze, mitral valve repair) that potentially make a patient a more appropriate candidate for valve surgery. This review outlines the tools currently available to help understand the risk of TAVR and the gaps in knowledge.

Short-Term Outcomes

When TAVR was initially introduced, the 30-day mortality rate was 5% to 8%.1,11,16 This high mortality rate was a function both of treating very ill patients and of the more invasive early procedures, with larger sheath sizes and routine use of general anesthesia, transesophageal echocardiography, pulmonary artery catheterization, and so on. Over time, however, this rate has declined substantially, with the 30-day mortality rate in intermediate- and low-risk patients now ranging from 0.5% to 1%.8,17-19 Although this low mortality rate indicates that the vast majority of patients will survive to discharge from the hospital, 2 models can be used to estimate the risk of short-term mortality: an in-hospital20 and a 30-day model,21 both developed and validated by the American College of Cardiology. The in-hospital model was designed for clinical use, as it includes only 6 variables (age, renal function, severe lung disease, non-femoral access, New York Heart Association class IV, and acuity of the procedure [elective versus urgent versus shock versus emergent])20 and has an online calculator (http://tools.acc.org/tavrrisk/). The 30-day model was developed for risk adjustment (primarily for site comparisons and quality improvement) and includes 41 variables (including pre-TAVR patient health status and gait speed).21
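To make the structure of such a risk model concrete, the following sketch shows how a logistic model combines a handful of pre-procedural variables into a predicted mortality probability. The variable names and coefficients below are invented for illustration only and are not the published ACC model weights; actual risk estimates should come from the online calculator noted above.

```python
import math

# Hypothetical coefficients for illustration of a logistic risk model.
# These are NOT the published ACC in-hospital TAVR model weights.
COEFFS = {
    "intercept": -5.0,
    "age_per_decade_over_70": 0.30,
    "renal_dysfunction": 0.80,
    "severe_lung_disease": 0.50,
    "non_femoral_access": 0.60,
    "nyha_class_iv": 0.70,
    "nonelective_procedure": 1.00,
}

def in_hospital_mortality_risk(patient: dict) -> float:
    """Predicted probability from a logistic model: p = 1 / (1 + e^-x)."""
    x = COEFFS["intercept"]
    for name, beta in COEFFS.items():
        if name != "intercept":
            x += beta * patient.get(name, 0)
    return 1.0 / (1.0 + math.exp(-x))

# An elective transfemoral patient with no comorbid risk factors
low_risk = in_hospital_mortality_risk({"age_per_decade_over_70": 1})
# The same patient with non-femoral access and a nonelective procedure
high_risk = in_hospital_mortality_risk({
    "age_per_decade_over_70": 1,
    "non_femoral_access": 1,
    "nonelective_procedure": 1,
})
assert high_risk > low_risk
```

The point of the sketch is structural: each variable contributes a fixed weight on the log-odds scale, which is why a small number of well-chosen variables (acuity in particular) can dominate the predicted risk.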

While 30 days is a better time frame for assessment because outcomes are less impacted by differences in local post-acute care facilities, we explicitly did not create a parsimonious 30-day mortality model for clinical use due to concern that such a model would invite indirect comparisons with the estimated risk of SAVR from the Society of Thoracic Surgeons risk model (http://riskcalc.sts.org/stswebriskcalc). It would be tempting to estimate a patient’s risk of mortality with both the TAVR calculator and the SAVR calculator and use those estimates to inform the choice of treatment; however, these risk estimates should not be directly compared to make treatment selections, as the models were built on entirely different patient populations. In real-world practice, there is minimal overlap in the characteristics of patients who are treated with TAVR and SAVR. For example, in an analysis that merged surgical and transcatheter databases, less than 25% of patients treated with TAVR could be matched to a clinically similar patient treated with SAVR.22 As such, these TAVR models should be used to estimate a patient’s risk for short-term mortality, but should not be used to contribute to the decision between TAVR and SAVR.

The decision to select SAVR over TAVR is typically driven by factors other than short- or long-term mortality (eg, whether TAVR will be covered by insurance, very young age and concern about durability, need to treat concomitant mitral regurgitation or aortopathy), as clinical trials have shown that survival and quality of life outcomes are at least as good with TAVR as with SAVR.6,7,9,23 In fact, in an analysis that compared similar patients treated with TAVR versus SAVR and specifically looked for patient factors that might make one treatment preferable to the other, patients who had a prior cardiac operation and those on home oxygen were more likely to do better with TAVR, whereas no patient factors that favored SAVR were found.24 The majority of patients, however, were expected to have similar long-term outcomes regardless of treatment choice, and as such, the benefit of TAVR appears mostly to be an earlier and easier recovery.

Long-Term Outcomes: Estimating the Risk for Failure to Recover

While many patients who undergo TAVR are quite ill prior to the procedure, with substantial limitations due to the fatigue and shortness of breath associated with severe aortic stenosis, most patients recover well after the procedure, with marked improvement in symptoms and functional capacity. However, approximately 25% to 35% of patients currently treated with TAVR commercially (ie, intermediate- and high-surgical-risk patients) either die or do not recover a reasonable quality of life after the procedure, and identifying those patients prior to the procedure can be challenging. We have previously developed and externally validated a risk model to estimate the risk of dying or having a persistently poor quality of life at 1 year after TAVR.25,26 The factors that most significantly increase a patient’s risk for poor outcomes are very poor functional status prior to TAVR, requiring home oxygen, chronic renal insufficiency, atrial fibrillation, and dementia. For example, a patient who is short of breath at rest, is on home oxygen, has a serum creatinine of 2.5 mg/dL, and has atrial fibrillation has an estimated risk of poor outcome at 1 year of ~70%. However, it should be noted that ~25% of patients with no risk factors for poor outcomes (ie, those considered “low risk”) still have a poor outcome at 1 year after TAVR, as patients who undergo TAVR are typically of advanced age with at least some comorbidities. Therefore, a 1-year mortality rate of 10% to 15% would not be unexpected in this population independent of TAVR, although this will likely change over time as TAVR expands to patients at low surgical risk.

Beyond clinical factors, frailty negatively impacts both survival and quality of life after TAVR. Frailty is a geriatric syndrome of impaired physiologic reserve and decreased resistance to stressors27 that is characterized by weakness, slowness, exhaustion, wasting, and low activity level. Across a wide variety of clinical situations (eg, pneumonia,28 myocardial infarction,29 general30,31 and cardiac surgery32,33), frailty increases the risk of morbidity and mortality after nearly any intervention34 or clinical insult, independent of traditional demographic and clinical risk factors. Frail patients often do better with less invasive interventions such as TAVR compared with traditional surgery, but nonetheless remain at increased risk for death35-37 or failure to recover quality of life and functional status25,37 after TAVR. However, there are unique challenges in both assessing and managing frailty in patients who are considered potential candidates for TAVR. One challenge is the lack of a laboratory or radiologic test for frailty; instead, the diminished physiologic reserve of frailty is identified through a combination of factors, such as slow gait speed, weak grip strength, and unintentional weight loss. While these factors readily identify frail patients in general elderly populations, in patients with severe symptomatic aortic stenosis these metrics can be impacted by the disease process itself. This distinction is important, as slow gait speed that is due to aortic stenosis will be “fixed” by TAVR, whereas slow gait speed from frailty identifies a patient who will have a difficult time recovering from the procedure.
For example, in the CoreValve High Risk Pivotal Trial, 80% of patients had a slow gait speed and 67% had a weak grip strength,5 and yet 58% of patients in this trial were alive and with a reasonable quality of life 1 year after TAVR.6 A number of studies have attempted to define true frailty within the pre-TAVR population (ie, decreased physiologic reserve and an impaired ability to recover from an insult), and the factors that appear to be most prognostically important are malnutrition38 or unintentional weight loss25 and the inability to be independent in activities of daily living (eg, dressing, feeding, transferring).25,37

Even with frailty assessments, predicting who is or is not going to have a poor outcome after TAVR (ie, using pre-procedural factors to identify patients who perhaps should not be offered TAVR because they will not recover from the procedure) is exceedingly difficult. The Table shows how to grossly estimate risk using the major factors that impact risk, based on the more precise estimates from our models.25,26

Estimation of Risk for Poor Outcome

The model shown in the Table can be used to estimate a patient’s risk for a poor outcome, but it should be noted that even at the extreme high end of risk, there will be some patients who still do well after TAVR. Furthermore, being high risk for a poor outcome after TAVR does not imply anything about how the patient would do without TAVR, as many of these patients would likely die even sooner or have worse quality of life with medical therapy only. However, if a patient has 2 to 3 or more major risk factors for a poor outcome, it may be worthwhile to have a serious conversation with the patient (and family) about this risk and the uncertainty about the degree of recovery expected after TAVR.
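The gross screen described above can be sketched as a simple count of major risk factors. The fragment below is a hypothetical illustration only: the factor names are taken from this review, but the counting approach and the threshold of 2 are assumptions for the sketch, not the published model, which assigns weighted points to each factor.

```python
# Major risk factors for poor outcome named in this review.
# A simple count is a gross screen, NOT the published weighted model.
MAJOR_RISK_FACTORS = [
    "very_poor_functional_status",
    "home_oxygen",
    "chronic_renal_insufficiency",
    "atrial_fibrillation",
    "adl_dependency",
    "dementia",
]

def count_major_risk_factors(patient: dict) -> int:
    """Count how many major risk factors are present for this patient."""
    return sum(1 for factor in MAJOR_RISK_FACTORS if patient.get(factor))

def warrants_expectations_discussion(patient: dict, threshold: int = 2) -> bool:
    """Flag patients with >= threshold major risk factors for a serious
    conversation about the uncertainty of recovery after TAVR."""
    return count_major_risk_factors(patient) >= threshold

patient = {
    "home_oxygen": True,
    "chronic_renal_insufficiency": True,
    "atrial_fibrillation": True,
}
assert warrants_expectations_discussion(patient)  # 3 factors present
```

Note that flagging a patient this way identifies a conversation to have, not a decision to withhold TAVR; as described above, even high-risk patients may do worse without the procedure.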

Conclusion

Calculating the risk of TAVR can be complicated. In patients who are electively treated using transfemoral access and a less invasive approach, the short-term risk of mortality is very low. Risk calculators can be used to estimate short-term risk, but the patients who are high risk for in-hospital mortality are often fairly easy to recognize, as the factors that drive that risk are not subtle (eg, the patient is in shock at the time of the procedure). The true risk of TAVR lies in the inability to recover from the procedure—being chronically ill, frail, or debilitated to a degree that the patient either dies or fails to recover a reasonable quality of life. Given the overlap of symptomatic aortic stenosis with true frailty, it is often difficult to identify these patients who will not thrive after TAVR. Understanding the patient factors that most strongly drive risk of poor outcomes after TAVR, and allowing this information to guide the conversation prior to TAVR so as to set appropriate expectations for recovery, can be a good place to start.

Corresponding author: Suzanne V. Arnold, MD, MHA, 4401 Wornall Rd., Kansas City, MO 64111.

Financial disclosures: This work was funded in part by grant K23HL116799 from the National Institutes of Health.

References

1. Leon MB, Smith CR, Mack M, et al. Transcatheter aortic-valve implantation for aortic stenosis in patients who cannot undergo surgery. N Engl J Med. 2010;363:1597-1607.

2. Reynolds MR, Magnuson EA, Lei Y, et al. Health-related quality of life after transcatheter aortic valve replacement in inoperable patients with severe aortic stenosis. Circulation. 2011;124:1964-1972.

3. Smith CR, Leon MB, Mack MJ, et al. Transcatheter versus surgical aortic-valve replacement in high-risk patients. N Engl J Med. 2011;364:2187-2198.

4. Reynolds MR, Magnuson EA, Wang K, et al. Health-related quality of life after transcatheter or surgical aortic valve replacement in high-risk patients with severe aortic stenosis: results from the PARTNER (Placement of AoRTic TraNscathetER Valve) trial (Cohort A). J Am Coll Cardiol. 2012;60:548-558.

5. Adams DH, Popma JJ, Reardon MJ, et al. Transcatheter aortic-valve replacement with a self-expanding prosthesis. N Engl J Med. 2014;370:1790-1798.

6. Arnold SV, Reynolds MR, Wang K, et al. Health status after trans­catheter or surgical aortic valve replacement in patients with severe aortic stenosis at increased surgical risk: results from the CoreValve US Pivotal trial. JACC Cardiovasc Interv. 2015;8:1207-1217.

7. Leon MB, Smith CR, Mack MJ, et al. Transcatheter or surgical aortic-valve replacement in intermediate-risk patients. N Engl J Med. 2016;374:1609-1620.

8. Reardon MJ, Van Mieghem NM, Popma JJ, et al. Surgical or trans­catheter aortic-valve replacement in intermediate-risk patients. N Engl J Med. 2017;376:1321-1331.

9. Baron SJ, Arnold SV, Wang K, et al. Health status benefits of trans­catheter vs surgical aortic valve replacement in patients with severe aortic stenosis at intermediate surgical risk: results from the PARTNER 2 randomized clinical trial. JAMA Cardiol. 2017;2:837-845.

10. Reynolds MR, Magnuson EA, Wang K, et al. Cost-effectiveness of transcatheter aortic valve replacement compared with standard care among inoperable patients with severe aortic stenosis: results from the placement of aortic transcatheter valves (PARTNER) trial (Cohort B). Circulation. 2012;125:1102-1109.

11. Popma JJ, Adams DH, Reardon MJ, et al. Transcatheter aortic valve replacement using a self-expanding bioprosthesis in patients with severe aortic stenosis at extreme risk for surgery. J Am Coll Cardiol. 2014;63:1972-1981.

12. Vahanian A, Alfieri O, Al-Attar N, et al. Transcatheter valve implantation for patients with aortic stenosis: a position statement from the European Association of Cardio-Thoracic Surgery (EACTS) and the European Society of Cardiology (ESC), in collaboration with the European Association of Percutaneous Cardiovascular Interventions (EAPCI). Eur Heart J. 2008;29:1463-1470.

13. Holmes DR Jr, Mack MJ, Kaul S, et al. 2012 ACCF/AATS/SCAI/STS expert consensus document on transcatheter aortic valve replacement. J Am Coll Cardiol. 2012;59:1200-1254.

14. US Food and Drug Administration. FDA Executive Summary: Edwards SAPIEN™ Transcatheter Heart Valve. Presented July 20, 2011, Gaithersburg, MD.

15. Centers for Medicare & Medicaid Services. Decision Memo for Transcatheter Aortic Valve Replacement (TAVR) (CAG-00430N). May 5, 2012.

16. Mack MJ, Brennan JM, Brindis R, et al. Outcomes following trans­catheter aortic valve replacement in the United States. JAMA. 2013;310:2069-2077.

17. Thourani VH, Kodali S, Makkar RR, et al. Transcatheter aortic valve replacement versus surgical valve replacement in intermediate-risk patients: a propensity score analysis. Lancet. 2016;387:2218-2225.

18. Mack MJ, Leon MB, Thourani VH, et al. Transcatheter aortic-valve replacement with a balloon-expandable valve in low-risk patients. N Engl J Med. 2019;380:1695-1705.

19. Popma JJ, Deeb GM, Yakubov SJ, et al. Transcatheter aortic-valve replacement with a self-expanding valve in low-risk patients. N Engl J Med. 2019;380:1706-1715.

20. Edwards FH, Peterson ED, Coombs LP, et al. Prediction of operative mortality after valve replacement surgery. J Am Coll Cardiol. 2001;37:885-892.

21. Arnold SV, O’Brien SM, Vemulapalli S, et al. Inclusion of functional status measures in the risk adjustment of 30-day mortality after transcatheter aortic valve replacement: a report from the Society of Thoracic Surgeons/American College of Cardiology TVT Registry. JACC Cardiovasc Interv. 2018;11:581-589.

22. Brennan JM, Thomas L, Cohen DJ, et al. Transcatheter versus surgical aortic valve replacement: propensity-matched comparison. J Am Coll Cardiol. 2017;70:439-450.

23. Reardon MJ, Adams DH, Kleiman NS, et al. 2-year outcomes in patients undergoing surgical or self-expanding transcatheter aortic valve replacement. J Am Coll Cardiol. 2015;66:113-121.

24. Baron SJ, Cohen DJ, Suchindran S, et al. Development of a risk prediction model for 1-year mortality after surgical vs. transcatheter aortic valve replacement in patients with severe aortic stenosis. Circulation. 2016;134:A20166.

25. Arnold SV, Afilalo J, Spertus JA, et al. Prediction of poor outcome after transcatheter aortic valve replacement. J Am Coll Cardiol. 2016;68:1868-1877.

26. Arnold SV, Reynolds MR, Lei Y, et al. Predictors of poor outcomes after transcatheter aortic valve replacement: results from the PARTNER (Placement of Aortic Transcatheter Valve) trial. Circulation. 2014;129:2682-2690.

27. Fried LP, Hadley EC, Walston JD, et al. From bedside to bench: research agenda for frailty. Sci Aging Knowledge Environ. 2005;2005:pe24.

28. Torres OH, Munoz J, Ruiz D, et al. Outcome predictors of pneumonia in elderly patients: importance of functional assessment. J Am Geriatr Soc. 2004;52:1603-1609.

29. Ekerstad N, Swahn E, Janzon M, et al. Frailty is independently associated with short-term outcomes for elderly patients with non-ST-segment elevation myocardial infarction. Circulation. 2011;124:2397-2404.

30. Makary MA, Segev DL, Pronovost PJ, et al. Frailty as a predictor of surgical outcomes in older patients. J Am Coll Surg. 2010;210:901-908.

31. Hewitt J, Moug SJ, Middleton M, et al. Prevalence of frailty and its association with mortality in general surgery. Am J Surg. 2015;209:254-259.

32. Sundermann S, Dademasch A, Praetorius J, et al. Comprehensive assessment of frailty for elderly high-risk patients undergoing cardiac surgery. Eur J Cardiothorac Surg. 2011;39:33-37.

33. Afilalo J, Mottillo S, Eisenberg MJ, et al. Addition of frailty and disability to cardiac surgery risk scores identifies elderly patients at high risk of mortality or major morbidity. Circ Cardiovasc Qual Outcomes. 2012;5:222-228.

34. Lin HS, Watts JN, Peel NM, Hubbard RE. Frailty and post-operative outcomes in older surgical patients: a systematic review. BMC Geriatr. 2016;16:157.

35. Stortecky S, Schoenenberger AW, Moser A, et al. Evaluation of multidimensional geriatric assessment as a predictor of mortality and cardiovascular events after transcatheter aortic valve implantation. JACC Cardiovasc Interv. 2012;5:489-496.

36. Schoenenberger AW, Stortecky S, Neumann S, et al. Predictors of functional decline in elderly patients undergoing transcatheter aortic valve implantation (TAVI). Eur Heart J. 2013;34:684-689.

37. Green P, Arnold SV, Cohen DJ, et al. Relation of frailty to outcomes after transcatheter aortic valve replacement (from the PARTNER trial). Am J Cardiol. 2015;116:264-269.

38. Goldfarb M, Lauck S, Webb J, et al. Malnutrition and mortality in frail and non-frail older adults undergoing aortic valve replacement. Circulation. 2018;138:2202-2211.

Journal of Clinical Outcomes Management. 26(3):125-129.


Corresponding author: Suzanne V. Arnold, MD, MHA, 4401 Wornall Rd., Kansas City, MO 64111.

Financial disclosures: This work was funded in part by grant K23HL116799 from the National Institutes of Health.

From Saint Luke’s Mid America Heart Institute/University of Missouri–Kansas City, Kansas City, MO.

Abstract

  • Objective: To outline the tools available to help understand the risk of transcatheter aortic valve replacement (TAVR) and the gaps in knowledge regarding TAVR risk estimation.
  • Methods: Review of the literature.
  • Results: Two models developed and validated by the American College of Cardiology can be used to estimate the risk of short-term mortality, a 6-variable in-hospital model designed for clinical use and a 41-variable 30-day model designed primarily for site comparisons and quality improvement. Importantly, neither model should be used to inform the choice of TAVR versus surgical aortic valve replacement. Regarding long-term outcomes, a risk model to estimate risk of dying or having a persistently poor quality of life at 1 year after TAVR has been developed and validated. Factors that most significantly increase a patient’s risk for poor outcomes are very poor functional status prior to TAVR, requiring home oxygen, chronic renal insufficiency, atrial fibrillation, dependencies in activities of daily living, and dementia. If a patient has ≥ 2 or 3 major risk factors for a poor outcome, this risk and the uncertainty about the degree of recovery expected after TAVR should be discussed with the patient (and family).
  • Conclusion: It is important to understand the patient factors that most strongly drive risk of poor outcomes after TAVR and use this information to set appropriate expectations for recovery.

Keywords: aortic valve stenosis; risk factors; postoperative complications; TAVR.

Among patients with severe aortic stenosis, trans­catheter aortic valve replacement (TAVR) has emerged as a less invasive option for aortic valve replacement. This procedure offers substantial reductions in mortality and improvement in quality of life compared with medical therapy1,2 and at least similar long-term outcomes compared to surgical aortic valve replacement (SAVR).3-9

As with any emerging technology, selecting the appropriate patients for TAVR—a procedure with high initial costs10—has been an area of active investigation. As TAVR was first introduced in patients who were considered inoperable, initial efforts focused on identifying the patients who did not improve functionally or live longer following TAVR. Termed Cohort C, these patients were thought to have too many comorbidities, to be too sick, and to have too little reserve to recover from TAVR, and in the early trials they represented a substantial minority of patients. For example, in pivotal clinical trials of patients at high or extreme surgical risk, approximately 1 in 4 patients who were treated with TAVR were dead at 1 year.1,3,11 Furthermore, a number of patients who received TAVR were alive at 1 year but continued to have significant heart failure symptoms and functional limitations.2,4 Practitioners,12,13 regulators,14 and third-party payers15 have recommended that TAVR not be offered to patients in whom valve replacement would not be expected to positively impact either survival or quality of life, but how best to identify these patients has been less clear.

More recently, as the use of TAVR has moved down the risk spectrum, patient selection for TAVR has shifted to understanding which patients should be preferentially treated with TAVR versus SAVR. While patients often prefer a less invasive treatment option with faster recovery—which is what TAVR offers—there are lingering questions about valve longevity, need for a pacemaker (and the associated long-term implications), and the ability to treat other cardiovascular conditions (eg, Maze, mitral valve repair) that potentially make a patient a more appropriate candidate for valve surgery. This review outlines the tools currently available to help understand the risk of TAVR and the gaps in knowledge.

Short-Term Outcomes

When TAVR was initially introduced, the 30-day mortality rate was 5% to 8%.1,11,16 This high mortality rate reflected both the very ill patients being treated and the more invasive procedures of the era, with larger sheath sizes and routine use of general anesthesia, transesophageal echocardiography, pulmonary artery catheterization, and so on. Over time, however, this rate has gone down substantially, with the 30-day mortality rate in intermediate- and low-risk patients now ranging from 0.5% to 1%.8,17-19 Although this low mortality rate indicates that the vast majority of patients will survive to discharge from the hospital, 2 models can be used to estimate the risk of short-term mortality: an in-hospital20 and a 30-day model,21 both developed and validated by the American College of Cardiology. The in-hospital model was developed for clinical use, as it includes only 6 variables (age, renal function, severe lung disease, non-femoral access, New York Heart Association class IV, and acuity of the procedure [elective versus urgent versus shock versus emergent])20 and has an online calculator (http://tools.acc.org/tavrrisk/). The 30-day model was developed for risk adjustment (primarily for site comparisons and quality improvement) and includes 41 variables (including pre-TAVR patient health status and gait speed).21
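To illustrate how a parsimonious model of this kind combines its inputs, the sketch below implements a generic 6-variable logistic risk score. The coefficients are hypothetical placeholders, not the published ACC weights; the ACC's online calculator should be used for any real estimate.

```python
import math

# Illustrative sketch of a 6-variable logistic risk model, in the spirit of the
# ACC in-hospital mortality model described above. All coefficients below are
# HYPOTHETICAL placeholders, not the published weights -- use the ACC online
# calculator (http://tools.acc.org/tavrrisk/) for any real estimate.
ACUITY_COEFS = {"elective": 0.0, "urgent": 0.4, "emergent": 0.9, "shock": 1.4}

def in_hospital_mortality_risk(age, gfr, severe_lung_disease,
                               non_femoral_access, nyha_class_iv, acuity):
    """Return an illustrative predicted probability of in-hospital mortality."""
    x = -5.0                              # hypothetical intercept
    x += 0.10 * (age / 5)                 # hypothetical effect per 5 years of age
    x += 0.08 * max(0.0, (60 - gfr) / 5)  # worsening renal function below GFR 60
    x += 0.45 * severe_lung_disease       # booleans contribute 0 or the weight
    x += 0.60 * non_femoral_access
    x += 0.50 * nyha_class_iv
    x += ACUITY_COEFS[acuity]             # elective < urgent < emergent < shock
    return 1.0 / (1.0 + math.exp(-x))     # logistic link
```

Because such a model is additive on the log-odds scale, the non-subtle drivers (eg, presenting in shock) dominate the estimate, which is consistent with the observation that patients at high short-term risk are usually easy to recognize clinically.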

While 30 days is a better time frame for assessment because outcome is less impacted by differences in local post-acute care facilities, we explicitly did not create a parsimonious 30-day mortality model for clinical use due to concern that having such a model would allow for indirect comparisons with estimated risk of SAVR using the Society of Thoracic Surgeons risk model (http://riskcalc.sts.org/stswebriskcalc). It would be tempting to estimate a patient’s risk of mortality with the TAVR calculator and the SAVR calculator and use those risk estimates to inform the choice of treatment; however, these risk estimates should not be directly compared to make treatment selections, as they were built on entirely different patient populations. In real-world practice, there is minimal overlap in the characteristics of patients who are treated with TAVR and SAVR. For example, in an analysis that merged surgical and transcatheter databases, less than 25% of patients treated with TAVR could be matched to a clinically similar patient treated with SAVR.22 As such, these TAVR models should be used to estimate a patient’s risk for short-term mortality, but should not be used to contribute to the decision on TAVR versus SAVR.

The decision of selecting SAVR over TAVR is typically driven by factors other than short- or long-term mortality (eg, whether TAVR will be covered by insurance, very young age and concern about durability, need to treat concomitant mitral regurgitation or aortopathy), as clinical trials have shown that survival and quality of life outcomes are at least as good with TAVR compared with SAVR.6,7,9,23 In fact, in an analysis that compared similar patients treated with TAVR versus SAVR and specifically looked for patient factors that might make one treatment preferable to the other, patients who had a prior cardiac operation and those on home oxygen were more likely to do better with TAVR, whereas no patient factors that favored SAVR were found.24 The majority of patients, however, were expected to have similar long-term outcomes regardless of treatment choice, and as such, the benefit of TAVR appears mostly to be an earlier and easier recovery.

Long-Term Outcomes: Estimating the Risk for Failure to Recover

While many patients who undergo TAVR are quite ill prior to the procedure, with substantial limitations due to the fatigue and shortness of breath associated with severe aortic stenosis, most patients recover well after the procedure, with marked improvement in symptoms and functional capacity. Nonetheless, approximately 25% to 35% of patients currently treated with TAVR commercially (ie, intermediate- and high-surgical-risk patients) either die or do not recover a reasonable quality of life after the procedure. Identifying those patients prior to the procedure can be challenging. We have previously developed and externally validated a risk model to estimate risk of dying or having a persistently poor quality of life at 1 year after TAVR.25,26 The factors that most significantly increase a patient’s risk for poor outcomes are very poor functional status prior to TAVR, requiring home oxygen, chronic renal insufficiency, atrial fibrillation, dependency in activities of daily living, and dementia. For example, a patient who is short of breath at rest, is on home oxygen, has a serum creatinine of 2.5 mg/dL, and has atrial fibrillation has an estimated risk of poor outcome at 1 year of ~70%. However, it should be noted that ~25% of patients with no risk factors for poor outcomes (ie, those considered “low risk”) still have a poor outcome at 1 year after TAVR, as the patients who undergo TAVR are typically of advanced age with at least some comorbidities. Therefore, a 1-year mortality rate of 10% to 15% would not be unexpected in this population independent of the TAVR, although this will likely change over time as TAVR expands to patients at low surgical risk.

Beyond clinical factors, frailty negatively impacts both survival and quality of life after TAVR. Frailty is a geriatric syndrome of impaired physiologic reserve and decreased resistance to stressors27 that is characterized by weakness, slowness, exhaustion, wasting, and low activity level. Across a wide variety of clinical situations (eg, pneumonia,28 myocardial infarction,29 general30,31 and cardiac surgery32,33), frailty increases the risk of morbidity and mortality after nearly any intervention34 or clinical insult, independent of traditional demographic and clinical risk factors. Frail patients often do better with less invasive interventions such as TAVR compared with traditional surgery, but nonetheless remain at increased risk for death35-37 or failure to recover quality of life and functional status25,37 after TAVR. However, there are unique challenges in both assessing and managing frailty in patients who are considered potential candidates for TAVR. One challenge is the lack of a laboratory or radiologic test for frailty; instead, the lack of physiologic reserve of frailty is identified through a combination of factors, such as slow gait speed, weak grip strength, and unintentional weight loss. While these factors readily identify frail patients in general elderly populations, in patients with severe symptomatic aortic stenosis, these metrics can be impacted by the disease process itself. This distinction is important as slow gait speed that is due to aortic stenosis will be “fixed” by TAVR, but slow gait speed from frailty would identify a patient who will have a difficult time recovering from the procedure. 
For example, in the CoreValve High Risk Pivotal Trial, 80% of patients had a slow gait speed and 67% had a weak grip strength,5 and yet 58% of patients in this trial were alive and with a reasonable quality of life 1 year after TAVR.6 A number of studies have attempted to define true frailty within the pre-TAVR population, that which represents decreased physiologic reserve and an impaired ability to recover from an insult, and the factors that appear to be most prognostically important are malnutrition38 or unintentional weight loss25 and the inability to be independent in activities of daily living (eg, dressing, feeding, transferring).25,37
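The frailty phenotype described above is often operationalized as a simple count of positive markers; the sketch below assumes the common Fried convention (≥3 of 5 markers = frail, 1-2 = pre-frail) and carries the text's caveat as a comment. The function name and dict keys are illustrative.

```python
# Fried-style frailty phenotype: weakness, slowness, exhaustion, wasting, and
# low activity level. By the usual convention, >=3 positive markers = frail
# and 1-2 = pre-frail. Caveat from the text: in severe aortic stenosis,
# slowness and weakness may be caused by the valve disease itself rather than
# true frailty, so positive markers are not specific before TAVR.
FRIED_CRITERIA = ("weakness", "slowness", "exhaustion", "wasting", "low_activity")

def classify_frailty(findings):
    """Classify a patient from a dict of boolean frailty markers."""
    n = sum(1 for criterion in FRIED_CRITERIA if findings.get(criterion, False))
    if n >= 3:
        return "frail"
    return "pre-frail" if n >= 1 else "robust"
```

The CoreValve example above shows why this count alone is misleading pre-TAVR: most of those patients screened "positive" on slowness or weakness, yet the majority recovered well once the stenosis was treated.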

Even with frailty assessments, predicting who will or will not have a poor outcome after TAVR (ie, using pre-procedural factors to identify patients who perhaps should not be offered TAVR because they will not recover from the procedure) is exceedingly difficult. The Table shows how to roughly estimate risk using the major factors drawn from the more precise estimates in our models.25,26

Estimation of Risk for Poor Outcome

The model shown in the Table can be used to estimate a patient’s risk for a poor outcome, but it should be noted that even at the extreme high end of risk, there will be some patients who still do well after TAVR. Furthermore, being high risk for a poor outcome after TAVR does not imply anything about how the patient would do without TAVR, as many of these patients would likely die even sooner or have worse quality of life with medical therapy only. However, if a patient has ≥ 2 or 3 major risk factors for a poor outcome, it may be worthwhile to have a serious conversation with the patient (and family) about this risk and the uncertainty about the degree of recovery expected after TAVR.
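The counting heuristic suggested above can be sketched in a few lines. The factor list is taken from this review; the variable names and the simple ≥2 cutoff are illustrative (the Table and the published models25,26 give more precise estimates).

```python
# Major risk factors for a poor 1-year outcome after TAVR, as named in this
# review. The >=2 threshold mirrors the text's suggestion to have a serious
# conversation with the patient (and family) when a patient has 2 or more.
# The dict keys are illustrative names, not a validated instrument.
MAJOR_RISK_FACTORS = (
    "very_poor_functional_status",  # eg, short of breath at rest
    "home_oxygen",
    "chronic_renal_insufficiency",
    "atrial_fibrillation",
    "adl_dependence",               # dependency in activities of daily living
    "dementia",
)

def should_discuss_risk(patient):
    """Return (count of major risk factors, whether to flag for discussion)."""
    count = sum(1 for factor in MAJOR_RISK_FACTORS if patient.get(factor, False))
    return count, count >= 2
```

Note that a flag is a prompt for shared decision-making, not a contraindication: as the text emphasizes, even high-risk patients may do well, and many would fare worse with medical therapy alone.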

Conclusion

Calculating the risk of TAVR can be complicated. In patients who are electively treated using transfemoral access and a less invasive approach, the short-term risk of mortality is very low. Risk calculators can be used to estimate short-term risk, but the patients who are high risk for in-hospital mortality are often fairly easy to recognize, as the factors that drive that risk are not subtle (eg, the patient is in shock at the time of the procedure). The true risk of TAVR lies in the inability to recover from the procedure—being chronically ill, frail, or debilitated to a degree that the patient either dies or fails to recover a reasonable quality of life. Given the overlap of symptomatic aortic stenosis with true frailty, it is often difficult to identify these patients who will not thrive after TAVR. Understanding the patient factors that most strongly drive risk of poor outcomes after TAVR, and allowing this information to guide the conversation prior to TAVR so as to set appropriate expectations for recovery, can be a good place to start.

Corresponding author: Suzanne V. Arnold, MD, MHA, 4401 Wornall Rd., Kansas City, MO 64111.

Financial disclosures: This work was funded in part by grant K23HL116799 from the National Institutes of Health.

References

1. Leon MB, Smith CR, Mack M, et al. Transcatheter aortic-valve implantation for aortic stenosis in patients who cannot undergo surgery. N Engl J Med. 2010;363:1597-1607.

2. Reynolds MR, Magnuson EA, Lei Y, et al. Health-related quality of life after transcatheter aortic valve replacement in inoperable patients with severe aortic stenosis. Circulation. 2011;124:1964-1972.

3. Smith CR, Leon MB, Mack MJ, et al. Transcatheter versus surgical aortic-valve replacement in high-risk patients. N Engl J Med. 2011;364:2187-2198.

4. Reynolds MR, Magnuson EA, Wang K, et al. Health-related quality of life after transcatheter or surgical aortic valve replacement in high-risk patients with severe aortic stenosis: results from the PARTNER (Placement of AoRTic TraNscathetER Valve) trial (Cohort A). J Am Coll Cardiol. 2012;60:548-558.

5. Adams DH, Popma JJ, Reardon MJ, et al. Transcatheter aortic-valve replacement with a self-expanding prosthesis. N Engl J Med. 2014;370:1790-1798.

6. Arnold SV, Reynolds MR, Wang K, et al. Health status after trans­catheter or surgical aortic valve replacement in patients with severe aortic stenosis at increased surgical risk: results from the CoreValve US Pivotal trial. JACC Cardiovasc Interv. 2015;8:1207-1217.

7. Leon MB, Smith CR, Mack MJ, et al. Transcatheter or surgical aortic-valve replacement in intermediate-risk patients. N Engl J Med. 2016;374:1609-1620.

8. Reardon MJ, Van Mieghem NM, Popma JJ, et al. Surgical or trans­catheter aortic-valve replacement in intermediate-risk patients. N Engl J Med. 2017;376:1321-1331.

9. Baron SJ, Arnold SV, Wang K, et al. Health status benefits of trans­catheter vs surgical aortic valve replacement in patients with severe aortic stenosis at intermediate surgical risk: results from the PARTNER 2 randomized clinical trial. JAMA Cardiol. 2017;2:837-845.

10. Reynolds MR, Magnuson EA, Wang K, et al. Cost-effectiveness of transcatheter aortic valve replacement compared with standard care among inoperable patients with severe aortic stenosis: results from the placement of aortic transcatheter valves (PARTNER) trial (Cohort B). Circulation. 2012;125:1102-1109.

11. Popma JJ, Adams DH, Reardon MJ, et al. Transcatheter aortic valve replacement using a self-expanding bioprosthesis in patients with severe aortic stenosis at extreme risk for surgery. J Am Coll Cardiol. 2014;63:1972-1981.

12. Vahanian A, Alfieri O, Al-Attar N, et al. Transcatheter valve implantation for patients with aortic stenosis: a position statement from the European Association of Cardio-Thoracic Surgery (EACTS) and the European Society of Cardiology (ESC), in collaboration with the European Association of Percutaneous Cardiovascular Interventions (EAPCI). Eur Heart J. 2008;29:1463-1470.

13. Holmes DR Jr, Mack MJ, Kaul S, et al. 2012 ACCF/AATS/SCAI/STS expert consensus document on transcatheter aortic valve replacement. J Am Coll Cardiol. 2012;59:1200-1254.

14. US Food and Drug Administration. FDA Executive Summary: Edwards SAPIEN™ Transcatheter Heart Valve. Presented July 20, 2011, Gaithersburg, MD.

15. Centers for Medicare & Medicaid Services. Decision Memo for Transcatheter Aortic Valve Replacement (TAVR) (CAG-00430N). May 5, 2012.

16. Mack MJ, Brennan JM, Brindis R, et al. Outcomes following trans­catheter aortic valve replacement in the United States. JAMA. 2013;310:2069-2077.

17. Thourani VH, Kodali S, Makkar RR, et al. Transcatheter aortic valve replacement versus surgical valve replacement in intermediate-risk patients: a propensity score analysis. Lancet. 2016;387:2218-2225.

18. Mack MJ, Leon MB, Thourani VH, et al. Transcatheter aortic-valve replacement with a balloon-expandable valve in low-risk patients. N Engl J Med. 2019;380:1695-1705.

19. Popma JJ, Deeb GM, Yakubov SJ, et al. Transcatheter aortic-valve replacement with a self-expanding valve in low-risk patients. N Engl J Med. 2019;380:1706-1715.

20. Edwards FH, Peterson ED, Coombs LP, et al. Prediction of operative mortality after valve replacement surgery. J Am Coll Cardiol. 2001;37:885-892.

21. Arnold SV, O’Brien SM, Vemulapalli S, et al. Inclusion of functional status measures in the risk adjustment of 30-day mortality after transcatheter aortic valve replacement: a report from the Society of Thoracic Surgeons/American College of Cardiology TVT Registry. JACC Cardiovasc Interv. 2018;11:581-589.

22. Brennan JM, Thomas L, Cohen DJ, et al. Transcatheter versus surgical aortic valve replacement: propensity-matched comparison. J Am Coll Cardiol. 2017;70:439-450.

23. Reardon MJ, Adams DH, Kleiman NS, et al. 2-year outcomes in patients undergoing surgical or self-expanding transcatheter aortic valve replacement. J Am Coll Cardiol. 2015;66:113-121.

24. Baron SJ, Cohen DJ, Suchindran S, et al. Development of a risk prediction model for 1-year mortality after surgical vs. transcatheter aortic valve replacement in patients with severe aortic stenosis. Circulation. 2016;134:A20166.

25. Arnold SV, Afilalo J, Spertus JA, et al. Prediction of poor outcome after transcatheter aortic valve replacement. J Am Coll Cardiol. 2016;68:1868-1877.

26. Arnold SV, Reynolds MR, Lei Y, et al. Predictors of poor outcomes after transcatheter aortic valve replacement: results from the PARTNER (Placement of Aortic Transcatheter Valve) trial. Circulation. 2014;129:2682-2690.

27. Fried LP, Hadley EC, Walston JD, et al. From bedside to bench: research agenda for frailty. Sci Aging Knowledge Environ. 2005;2005:pe24.

28. Torres OH, Munoz J, Ruiz D, et al. Outcome predictors of pneumonia in elderly patients: importance of functional assessment. J Am Geriatr Soc. 2004;52:1603-1609.

29. Ekerstad N, Swahn E, Janzon M, et al. Frailty is independently associated with short-term outcomes for elderly patients with non-ST-segment elevation myocardial infarction. Circulation. 2011;124:2397-2404.

30. Makary MA, Segev DL, Pronovost PJ, et al. Frailty as a predictor of surgical outcomes in older patients. J Am Coll Surg. 2010;210:901-908.

31. Hewitt J, Moug SJ, Middleton M, et al. Prevalence of frailty and its association with mortality in general surgery. Am J Surg. 2015;209:254-259.

32. Sundermann S, Dademasch A, Praetorius J, et al. Comprehensive assessment of frailty for elderly high-risk patients undergoing cardiac surgery. Eur J Cardiothorac Surg. 2011;39:33-37.

33. Afilalo J, Mottillo S, Eisenberg MJ, et al. Addition of frailty and disability to cardiac surgery risk scores identifies elderly patients at high risk of mortality or major morbidity. Circ Cardiovasc Qual Outcomes. 2012;5:222-228.

34. Lin HS, Watts JN, Peel NM, Hubbard RE. Frailty and post-operative outcomes in older surgical patients: a systematic review. BMC Geriatr. 2016;16:157.

35. Stortecky S, Schoenenberger AW, Moser A, et al. Evaluation of multidimensional geriatric assessment as a predictor of mortality and cardiovascular events after transcatheter aortic valve implantation. JACC Cardiovasc Interv. 2012;5:489-496.

36. Schoenenberger AW, Stortecky S, Neumann S, et al. Predictors of functional decline in elderly patients undergoing transcatheter aortic valve implantation (TAVI). Eur Heart J. 2013;34:684-689.

37. Green P, Arnold SV, Cohen DJ, et al. Relation of frailty to outcomes after transcatheter aortic valve replacement (from the PARTNER trial). Am J Cardiol. 2015;116:264-269.

38. Goldfarb M, Lauck S, Webb J, et al. Malnutrition and mortality in frail and non-frail older adults undergoing aortic valve replacement. Circulation. 2018;138:2202-2211.


Issue
Journal of Clinical Outcomes Management - 26(3)
Page Number
125-129

Use of Hybrid Coronary Revascularization in Patients with Multivessel Coronary Artery Disease

Article Type
Changed
Thu, 04/23/2020 - 15:17
Display Headline
Use of Hybrid Coronary Revascularization in Patients with Multivessel Coronary Artery Disease

Study Overview

Objective. To investigate the 5-year clinical outcome of patients undergoing hybrid revascularization for multivessel coronary artery disease (CAD).

Design. Multicenter, open-label, prospective, randomized controlled trial.

Setting and participants. 200 patients with multivessel CAD referred for conventional surgical revascularization were randomly assigned to undergo hybrid coronary revascularization (HCR) or coronary artery bypass grafting (CABG).

Main outcome measures. The primary endpoint was all-cause mortality at 5 years.

Main results. After excluding 9 patients who were lost to follow-up before 5 years, 191 patients (94 in the HCR group and 97 in the CABG group) formed the basis of the study. All-cause mortality at 5-year follow-up was similar in the 2 groups (6.4% versus 9.2%, P = 0.69). The rates of myocardial infarction (4.3% versus 7.2%, P = 0.30), repeat revascularization (37.2% versus 45.4%, P = 0.38), stroke (2.1% versus 4.1%, P = 0.35), and major adverse cardiac and cerebrovascular events (45.2% versus 53.4%, P = 0.39) were also similar in the 2 groups. These findings were consistent across all levels of surgical risk (EuroScore) and revascularization complexity (SYNTAX score).

Conclusion. HCR has similar 5-year all-cause mortality when compared with conventional CABG.

Commentary

HCR has been proposed as a less invasive yet effective alternative to conventional CABG for patients with multivessel CAD. The hybrid approach typically combines grafting of the left anterior descending artery (LAD) with the left internal mammary artery, a graft with proven long-term durability and similar or perhaps better long-term patency than saphenous vein grafts,1,2 and percutaneous coronary intervention (PCI) for non-LAD stenoses. Previous studies have demonstrated the feasibility of HCR by comparing it with conventional CABG at 1 year.2 However, the long-term outcomes of HCR relative to conventional CABG have not been previously reported.


In this context, Tajstra et al reported the 5-year follow-up of their prospective randomized pilot study. Among the 200 patients with multivessel coronary disease randomly assigned to either HCR or CABG, all-cause mortality at 5-year follow-up was similar in the 2 groups (6.4% versus 9.2%, P = 0.69). The rates of myocardial infarction, repeat revascularization, stroke, and major adverse cardiac and cerebrovascular events (MACCE) were also similar in the 2 groups.

This is an important study because it is the first to compare the long-term outcome of HCR with that of conventional CABG; previous studies were limited by their short- to mid-term follow-up.2 However, because the study was not powered to assess the superiority of HCR over conventional CABG, larger randomized controlled trials are needed.

Future studies must address some important questions. First, the patients in the present study were younger (mean age, 62.1 ± 8.3 years), with less comorbidity and a relatively low SYNTAX score (23.6 ± 6.1 in the HCR arm). As CABG and PCI are associated with similar long-term outcomes in patients with low (< 22) to intermediate (22–32) SYNTAX scores,3 comparisons between HCR and multivessel PCI using current-generation drug-eluting stents are needed; the results of the ongoing Hybrid Coronary Revascularization Trial (NCT03089398) should shed light on this question. Second, whether these findings extend to patients with a high baseline SYNTAX score needs further study, although outcomes were similar between the 2 strategies in the intermediate (n = 98) and high (n = 8) SYNTAX score subgroups; notably, there was no clear benefit of HCR in the groups at high surgical risk as measured by EuroScore. Third, in addition to hard outcomes (death and MACCE), quality of life should be assessed with an established metric, such as the Seattle Angina Questionnaire. Last, the completeness of revascularization in each group needs further evaluation, because incomplete revascularization is a known predictor of adverse outcomes.4,5
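The SYNTAX cut points cited above can be expressed as a simple classifier. This is a sketch using the conventional thresholds given in the text (< 22 low, 22–32 intermediate, > 32 high); the function name is illustrative:

```python
def syntax_tier(score: float) -> str:
    """Classify anatomic complexity by SYNTAX score using the
    cut points cited in the text: <22 low, 22-32 intermediate,
    >32 high."""
    if score < 22:
        return "low"
    if score <= 32:
        return "intermediate"
    return "high"

# The HCR arm's mean score of 23.6 falls in the intermediate tier
print(syntax_tier(23.6))  # intermediate
```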


Applications for Clinical Practice

In patients with multivessel coronary disease and a low SYNTAX score, 5-year outcomes of HCR were similar to those of conventional CABG. Larger studies are needed to assess the potential superiority of this approach.

—Taishi Hirai, MD, University of Missouri Medical Center, Columbia, MO; Hiroto Kitahara, MD, University of Chicago Medical Center, Chicago, IL; and John Blair, MD, Medstar Washington Hospital Center, Washington, DC

References

1. Lee PH, Kwon O, Ahn JM, et al. Safety and effectiveness of second-generation drug-eluting stents in patients with left main coronary artery disease. J Am Coll Cardiol. 2018;71:832-841.

2. Gasior M, Zembala MO, Tajstra M, et al. Hybrid revascularization for multivessel coronary artery disease. JACC Cardiovasc Interv. 2014;7:1277-1283.

3. Serruys PW, Onuma Y, Garg S, et al. Assessment of the SYNTAX score in the Syntax study. EuroIntervention. 2009;5:50-56.

4. Genereux P, Palmerini T, Caixeta A, et al. Quantification and impact of untreated coronary artery disease after percutaneous coronary intervention: the residual SYNTAX (Synergy Between PCI with Taxus and Cardiac Surgery) score. J Am Coll Cardiol. 2012;59:2165-2174.

5. Choi KH, Lee JM, Koo BK, et al. Prognostic implication of functional incomplete revascularization and residual functional SYNTAX score in patients with coronary artery disease. JACC Cardiovasc Interv. 2018;11:237-245.

Issue
Journal of Clinical Outcomes Management - 26(3)
Page Number
108-109


Mismatch Between Process and Outcome Measures for Hospital-Acquired Venous Thromboembolism in a Surgical Cohort

Article Type
Changed
Thu, 04/23/2020 - 15:12
Display Headline
Mismatch Between Process and Outcome Measures for Hospital-Acquired Venous Thromboembolism in a Surgical Cohort

From Tufts Medical Center, Boston, MA.

Abstract

  • Objective: Audits at our academic medical center revealed nearly 100% compliance with protocols for perioperative venous thromboembolism (VTE) prophylaxis, but recent National Surgical Quality Improvement Program data demonstrated a higher than expected incidence of VTE (observed/expected = 1.32). The objective of this study was to identify potential causes of this discrepancy.
  • Design: Retrospective case-control study.
  • Setting: Urban academic medical center with high case-mix indices (Medicare approximately 2.4, non-Medicare approximately 2.0).
  • Participants: 102 surgical inpatients with VTE (September 2012 to October 2015) matched with controls for age, gender, and type of procedure.
  • Measurements: Prevalence of common VTE risk factors, length of stay, number of procedures, index operation times, and postoperative bed rest > 12 hours were assessed. Utilization of and compliance with our VTE risk assessment tool was also investigated.
  • Results: Cases underwent more procedures and had longer lengths of stay and index procedures than controls. In addition, cases were more likely to have had > 12 hours of postoperative bed rest and central venous access than controls. Cases had more infections and were more likely to have severe lung disease, thrombophilia, and a history of prior VTE than controls. No differences in body mass index, tobacco use, current or previous malignancy, or VTE risk assessment form use were observed. Overall, care complexity and risk factors were equally important in determining VTE incidence. Our analyses also revealed lack of strict adherence to our VTE risk stratification protocol and frequent use of suboptimal prophylactic regimens.
  • Conclusion: Well-accepted risk factors and overall care complexity determine VTE risk. Preventing VTE in high-risk patients requires assiduous attention to detail in VTE risk assessment and in delivery of optimal prophylaxis. Patients at especially high risk may require customized prophylactic regimens.

Keywords: hospital-acquired venous thromboembolic disease; VTE prophylaxis, surgical patients.

Deep vein thrombosis (DVT) and pulmonary embolism (PE) are well-recognized causes of morbidity and mortality in surgical patients. Between 350,000 and 600,000 cases of venous thromboembolism (VTE) occur each year in the United States, and it is responsible for approximately 10% of preventable in-hospital fatalities.1-3 Given VTE’s impact on patients and the healthcare system and the fact that it is preventable, intense effort has been focused on developing more effective prophylactic measures to decrease its incidence.2-4 In 2008, the surgeon general issued a “call to action” for increased efforts to prevent VTE.5

The American College of Chest Physicians (ACCP) guidelines subcategorize patients based on type of surgery. In addition, the ACCP guidelines support the use of a Caprini-based scoring system to aid in risk stratification and improve clinical decision-making (Table 1).4,6-9 In general, scores ≥ 5 qualify individuals as high risk. Based on their risk category, patients receive mechanical prophylaxis, chemical prophylaxis, or a combination of the 2. Lower-risk patients who are ambulatory typically receive only mechanical prophylaxis while in bed, whereas higher-risk patients receive a combination of mechanical prophylaxis and chemoprophylaxis. In general, low-molecular-weight heparin (eg, enoxaparin 40 mg daily) and low-dose unfractionated heparin (5000 units 3 times daily) have been the standard evidence-based options for chemoprophylaxis in surgical patients. Absolute contraindications for prophylaxis include active bleeding and known increased risk of bleeding based on patient- or procedure-specific factors.

Caprini Risk Assessment Model
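A Caprini-based assessment sums integer point weights over the risk factors present and maps the total to a prophylaxis category. The sketch below covers only an illustrative subset of factors; the weights follow commonly published versions of the model but should be checked against the local protocol:

```python
# Illustrative (incomplete) subset of Caprini point weights. The full
# model assigns 1-5 points per factor; weights here follow commonly
# published versions and are not a substitute for the local protocol.
CAPRINI_POINTS = {
    "age_41_60": 1,
    "age_61_74": 2,
    "age_75_plus": 3,
    "minor_surgery": 1,
    "major_surgery_over_45_min": 2,
    "bed_rest_over_72h": 2,
    "malignancy": 2,
    "central_venous_access": 2,
    "history_of_vte": 3,
    "thrombophilia": 3,
    "stroke_within_1_month": 5,
    "hip_pelvis_leg_fracture": 5,
}

def caprini_score(factors):
    """Sum point weights for the patient's present risk factors."""
    return sum(CAPRINI_POINTS[f] for f in factors)

def risk_category(score):
    """Map a total score to a coarse prophylaxis category; per the
    text, scores >= 5 qualify as high risk."""
    if score >= 5:
        return "high"
    if score >= 3:
        return "moderate"
    return "low"

s = caprini_score(["age_75_plus", "history_of_vte", "central_venous_access"])
print(s, risk_category(s))  # 8 high
```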

Our hospital, a 350-bed academic medical center in downtown Boston, MA, serving a diverse population with a very high case-mix index (2.4 Medicare and 2.0 non-Medicare), has strict protocols for VTE prophylaxis consistent with the ACCP guidelines and based on the Surgical Care Improvement Project (SCIP) measures published in 2006.10 The SCIP mandates allow for considerable surgeon discretion in the use of chemoprophylaxis for neurosurgical cases and general and orthopedic surgery cases deemed to be at high risk for bleeding. In addition, SCIP requires only that prophylaxis be initiated within 24 hours of surgical end time. Although recent audits revealed nearly 100% compliance with SCIP-mandated protocols, National Surgical Quality Improvement Program (NSQIP) data showed that the incidence of VTE events at our institution was higher than expected (observed/expected [O/E] = 1.32).
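The NSQIP-style O/E metric referenced above divides the observed event count by the expected count, where the expected count is the sum of model-predicted event probabilities across the cohort. A minimal sketch, with hypothetical cohort numbers chosen to reproduce an O/E of 1.32:

```python
def observed_expected_ratio(events, predicted_probs):
    """O/E ratio: observed event count divided by the expected count,
    where the expected count is the sum of each patient's model-
    predicted event probability (as in NSQIP risk adjustment)."""
    expected = sum(predicted_probs)
    return events / expected

# Hypothetical cohort: 33 observed VTEs against 25 expected events
probs = [0.025] * 1000  # model predicts 2.5% VTE risk per patient
print(round(observed_expected_ratio(33, probs), 2))  # 1.32
```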

In order to determine the reasons for this mismatch between process and outcome performance, we investigated whether there were characteristics of our patient population that contributed to the higher than expected rates of VTE, and we scrutinized our VTE prophylaxis protocol to determine if there were aspects of our process that were also contributory.

Methods

Study Sample

This is a retrospective case-control study of surgical inpatients at our hospital during the period September 2012 to October 2015. Cases were identified as patients diagnosed with a VTE (DVT or PE). Controls were identified from a pool of surgical patients whose courses were not complicated by VTE during the same time frame as the cases and who were matched as closely as possible by procedure code, age, and gender.
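The matching described above can be sketched as a greedy 1:1 match on procedure code and sex with the closest age; the `age_tolerance` parameter and record field names are hypothetical simplifications, not the study's actual procedure:

```python
def match_controls(cases, pool, age_tolerance=5):
    """Greedy 1:1 matching of each case to an unused control with the
    same procedure code and sex and the closest age within tolerance.
    A simplified sketch of matching on procedure, age, and gender."""
    matched, used = [], set()
    for case in cases:
        candidates = [
            (abs(ctrl["age"] - case["age"]), i)
            for i, ctrl in enumerate(pool)
            if i not in used
            and ctrl["procedure"] == case["procedure"]
            and ctrl["sex"] == case["sex"]
            and abs(ctrl["age"] - case["age"]) <= age_tolerance
        ]
        if candidates:
            _, i = min(candidates)  # closest age wins
            used.add(i)
            matched.append((case, pool[i]))
    return matched

cases = [{"age": 67, "sex": "F", "procedure": "CABG"}]
pool = [{"age": 70, "sex": "F", "procedure": "CABG"},
        {"age": 66, "sex": "F", "procedure": "CABG"}]
pairs = match_controls(cases, pool)
print(pairs[0][1]["age"])  # 66
```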


Variables

Patient and hospital course variables that were analyzed included demographics, comorbidities, length of stay, number of procedures, index operation times, duration of postoperative bed rest, use of mechanical prophylaxis, and type of chemoprophylaxis and time frame within which it was initiated. Data were collected via chart review using International Classification of Diseases-9 and -10 codes to identify surgical cases within the allotted time period who were diagnosed with VTE. Demographic variables included age, sex, and ethnicity. Comorbidities included hypertension, diabetes, coronary artery disease, serious lung disease, previous or current malignancy, documented hypercoagulable state, and previous history of VTE. Body mass index (BMI) was also recorded. The aforementioned disease-specific variables were not matched between the case and control groups, as this data was obtained retrospectively during data collection.

Analysis

Associations between cases and matched controls were analyzed using the paired t-test for continuous variables and McNemar's test for categorical variables. P values < 0.05 were considered statistically significant. SAS Enterprise Guide 7.15 (SAS Institute, Cary, NC) was used for all statistical analyses.
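For the matched categorical comparisons, McNemar's test can be computed exactly from the discordant pair counts. The sketch below uses only the Python standard library (the paired t-test for continuous variables would typically come from a statistics package such as SAS or `scipy.stats.ttest_rel`); the counts in the example are hypothetical:

```python
from math import comb

def mcnemar_exact(b, c):
    """Exact McNemar test for paired binary data. b and c are the
    discordant pair counts (case-yes/control-no and case-no/control-yes).
    Returns the two-sided p-value from the binomial(n=b+c, p=0.5)
    distribution, capped at 1.0."""
    n = b + c
    k = min(b, c)
    p = 2 * sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(p, 1.0)

# Hypothetical example: 18 pairs where only the case had the risk
# factor versus 5 pairs where only the control did
print(round(mcnemar_exact(18, 5), 4))  # 0.0106
```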

The requirement for informed consent was waived by our Institutional Review Board, as the study was initially deemed to be a quality improvement project, and all data used for this report were de-identified.

Results

Our retrospective case-control analysis included a sample of 102 surgical patients whose courses were complicated by VTE between September 2012 and October 2015. The cases were distributed among 6 different surgical categories (Figure 1): trauma (20%), cancer (10%), cardiovascular (21%), noncancer neurosurgery (28%), elective orthopedics (11%), and miscellaneous general surgery (10%).

Distribution of procedure type.

Comparisons between cases and controls in terms of patient demographics and risk factors are shown in Table 2. No statistically significant difference was observed in ethnicity or race between the 2 groups. Overall, cases had more hip/pelvis/leg fractures at presentation (P = 0.0008). The case group also had higher proportions of patients with postoperative bed rest greater than 12 hours (P = 0.009), central venous access (P < 0.0001), infection (P < 0.0001), and lower extremity edema documented during the hospitalization prior to development of DVT (P < 0.0001). Additionally, cases had significantly greater rates of previous VTE (P = 0.0004), inherited or acquired thrombophilia (P = 0.03), history of stroke (P = 0.0003), and severe lung disease, including pneumonia (P = 0.0008). No significant differences were noted between cases and matched controls in BMI (P = 0.43), current tobacco use (P = 0.71), current malignancy (P = 0.80), previous malignancy (P = 0.83), head trauma (P = 0.17), or acute cardiac disease (myocardial infarction or congestive heart failure; P = 0.12).

Patient Demographics and Risk Factors

Variables felt to indicate overall complexity of hospital course for cases as compared to controls are outlined in Table 3. Cases were found to have significantly longer lengths of stay (median, 15.5 days versus 3 days, P < 0.0001). To account for the possibility that the development of VTE contributed to the increased length of stay in the cases, we also looked at the duration between admission date and the date of VTE diagnosis and determined that cases still had a longer length of stay when this was accounted for (median, 7 days versus 3 days, P < 0.0001). A much higher proportion of cases underwent more than 1 procedure compared to controls (P < 0.0001), and cases had significantly longer index operations as compared to controls (P = 0.002).

Complexity of Care


Seventeen cases received heparin on induction during their index procedure, compared to 23 controls (P = 0.24). Additionally, 63 cases began a prophylaxis regimen within 24 hours of surgery end time, compared to 68 controls (P = 0.24). The chemoprophylactic regimens utilized in cases and in controls are summarized in Figure 2. Of note, only 26 cases and 32 controls received standard prophylactic regimens with no missed doses (heparin 5000 units 3 times daily or enoxaparin 40 mg daily). Additionally, in over half of cases and a third of controls, nonstandard regimens were ordered. Examples of nonstandard regimens included nonstandard heparin or enoxaparin doses, low-dose warfarin, or aspirin alone. In most cases, nonstandard regimens were justified on the basis of high risk for bleeding.

Frequencies of prophylactic regimens utilized.

Mechanical prophylaxis with pneumatic sequential compression devices (SCDs) was ordered in 93 (91%) cases and 87 (85%) controls; however, we were unable to accurately document uniform compliance in the use of these devices.

With regard to evaluation of our process measures, we found only 17% of cases and controls combined actually had a VTE risk assessment in their chart, and when it was present, it was often incomplete or was completed inaccurately.


Discussion

The goal of this study was to identify factors (patient characteristics and/or processes of care) that may be contributing to the higher than expected incidence of VTE events at our medical center, despite internal audits suggesting near perfect compliance with SCIP-mandated protocols. We found that in addition to usual risk factors for VTE, an overarching theme of our case cohort was their high complexity of illness. At baseline, these patients had significantly greater rates of stroke, thrombophilia, severe lung disease, infection, and history of VTE than controls. Moreover, the hospital courses of cases were significantly more complex than those of controls, as these patients had more procedures, longer lengths of stay and longer index operations, higher rates of postoperative bed rest exceeding 12 hours, and more prevalent central venous access than controls (Table 2). Several of these risk factors have been found to contribute to VTE development despite compliance with prophylaxis protocols.


References

1. Spyropoulos AC, Hussein M, Lin J, et al. Rates of symptomatic venous thromboembolism in US surgical patients: a retrospective administrative database study. J Thromb Thrombolysis. 2009;28:458-464.

2. Deitelzweig SB, Johnson BH, Lin J, et al. Prevalence of clinical venous thromboembolism in the USA: current trends and future projections. Am J Hematol. 2011;86:217-220.

3. Horlander KT, Mannino DM, Leeper KV. Pulmonary embolism mortality in the United States, 1979-1998: an analysis using multiple-cause mortality data. Arch Intern Med. 2003;163:1711-1717.

4. Guyatt GH, Akl EA, Crowther M, et al. Introduction to the ninth edition: antithrombotic therapy and prevention of thrombosis, 9th ed: American College of Chest Physicians Evidence-Based Clinical Practice Guidelines. Chest. 2012;141(suppl):48S-52S.

5. Office of the Surgeon General; National Heart, Lung, and Blood Institute. The Surgeon General’s Call to Action to Prevent Deep Vein Thrombosis and Pulmonary Embolism. Rockville, MD: Office of the Surgeon General; 2008. www.ncbi.nlm.nih.gov/books/NBK44178/. Accessed May 2, 2019.

6. Pannucci CJ, Swistun L, MacDonald JK, et al. Individualized venous thromboembolism risk stratification using the 2005 Caprini score to identify the benefits and harms of chemoprophylaxis in surgical patients: a meta-analysis. Ann Surg. 2017;265:1094-1102.

7. Caprini JA, Arcelus JI, Hasty JH, et al. Clinical assessment of venous thromboembolic risk in surgical patients. Semin Thromb Hemost. 1991;17(suppl 3):304-312.

8. Caprini JA. Risk assessment as a guide for the prevention of the many faces of venous thromboembolism. Am J Surg. 2010;199:S3-S10.

9. Gould MK, Garcia DA, Wren SM, et al. Prevention of VTE in nonorthopedic surgical patients: antithrombotic therapy and prevention of thrombosis, 9th ed: American College of Chest Physicians Evidence-Based Clinical Practice Guidelines. Chest. 2012;141(2 Suppl):e227S-e277S.

10. The Joint Commission. Surgical Care Improvement Project (SCIP) Measure Information Form (Version 2.1c). www.jointcommission.org/surgical_care_improvement_project_scip_measure_information_form_version_21c/. Accessed June 22, 2016.

11. Cassidy MR, Macht RD, Rosenkranz P, et al. Patterns of failure of a standardized perioperative venous thromboembolism prophylaxis protocol. J Am Coll Surg. 2016;222:1074-1081.

12. Wang TF, Wong CA, Milligan PE, et al. Risk factors for inpatient venous thromboembolism despite thromboprophylaxis. Thromb Res. 2014;133:25-29.

13. Cassidy MR, Rosenkranz P, McAneny D. Reducing postoperative venous thromboembolism complications with a standardized risk-stratified prophylaxis protocol and mobilization program. J Am Coll Surg. 2014;218:1095-1104.

14. Nimeri AA, Gamaleldin MM, McKenna KL, et al. Reduction of venous thromboembolism in surgical patients using a mandatory risk-scoring system: 5-year follow-up of an American College of Surgeons National Quality Improvement Program. Clin Appl Thromb Hemost. 2017;23:392-396.

15. Borab ZM, Lanni MA, Tecce MG, et al. Use of computerized clinical decision support systems to prevent venous thromboembolism in surgical patients: a systematic review and meta-analysis. JAMA Surg. 2017;152:638-645.

16. Obi AT, Pannucci CJ, Nackashi A, et al. Validation of the Caprini venous thromboembolism risk assessment model in critically ill surgical patients. JAMA Surg. 2015;150:941-948.

17. Lobastov K, Barinov V, Schastlivtsev I, et al. Validation of the Caprini risk assessment model for venous thromboembolism in high-risk surgical patients in the background of standard prophylaxis. J Vasc Surg Venous Lymphat Disord. 2016;4:153-160.

18. Kakkar AK, Cohen AT, Tapson VF, et al. Venous thromboembolism risk and prophylaxis in the acute care hospital setting (ENDORSE survey): findings in surgical patients. Ann Surg. 2010;251:330-338.

19. Smythe MA, Priziola J, Dobesh PP, et al. Guidance for the practical management of the heparin anticoagulants in the treatment of venous thromboembolism. J Thromb Thrombolysis. 2016;41:165-186.

Journal of Clinical Outcomes Management. 26(3):117-124.

From Tufts Medical Center, Boston, MA.

Abstract

  • Objective: Audits at our academic medical center revealed near 100% compliance with protocols for perioperative venous thromboembolism (VTE) prophylaxis, but recent National Surgical Quality Improvement Program data demonstrated a higher than expected incidence of VTE (observed/expected = 1.32). The objective of this study was to identify potential causes of this discrepancy.
  • Design: Retrospective case-control study.
  • Setting: Urban academic medical center with high case-mix indices (Medicare approximately 2.4, non-Medicare approximately 2.0).
  • Participants: 102 surgical inpatients with VTE (September 2012 to October 2015) matched with controls for age, gender, and type of procedure.
  • Measurements: Prevalence of common VTE risk factors, length of stay, number of procedures, index operation times, and postoperative bed rest > 12 hours were assessed. Utilization of and compliance with our VTE risk assessment tool were also investigated.
  • Results: Cases underwent more procedures and had longer lengths of stay and index procedures than controls. In addition, cases were more likely to have had > 12 hours of postoperative bed rest and central venous access than controls. Cases had more infections and were more likely to have severe lung disease, thrombophilia, and a history of prior VTE than controls. No differences in body mass index, tobacco use, current or previous malignancy, or VTE risk assessment form use were observed. Overall, care complexity and risk factors were equally important in determining VTE incidence. Our analyses also revealed lack of strict adherence to our VTE risk stratification protocol and frequent use of suboptimal prophylactic regimens.
  • Conclusion: Well-accepted risk factors and overall care complexity determine VTE risk. Preventing VTE in high-risk patients requires assiduous attention to detail in VTE risk assessment and in delivery of optimal prophylaxis. Patients at especially high risk may require customized prophylactic regimens.

Keywords: hospital-acquired venous thromboembolic disease; VTE prophylaxis, surgical patients.

Deep vein thrombosis (DVT) and pulmonary embolism (PE) are well-recognized causes of morbidity and mortality in surgical patients. Between 350,000 and 600,000 cases of venous thromboembolism (VTE) occur each year in the United States, and VTE is responsible for approximately 10% of preventable in-hospital fatalities.1-3 Given VTE’s impact on patients and the healthcare system and the fact that it is preventable, intense effort has been focused on developing more effective prophylactic measures to decrease its incidence.2-4 In 2008, the Surgeon General issued a “call to action” for increased efforts to prevent VTE.5

The American College of Chest Physicians (ACCP) guidelines subcategorize patients based on type of surgery. In addition, the ACCP guidelines support the use of a Caprini-based scoring system to aid in risk stratification and improve clinical decision-making (Table 1).4,6-9 In general, scores ≥ 5 qualify individuals as high risk. Based on their risk category, patients receive mechanical prophylaxis, chemical prophylaxis, or a combination of the 2. Lower-risk patients who are ambulatory typically receive only mechanical prophylaxis while in bed, whereas higher-risk patients receive a combination of mechanical prophylaxis and chemoprophylaxis measures.7 In general, low-molecular-weight heparin (40 mg daily) and low-dose unfractionated heparin (5000 units 3 times daily) have been the standard evidence-based options for chemoprophylaxis in surgical patients. Absolute contraindications for prophylaxis include active bleeding and known increased risk of bleeding based on patient- or procedure-specific factors.
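As a rough sketch of the stratification logic described above, a Caprini score can be mapped to a risk tier and a generic prophylaxis suggestion. The score ≥ 5 cut-point is stated in the text; the lower cut-points follow the commonly used Caprini/ACCP scheme, and the recommendation strings are illustrative, not guideline language:

```python
def caprini_tier(score: int) -> tuple[str, str]:
    """Map a Caprini score to a risk tier and a generic prophylaxis
    suggestion. Scores >= 5 are high risk, per the text; the lower
    cut-points (0-1, 2, 3-4) follow the commonly used scheme, and
    the suggestion strings are illustrative only."""
    if score <= 1:
        return "very low", "early ambulation"
    if score == 2:
        return "low", "mechanical prophylaxis (eg, SCDs) while in bed"
    if score <= 4:
        return "moderate", "chemoprophylaxis and/or mechanical prophylaxis"
    return "high", "chemoprophylaxis plus mechanical prophylaxis"

tier, suggestion = caprini_tier(6)
print(tier)  # high
```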

Caprini Risk Assessment Model

Our hospital, a 350-bed academic medical center in downtown Boston, MA, serving a diverse population with a very high case-mix index (2.4 Medicare and 2.0 non-Medicare), has strict protocols for VTE prophylaxis consistent with the ACCP guidelines and based on the Surgical Care Improvement Project (SCIP) measures published in 2006.10 The SCIP mandates allow for considerable surgeon discretion in the use of chemoprophylaxis for neurosurgical cases and general and orthopedic surgery cases deemed to be at high risk for bleeding. In addition, SCIP requires only that prophylaxis be initiated within 24 hours of surgical end time. Although recent audits revealed nearly 100% compliance with SCIP-mandated protocols, National Surgical Quality Improvement Program (NSQIP) data showed that the incidence of VTE events at our institution was higher than expected (observed/expected [O/E] = 1.32).

In order to determine the reasons for this mismatch between process and outcome performance, we investigated whether there were characteristics of our patient population that contributed to the higher than expected rates of VTE, and we scrutinized our VTE prophylaxis protocol to determine if there were aspects of our process that were also contributory.

Methods

Study Sample

This is a retrospective case-control study of surgical inpatients at our hospital during the period September 2012 to October 2015. Cases were identified as patients diagnosed with a VTE (DVT or PE). Controls were identified from a pool of surgical patients whose courses were not complicated by VTE during the same time frame as the cases and who were matched as closely as possible by procedure code, age, and gender.


Variables

Patient and hospital course variables that were analyzed included demographics, comorbidities, length of stay, number of procedures, index operation times, duration of postoperative bed rest, use of mechanical prophylaxis, and type of chemoprophylaxis and the time frame within which it was initiated. Data were collected via chart review using International Classification of Diseases-9 and -10 codes to identify surgical patients diagnosed with VTE within the study period. Demographic variables included age, sex, and ethnicity. Comorbidities included hypertension, diabetes, coronary artery disease, serious lung disease, previous or current malignancy, documented hypercoagulable state, and previous history of VTE. Body mass index (BMI) was also recorded. The disease-specific variables were not matched between the case and control groups, as these data were obtained retrospectively during data collection.

Analysis

Associations between cases and matched controls were analyzed using the paired t-test for continuous variables and McNemar’s test for categorical variables. P values < 0.05 were considered statistically significant. SAS Enterprise Guide 7.15 (SAS Institute, Cary, NC) was used for all statistical analyses.
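The matched-pair analysis can be sketched in a few lines; this is a minimal standard-library illustration with hypothetical counts, not the SAS code used for the study:

```python
import math
from statistics import mean, stdev

def paired_t(diffs: list[float]) -> float:
    """Paired t statistic from case-minus-control differences of a
    continuous variable (eg, length of stay)."""
    return mean(diffs) / (stdev(diffs) / math.sqrt(len(diffs)))

def mcnemar_chi2(b: int, c: int) -> float:
    """McNemar statistic (with continuity correction) from the two
    discordant-pair counts of a matched 2x2 table; values above
    3.84 correspond to P < 0.05 at 1 degree of freedom."""
    return (abs(b - c) - 1) ** 2 / (b + c)

# Hypothetical: 30 pairs where only the case had central venous
# access vs 6 pairs where only the control did.
print(round(mcnemar_chi2(30, 6), 2))  # 14.69, well above 3.84
```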

The requirement for informed consent was waived by our Institutional Review Board, as the study was initially deemed to be a quality improvement project, and all data used for this report were de-identified.

Results

Our retrospective case-control analysis included a sample of 102 surgical patients whose courses were complicated by VTE between September 2012 and October 2015. The cases were distributed among 6 different surgical categories (Figure 1): trauma (20%), cancer (10%), cardiovascular (21%), noncancer neurosurgery (28%), elective orthopedics (11%), and miscellaneous general surgery (10%).

Distribution of procedure type.

Comparisons between cases and controls in terms of patient demographics and risk factors are shown in Table 2. No statistically significant difference was observed in ethnicity or race between the 2 groups. Overall, cases had more hip/pelvis/leg fractures at presentation (P = 0.0008). The case group also had higher proportions of patients with postoperative bed rest greater than 12 hours (P = 0.009), central venous access (P < 0.0001), infection (P < 0.0001), and lower extremity edema documented during the hospitalization prior to development of DVT (P < 0.0001). Additionally, cases had significantly greater rates of previous VTE (P = 0.0004), inherited or acquired thrombophilia (P = 0.03), history of stroke (P = 0.0003), and severe lung disease, including pneumonia (P = 0.0008). No significant differences were noted between cases and matched controls in BMI (P = 0.43), current tobacco use (P = 0.71), current malignancy (P = 0.80), previous malignancy (P = 0.83), head trauma (P = 0.17), or acute cardiac disease (myocardial infarction or congestive heart failure; P = 0.12).

Patient Demographics and Risk Factors

Variables felt to indicate overall complexity of hospital course for cases as compared to controls are outlined in Table 3. Cases were found to have significantly longer lengths of stay (median, 15.5 days versus 3 days, P < 0.0001). To account for the possibility that the development of VTE contributed to the increased length of stay in the cases, we also looked at the duration between admission date and the date of VTE diagnosis and determined that cases still had a longer length of stay when this was accounted for (median, 7 days versus 3 days, P < 0.0001). A much higher proportion of cases underwent more than 1 procedure compared to controls (P < 0.0001), and cases had significantly longer index operations as compared to controls (P = 0.002).

Complexity of Care


Seventeen cases received heparin on induction during their index procedure, compared to 23 controls (P = 0.24). Additionally, 63 cases began a prophylaxis regimen within 24 hours of surgery end time, compared to 68 controls (P = 0.24). The chemoprophylactic regimens utilized in cases and in controls are summarized in Figure 2. Of note, only 26 cases and 32 controls received standard prophylactic regimens with no missed doses (heparin 5000 units 3 times daily or enoxaparin 40 mg daily). Additionally, in over half of cases and a third of controls, nonstandard regimens were ordered. Examples of nonstandard regimens included nonstandard heparin or enoxaparin doses, low-dose warfarin, or aspirin alone. In most cases, nonstandard regimens were justified on the basis of high risk for bleeding.

Frequencies of prophylactic regimens utilized.

Mechanical prophylaxis with pneumatic sequential compression devices (SCDs) was ordered in 93 (91%) cases and 87 (85%) controls; however, we were unable to accurately document uniform compliance in the use of these devices.

With regard to evaluation of our process measures, we found that only 17% of cases and controls combined had a VTE risk assessment in the chart; when one was present, it was often incomplete or completed inaccurately.


Discussion

The goal of this study was to identify factors (patient characteristics and/or processes of care) that may be contributing to the higher than expected incidence of VTE events at our medical center, despite internal audits suggesting near perfect compliance with SCIP-mandated protocols. We found that in addition to the usual risk factors for VTE, an overarching theme of our case cohort was their high complexity of illness. At baseline, these patients had significantly greater rates of stroke, thrombophilia, severe lung disease, infection, and history of VTE than controls. Moreover, the hospital courses of cases were significantly more complex than those of controls, as these patients had more procedures, longer lengths of stay and longer index operations, higher rates of postoperative bed rest exceeding 12 hours, and more prevalent central venous access than controls (Tables 2 and 3). Several of these risk factors have been found to contribute to VTE development despite compliance with prophylaxis protocols.

Cassidy et al reviewed a cohort of nontrauma general surgery patients who developed VTE despite receiving appropriate prophylaxis and found that both multiple operations and emergency procedures contributed to the failure of VTE prophylaxis.11 Similarly, Wang et al identified several independent risk factors for VTE despite thromboprophylaxis, including central venous access and infection, as well as intensive care unit admission, hospitalization for cranial surgery, and admission from a long-term care facility.12 While our study did not capture some of these additional factors considered by Wang et al, the presence of risk factors not captured in traditional assessment tools suggests that additional consideration for complex patients is warranted.


In addition to these nonmodifiable patient characteristics, aspects of our VTE prophylaxis processes likely contributed to the higher than expected rate of VTE. While the electronic medical record at our institution does contain a VTE risk assessment tool based on the Caprini score, we found that it often is not used at all, or is used incorrectly or incompletely, which likely reflects the fact that physicians are neither prompted nor required to complete the assessment before prescribing VTE prophylaxis.

There is a significant body of evidence demonstrating that mandatory computerized VTE risk assessments can effectively reduce VTE rates and that improved outcomes occur shortly after implementation. Cassidy et al demonstrated the benefits of instituting a hospital-wide, mandatory, Caprini-based computerized VTE risk assessment that provides prophylaxis/early ambulation recommendations. Two years after implementing this system, they observed an 84% reduction in DVTs (P < 0.001) and a 55% reduction in PEs (P < 0.001).13 Nimeri et al had similarly impressive success, achieving a reduction in their NSQIP O/E for PE/DVT in general surgery from 6.00 in 2010 to 0.82 (for DVTs) and 0.78 (for PEs) 5 years after implementation of mandatory VTE risk assessment (though they noted that the most dramatic reduction occurred 1 year after implementation).14 Additionally, a recent systematic review and meta-analysis by Borab et al found computerized VTE risk assessments to be associated with a significant decrease in VTE events.15

The risk assessment tool used at our institution is qualitative in nature, and current literature suggests that employing a more quantitative tool may yield improved outcomes. Numerous studies have highlighted the importance of identifying patients at very high risk for VTE, as higher risk may necessitate more careful consideration of their prophylactic regimens. Obi et al found patients with Caprini scores higher than 8 to be at significantly greater risk of developing VTE compared to patients with scores of 7 or 8. Also, patients with scores of 7 or 8 were significantly more likely to have a VTE compared to those with scores of 5 or 6.16 In another study, Lobastov et al identified Caprini scores of 11 or higher as representing an extremely high-risk category for which standard prophylaxis regimens may not be effective.17 Thus, while having mandatory risk assessment has been shown to dramatically decrease VTE incidence, it is important to consider the magnitude of the numerical risk score. This is of particular importance at medical centers with high case-mix indices where patients at the highest risk might need to be managed with different prophylactic guidelines.
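Read as code, the cut-points reported in these studies partition the conventional high-risk group (score ≥ 5) into finer bands; a sketch in which the thresholds come from the cited studies but the band labels are illustrative:

```python
def extended_caprini_band(score: int) -> str:
    """Finer-grained bands within the conventional high-risk group,
    using the cut-points reported by Obi et al (> 8 vs 7-8 vs 5-6)
    and Lobastov et al (>= 11); labels are illustrative."""
    if score >= 11:
        return "extremely high"  # standard prophylaxis may be inadequate
    if score > 8:
        return "very high"
    if score >= 7:
        return "high (7-8)"
    if score >= 5:
        return "high (5-6)"
    return "below high risk"

print(extended_caprini_band(12))  # extremely high
```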

Another notable aspect of the process at our hospital was the wide variation in the types of prophylactic regimens ordered and in adherence to what was ordered. Only 25.5% of patients were maintained on a standard prophylactic regimen with no missed doses (heparin 5000 units every 8 hours or enoxaparin 40 mg daily). Thus, the vast majority of the patients who went on to develop VTE either were prescribed a nontraditional prophylaxis regimen or missed doses of standard agents. The need for secondary surgical procedures or other invasive interventions may explain many, but not all, of the missed doses.

The timing of prophylaxis initiation for our patients was also found to deviate from accepted standards. Only 16.8% of cases received prophylaxis upon induction of anesthesia, and furthermore, 38% of cases did not receive any anticoagulation within 24 hours of their index operation. While this variability in prophylaxis implementation was acceptable within the SCIP guidelines based on “high risk for bleeding” or other considerations, it likely contributed to our suboptimal outcomes. The variations and interruptions in prophylactic regimens speak to barriers that have previously been reported as contributing factors to noncompliance with VTE prophylaxis.18


Given these known barriers and the observed underutilization and improper use of our risk assessment tool, we have recently changed our surgical admission order sets such that a mandatory quantitative risk assessment must be done for every surgical patient at the time of admission/operation before other orders can be completed. Following completion of the assessment, the physician will be presented with an appropriate standard regimen based on the individual patient’s risk assessment. Early results of our VTE quality improvement project have been satisfying: in the most recent NSQIP semi-annual report, our O/E for VTE was 0.74, placing us in the first decile. Some of these early reports may simply be the product of the Hawthorne effect; however, we are encouraged by the early improvements seen in other research. While we are hopeful that these changes will result in sustainable improvements in outcomes, patients at extremely high risk may require novel weight-based or otherwise customized aggressive prophylactic regimens. Such regimens have already been proposed for arthroplasty and other high-risk patients.
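In software terms, the order-set change described above is a hard gate: no other admission orders until a quantitative risk assessment is on the chart, after which a standard regimen is presented. A minimal sketch, in which the field names and regimen strings are hypothetical rather than taken from our EMR:

```python
def admission_orders_allowed(chart: dict) -> bool:
    """Return True only once a completed quantitative VTE risk
    assessment (here, an integer Caprini score) is documented;
    the mandatory order set blocks other orders until then."""
    return isinstance(chart.get("caprini_score"), int)

def suggest_regimen(chart: dict) -> str:
    """After the assessment, present a standard regimen keyed to the
    score (regimen strings are illustrative)."""
    if not admission_orders_allowed(chart):
        raise ValueError("complete the VTE risk assessment first")
    if chart["caprini_score"] >= 5:
        return "chemoprophylaxis + sequential compression devices"
    return "mechanical prophylaxis while in bed"

chart = {"patient_id": "12345"}
print(admission_orders_allowed(chart))  # False: assessment not done
chart["caprini_score"] = 7
print(suggest_regimen(chart))  # chemoprophylaxis + sequential compression devices
```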

Future research may identify other risk factors not captured by traditional risk assessments. In addition, research should continue to explore the use and efficacy of standard prophylactic regimens in these populations to help determine if they are sufficient. Currently, weight-based low-molecular-weight heparin dosing and alternative regimens employing fondaparinux are under investigation for very-high-risk patients.19

There were several limitations to the present study. First, due to the retrospective design of our study, we could collect only data that had been uniformly recorded in the charts throughout the study period. Second, we were unable to accurately assess compliance with mechanical prophylaxis. While our chart review showed that the vast majority of cases and controls were ordered to have mechanical prophylaxis, it is impossible to document in a retrospective analysis how often these devices were used appropriately. Anecdotal observation suggests that once patients are out of post-anesthesia or critical care units, SCD use is not standardized. The inability to measure compliance precisely may lead to an overestimation of our compliance with prophylaxis. Finally, because our study included only patients who underwent surgery at our hospital, our observations may not be generalizable outside our institution.


Conclusion

Our study findings reinforce the importance of attention to detail in VTE risk assessment and in ordering and administering VTE prophylactic regimens, especially in high-risk surgical patients. While we adhered to the SCIP-mandated prophylaxis requirements, the complexity of our patients and our lack of a truly standardized approach to risk assessment and prophylactic regimens resulted in suboptimal outcomes. Stricter, more quantitative mandatory VTE risk assessment and highly standardized VTE prophylaxis regimens are required to achieve optimal outcomes.

Corresponding author: Jason C. DeGiovanni, MS, BA, [email protected].

Financial disclosures: None.

From Tufts Medical Center, Boston, MA.

Abstract

  • Objective: Audits at our academic medical center revealed near 100% compliance with protocols for perioperative venous thromboembolism (VTE) prophylaxis, but recent National Surgical Quality Improvement Program data demonstrated a higher than expected incidence of VTE (observed/expected = 1.32). The objective of this study was to identify potential causes of this discrepancy.
  • Design: Retrospective case-control study.
  • Setting: Urban academic medical center with high case-mix indices (Medicare approximately 2.4, non-Medicare approximately 2.0).
  • Participants: 102 surgical inpatients with VTE (September 2012 to October 2015) matched with controls for age, gender, and type of procedure.
  • Measurements: Prevalence of common VTE risk factors, length of stay, number of procedures, index operation times, and postoperative bed rest > 12 hours were assessed. Utilization of and compliance with our VTE risk assessment tool was also investigated.
  • Results: Cases underwent more procedures and had longer lengths of stay and index procedures than controls. In addition, cases were more likely to have had > 12 hours of postoperative bed rest and central venous access than controls. Cases had more infections and were more likely to have severe lung disease, thrombophilia, and a history of prior VTE than controls. No differences in body mass index, tobacco use, current or previous malignancy, or VTE risk assessment form use were observed. Overall, care complexity and risk factors were equally important in determining VTE incidence. Our analyses also revealed lack of strict adherence to our VTE risk stratification protocol and frequent use of suboptimal prophylactic regimens.
  • Conclusion: Well-accepted risk factors and overall care complexity determine VTE risk. Preventing VTE in high-risk patients requires assiduous attention to detail in VTE risk assessment and in delivery of optimal prophylaxis. Patients at especially high risk may require customized prophylactic regimens.

Keywords: hospital-acquired venous thromboembolic disease; VTE prophylaxis, surgical patients.

Deep vein thrombosis (DVT) and pulmonary embolism (PE) are well-recognized causes of morbidity and mortality in surgical patients. Between 350,000 and 600,000 cases of venous thromboembolism (VTE) occur each year in the United States, and it is responsible for approximately 10% of preventable in-hospital fatalities.1-3 Given VTE’s impact on patients and the healthcare system and the fact that it is preventable, intense effort has been focused on developing more effective prophylactic measures to decrease its incidence.2-4 In 2008, the surgeon general issued a “call to action” for increased efforts to prevent VTE.5

The American College of Chest Physicians (ACCP) guidelines subcategorize patients based on type of surgery. In addition, the ACCP guidelines support the use of a Caprini-based scoring system to aid in risk stratification and improve clinical decision-making (Table 1).4,6-9 In general, scores ≥ 5 qualify individuals as high risk. Based on their risk category, patients receive mechanical prophylaxis, chemical prophylaxis, or a combination of the 2. Lower-risk patients who are ambulatory typically receive only mechanical prophylaxis while in bed, whereas higher-risk patients receive a combination of mechanical prophylaxis and chemoprophylaxis.7 In general, low-molecular-weight heparin (enoxaparin 40 mg daily) and low-dose unfractionated heparin (5000 units 3 times daily) have been the standard evidence-based options for chemoprophylaxis in surgical patients. Absolute contraindications to prophylaxis include active bleeding and a known increased risk of bleeding based on patient- or procedure-specific factors.

Caprini Risk Assessment Model
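The score-to-regimen logic described above can be sketched in code. This is a simplified illustration only: the category cut-points and regimen strings are assumptions for demonstration, not a reproduction of the ACCP tables or any institutional order set.

```python
def caprini_category(score: int) -> str:
    """Map a Caprini score to an ACCP-style risk category.

    Cut-points are a simplified illustration (scores >= 5 qualify
    as high risk, per the text), not an exact reproduction of any
    institutional protocol.
    """
    if score <= 0:
        return "very low"
    if score <= 2:
        return "low"
    if score <= 4:
        return "moderate"
    return "high"


def suggested_prophylaxis(score: int, high_bleeding_risk: bool = False) -> str:
    """Suggest a prophylaxis strategy for a given Caprini score.

    Chemoprophylaxis is withheld when a bleeding contraindication is
    flagged, mirroring the absolute contraindications noted in the text.
    """
    if high_bleeding_risk or caprini_category(score) != "high":
        return "mechanical prophylaxis (SCDs) while in bed"
    # High risk, no bleeding contraindication: combine mechanical
    # prophylaxis with chemoprophylaxis (e.g., enoxaparin 40 mg daily
    # or heparin 5000 units 3 times daily).
    return "mechanical prophylaxis plus chemoprophylaxis"
```

For example, a patient with a Caprini score of 7 and no bleeding contraindication would map to combined mechanical and chemical prophylaxis under this sketch.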

Our hospital, a 350-bed academic medical center in downtown Boston, MA, serving a diverse population with a very high case-mix index (2.4 Medicare and 2.0 non-Medicare), has strict protocols for VTE prophylaxis consistent with the ACCP guidelines and based on the Surgical Care Improvement Project (SCIP) measures published in 2006.10 The SCIP mandates allow for considerable surgeon discretion in the use of chemoprophylaxis for neurosurgical cases and general and orthopedic surgery cases deemed to be at high risk for bleeding. In addition, SCIP requires only that prophylaxis be initiated within 24 hours of surgical end time. Although recent audits revealed nearly 100% compliance with SCIP-mandated protocols, National Surgical Quality Improvement Program (NSQIP) data showed that the incidence of VTE events at our institution was higher than expected (observed/expected [O/E] = 1.32).
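The NSQIP-style observed/expected (O/E) metric cited above compares the number of events that occurred with the number a risk model predicted. A minimal sketch, using hypothetical numbers rather than any NSQIP data:

```python
def observed_expected_ratio(events: list, predicted_risk: list) -> float:
    """Compute an observed/expected (O/E) event ratio.

    `events` flags whether each patient had a VTE; `predicted_risk` is
    each patient's model-estimated probability of VTE. The expected
    count is the sum of those probabilities, so an O/E above 1.0 means
    more events occurred than the risk model predicted. Inputs here
    are hypothetical illustrations, not study data.
    """
    observed = sum(events)
    expected = sum(predicted_risk)
    return observed / expected


# Hypothetical cohort: 4 observed events where the model expected 3.0
risks = [0.10, 0.40, 0.80, 0.90, 0.30, 0.50]
events = [False, True, True, True, False, True]
print(round(observed_expected_ratio(events, risks), 2))
```

An O/E of 1.32, as reported for our institution, would correspond to roughly a third more VTE events than the risk-adjusted model predicted.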

In order to determine the reasons for this mismatch between process and outcome performance, we investigated whether there were characteristics of our patient population that contributed to the higher than expected rates of VTE, and we scrutinized our VTE prophylaxis protocol to determine if there were aspects of our process that were also contributory.

Methods

Study Sample

This is a retrospective case-control study of surgical inpatients at our hospital during the period September 2012 to October 2015. Cases were identified as patients diagnosed with a VTE (DVT or PE). Controls were identified from a pool of surgical patients whose courses were not complicated by VTE during the same time frame as the cases and who were matched as closely as possible by procedure code, age, and gender.


Variables

Patient and hospital course variables analyzed included demographics, comorbidities, length of stay, number of procedures, index operation times, duration of postoperative bed rest, use of mechanical prophylaxis, and type of chemoprophylaxis and the time frame within which it was initiated. Data were collected via chart review, using International Classification of Diseases-9 and -10 codes to identify surgical patients diagnosed with VTE within the study period. Demographic variables included age, sex, and ethnicity. Comorbidities included hypertension, diabetes, coronary artery disease, serious lung disease, previous or current malignancy, documented hypercoagulable state, and previous history of VTE. Body mass index (BMI) was also recorded. These disease-specific variables were not matched between the case and control groups, as these data were obtained retrospectively during data collection.

Analysis

Associations between cases and matched controls were analyzed using the paired t-test for continuous variables and McNemar’s test for categorical variables. P values < 0.05 were considered statistically significant. SAS Enterprise Guide 7.15 (SAS Institute, Cary, NC) was used for all statistical analyses.
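The two matched-pair analyses named above can be reproduced in Python. The data below are synthetic stand-ins (not the study's): paired index-operation times for the t-test, and discordant-pair counts for an exact McNemar's test, which depends only on the pairs where case and control differ.

```python
from scipy.stats import binomtest, ttest_rel

# Synthetic matched-pair data: index operation minutes for
# 8 hypothetical case/control pairs (not the study's values).
case_minutes = [210, 185, 240, 160, 300, 175, 220, 195]
control_minutes = [120, 140, 150, 155, 180, 130, 160, 150]
t_stat, p_continuous = ttest_rel(case_minutes, control_minutes)

# McNemar's exact test for a paired binary factor (e.g., central
# venous access). Only discordant pairs matter: b pairs where the
# case was exposed but the control was not, and c the reverse.
b, c = 8, 2
p_binary = binomtest(min(b, c), b + c, 0.5).pvalue

print(f"paired t-test p = {p_continuous:.4f}")
print(f"McNemar exact p = {p_binary:.4f}")
```

The exact McNemar formulation (a two-sided binomial test on the discordant pairs) is one standard choice; large-sample chi-square versions of the test exist as well, and the paper does not say which variant was used.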

The requirement for informed consent was waived by our Institutional Review Board, as the study was initially deemed to be a quality improvement project, and all data used for this report were de-identified.

Results

Our retrospective case-control analysis included a sample of 102 surgical patients whose courses were complicated by VTE between September 2012 and October 2015. The cases were distributed among 6 different surgical categories (Figure 1): trauma (20%), cancer (10%), cardiovascular (21%), noncancer neurosurgery (28%), elective orthopedics (11%), and miscellaneous general surgery (10%).

Distribution of procedure type.

Comparisons between cases and controls in terms of patient demographics and risk factors are shown in Table 2. No statistically significant difference was observed in ethnicity or race between the 2 groups. Overall, cases had more hip/pelvis/leg fractures at presentation (P = 0.0008). The case group also had higher proportions of patients with postoperative bed rest greater than 12 hours (P = 0.009), central venous access (P < 0.0001), infection (P < 0.0001), and lower extremity edema documented during the hospitalization prior to development of DVT (P < 0.0001). Additionally, cases had significantly greater rates of previous VTE (P = 0.0004), inherited or acquired thrombophilia (P = 0.03), history of stroke (P = 0.0003), and severe lung disease, including pneumonia (P = 0.0008). No significant differences were noted between cases and matched controls in BMI (P = 0.43), current tobacco use (P = 0.71), current malignancy (P = 0.80), previous malignancy (P = 0.83), head trauma (P = 0.17), or acute cardiac disease (myocardial infarction or congestive heart failure; P = 0.12).

Patient Demographics and Risk Factors

Variables felt to indicate overall complexity of hospital course for cases as compared to controls are outlined in Table 3. Cases were found to have significantly longer lengths of stay (median, 15.5 days versus 3 days, P < 0.0001). To account for the possibility that the development of VTE contributed to the increased length of stay in the cases, we also looked at the duration between admission date and the date of VTE diagnosis and determined that cases still had a longer length of stay when this was accounted for (median, 7 days versus 3 days, P < 0.0001). A much higher proportion of cases underwent more than 1 procedure compared to controls (P < 0.0001), and cases had significantly longer index operations as compared to controls (P = 0.002).

Complexity of Care


Seventeen cases received heparin on induction during their index procedure, compared to 23 controls (P = 0.24). Additionally, 63 cases began a prophylaxis regimen within 24 hours of surgery end time, compared to 68 controls (P = 0.24). The chemoprophylactic regimens utilized in cases and in controls are summarized in Figure 2. Of note, only 26 cases and 32 controls received standard prophylactic regimens with no missed doses (heparin 5000 units 3 times daily or enoxaparin 40 mg daily). Additionally, in over half of cases and a third of controls, nonstandard regimens were ordered. Examples of nonstandard regimens included nonstandard heparin or enoxaparin doses, low-dose warfarin, or aspirin alone. In most cases, nonstandard regimens were justified on the basis of high risk for bleeding.

Frequencies of prophylactic regimens utilized.

Mechanical prophylaxis with pneumatic sequential compression devices (SCDs) was ordered in 93 (91%) cases and 87 (85%) controls; however, we were unable to accurately document uniform compliance in the use of these devices.

With regard to evaluation of our process measures, we found that only 17% of cases and controls combined actually had a VTE risk assessment in their chart, and when one was present, it was often incomplete or inaccurate.


Discussion

The goal of this study was to identify factors (patient characteristics and/or processes of care) that may be contributing to the higher than expected incidence of VTE events at our medical center, despite internal audits suggesting near perfect compliance with SCIP-mandated protocols. We found that in addition to usual risk factors for VTE, an overarching theme of our case cohort was their high complexity of illness. At baseline, these patients had significantly greater rates of stroke, thrombophilia, severe lung disease, infection, and history of VTE than controls. Moreover, the hospital courses of cases were significantly more complex than those of controls, as these patients had more procedures, longer lengths of stay and longer index operations, higher rates of postoperative bed rest exceeding 12 hours, and more prevalent central venous access than controls (Tables 2 and 3). Several of these risk factors have been found to contribute to VTE development despite compliance with prophylaxis protocols.

Cassidy et al reviewed a cohort of nontrauma general surgery patients who developed VTE despite receiving appropriate prophylaxis and found that both multiple operations and emergency procedures contributed to the failure of VTE prophylaxis.11 Similarly, Wang et al identified several independent risk factors for VTE despite thromboprophylaxis, including central venous access and infection, as well as intensive care unit admission, hospitalization for cranial surgery, and admission from a long-term care facility.12 While our study did not capture some of these additional factors considered by Wang et al, the presence of risk factors not captured in traditional assessment tools suggests that additional consideration for complex patients is warranted.


In addition to these nonmodifiable patient characteristics, aspects of our VTE prophylaxis processes likely contributed to the higher than expected rate of VTE. While the electronic medical record at our institution does contain a VTE risk assessment tool based on the Caprini score, we found it often is not used at all or is used incorrectly or incompletely, which likely reflects the fact that physicians are neither prompted nor required to complete the assessment before prescribing VTE prophylaxis.

There is a significant body of evidence demonstrating that mandatory computerized VTE risk assessments can effectively reduce VTE rates and that improved outcomes occur shortly after implementation. Cassidy et al demonstrated the benefits of instituting a hospital-wide, mandatory, Caprini-based computerized VTE risk assessment that provides prophylaxis/early ambulation recommendations. Two years after implementing this system, they observed an 84% reduction in DVTs (P < 0.001) and a 55% reduction in PEs (P < 0.001).13 Nimeri et al had similarly impressive success, achieving a reduction in their NSQIP O/E for PE/DVT in general surgery from 6.00 in 2010 to 0.82 (for DVTs) and 0.78 (for PEs) 5 years after implementation of mandatory VTE risk assessment (though they noted that the most dramatic reduction occurred 1 year after implementation).14 Additionally, a recent systematic review and meta-analysis by Borab et al found computerized VTE risk assessments to be associated with a significant decrease in VTE events.15

The risk assessment tool used at our institution is qualitative in nature, and current literature suggests that employing a more quantitative tool may yield improved outcomes. Numerous studies have highlighted the importance of identifying patients at very high risk for VTE, as higher risk may necessitate more careful consideration of their prophylactic regimens. Obi et al found patients with Caprini scores higher than 8 to be at significantly greater risk of developing VTE compared to patients with scores of 7 or 8. Also, patients with scores of 7 or 8 were significantly more likely to have a VTE compared to those with scores of 5 or 6.16 In another study, Lobastov et al identified Caprini scores of 11 or higher as representing an extremely high-risk category for which standard prophylaxis regimens may not be effective.17 Thus, while having mandatory risk assessment has been shown to dramatically decrease VTE incidence, it is important to consider the magnitude of the numerical risk score. This is of particular importance at medical centers with high case-mix indices where patients at the highest risk might need to be managed with different prophylactic guidelines.

Another notable aspect of the process at our hospital was the great variation in the types of prophylactic regimens ordered and in adherence to what was ordered. Only 25.5% of patients were maintained on a standard prophylactic regimen with no missed doses (heparin 5000 units every 8 hours or enoxaparin 40 mg daily). Thus, the vast majority of patients who went on to develop VTE either were prescribed a nontraditional prophylaxis regimen or missed doses of standard agents. The need for secondary surgical procedures or other invasive interventions may explain many, but not all, of the missed doses.

The timing of prophylaxis initiation for our patients was also found to deviate from accepted standards. Only 16.8% of cases received prophylaxis upon induction of anesthesia, and furthermore, 38% of cases did not receive any anticoagulation within 24 hours of their index operation. While this variability in prophylaxis implementation was acceptable within the SCIP guidelines based on “high risk for bleeding” or other considerations, it likely contributed to our suboptimal outcomes. The variations and interruptions in prophylactic regimens speak to barriers that have previously been reported as contributing factors to noncompliance with VTE prophylaxis.18


Given these known barriers and the observed underutilization and improper use of our risk assessment tool, we have recently changed our surgical admission order sets such that a mandatory quantitative risk assessment must be done for every surgical patient at the time of admission/operation before other orders can be completed. Following completion of the assessment, the physician is presented with an appropriate standard regimen based on the individual patient’s risk assessment. Early results of our VTE quality improvement project have been satisfying: in the most recent NSQIP semi-annual report, our O/E for VTE was 0.74, placing us in the first decile. Some of these early results may simply be the product of the Hawthorne effect; however, we are encouraged by the early improvements seen in other research. While we are hopeful that these changes will result in sustainable improvements in outcomes, patients at extremely high risk may require novel weight-based or otherwise customized aggressive prophylactic regimens. Such regimens have already been proposed for arthroplasty and other high-risk patients.

Future research may identify other risk factors not captured by traditional risk assessments. In addition, research should continue to explore the use and efficacy of standard prophylactic regimens in these populations to help determine if they are sufficient. Currently, weight-based low-molecular-weight heparin dosing and alternative regimens employing fondaparinux are under investigation for very-high-risk patients.19

There were several limitations to the present study. First, due to the retrospective design of our study, we could collect only data that had been uniformly recorded in the charts throughout the study period. Second, we were unable to accurately assess compliance with mechanical prophylaxis. While our chart review showed that the vast majority of cases and controls were ordered to have mechanical prophylaxis, it is impossible to document how often these devices were used appropriately in a retrospective analysis. Anecdotal observation suggests that once patients are out of post-anesthesia or critical care units, SCD use is not standardized. The inability to measure compliance precisely may be leading to an overestimation of our compliance with prophylaxis. Finally, because our study included only patients who underwent surgery at our hospital, our observations may not be generalizable outside our institution.


Conclusion

Our study findings reinforce the importance of attention to detail in VTE risk assessment and in ordering and administering VTE prophylactic regimens, especially in high-risk surgical patients. While we adhered to the SCIP-mandated prophylaxis requirements, the complexity of our patients and our lack of a truly standardized approach to risk assessment and prophylactic regimens resulted in suboptimal outcomes. Stricter, more quantitative, mandatory VTE risk assessment, along with highly standardized VTE prophylaxis regimens, is required to achieve optimal outcomes.

Corresponding author: Jason C. DeGiovanni, MS, BA, [email protected].

Financial disclosures: None.

References

1. Spyropoulos AC, Hussein M, Lin J, et al. Rates of symptomatic venous thromboembolism in US surgical patients: a retrospective administrative database study. J Thromb Thrombolysis. 2009;28:458-464.

2. Deitelzweig SB, Johnson BH, Lin J, et al. Prevalence of clinical venous thromboembolism in the USA: current trends and future projections. Am J Hematol. 2011;86:217-220.

3. Horlander KT, Mannino DM, Leeper KV. Pulmonary embolism mortality in the United States, 1979-1998: an analysis using multiple-cause mortality data. Arch Intern Med. 2003;163:1711-1717.

4. Guyatt GH, Akl EA, Crowther M, et al. Introduction to the ninth edition: antithrombotic therapy and prevention of thrombosis, 9th ed: American College of Chest Physicians Evidence-Based Clinical Practice Guidelines. Chest. 2012;141(suppl):48S-52S.

5. Office of the Surgeon General; National Heart, Lung, and Blood Institute. The Surgeon General’s Call to Action to Prevent Deep Vein Thrombosis and Pulmonary Embolism. Rockville, MD: Office of the Surgeon General; 2008. www.ncbi.nlm.nih.gov/books/NBK44178/. Accessed May 2, 2019.

6. Pannucci CJ, Swistun L, MacDonald JK, et al. Individualized venous thromboembolism risk stratification using the 2005 Caprini score to identify the benefits and harms of chemoprophylaxis in surgical patients: a meta-analysis. Ann Surg. 2017;265:1094-1102.

7. Caprini JA, Arcelus JI, Hasty JH, et al. Clinical assessment of venous thromboembolic risk in surgical patients. Semin Thromb Hemost. 1991;17(suppl 3):304-312.

8. Caprini JA. Risk assessment as a guide for the prevention of the many faces of venous thromboembolism. Am J Surg. 2010;199:S3-S10.

9. Gould MK, Garcia DA, Wren SM, et al. Prevention of VTE in nonorthopedic surgical patients: antithrombotic therapy and prevention of thrombosis, 9th ed: American College of Chest Physicians Evidence-Based Clinical Practice Guidelines. Chest. 2012;141(2 Suppl):e227S-e277S.

10. The Joint Commission. Surgical Care Improvement Project (SCIP) Measure Information Form (Version 2.1c). www.jointcommission.org/surgical_care_improvement_project_scip_measure_information_form_version_21c/. Accessed June 22, 2016.

11. Cassidy MR, Macht RD, Rosenkranz P, et al. Patterns of failure of a standardized perioperative venous thromboembolism prophylaxis protocol. J Am Coll Surg. 2016;222:1074-1081.

12. Wang TF, Wong CA, Milligan PE, et al. Risk factors for inpatient venous thromboembolism despite thromboprophylaxis. Thromb Res. 2014;133:25-29.

13. Cassidy MR, Rosenkranz P, McAneny D. Reducing postoperative venous thromboembolism complications with a standardized risk-stratified prophylaxis protocol and mobilization program. J Am Coll Surg. 2014;218:1095-1104.

14. Nimeri AA, Gamaleldin MM, McKenna KL, et al. Reduction of venous thromboembolism in surgical patients using a mandatory risk-scoring system: 5-year follow-up of an American College of Surgeons National Quality Improvement Program. Clin Appl Thromb Hemost. 2017;23:392-396.

15. Borab ZM, Lanni MA, Tecce MG, et al. Use of computerized clinical decision support systems to prevent venous thromboembolism in surgical patients: a systematic review and meta-analysis. JAMA Surg. 2017;152:638-645.

16. Obi AT, Pannucci CJ, Nackashi A, et al. Validation of the Caprini venous thromboembolism risk assessment model in critically ill surgical patients. JAMA Surg. 2015;150:941-948.

17. Lobastov K, Barinov V, Schastlivtsev I, et al. Validation of the Caprini risk assessment model for venous thromboembolism in high-risk surgical patients in the background of standard prophylaxis. J Vasc Surg Venous Lymphat Disord. 2016;4:153-160.

18. Kakkar AK, Cohen AT, Tapson VF, et al. Venous thromboembolism risk and prophylaxis in the acute care hospital setting (ENDORSE survey): findings in surgical patients. Ann Surg. 2010;251:330-338.

19. Smythe MA, Priziola J, Dobesh PP, et al. Guidance for the practical management of the heparin anticoagulants in the treatment of venous thromboembolism. J Thromb Thrombolysis. 2016;41:165-186.

Issue
Journal of Clinical Outcomes Management - 26(3)
Page Number
117-124
Display Headline
Mismatch Between Process and Outcome Measures for Hospital-Acquired Venous Thromboembolism in a Surgical Cohort

Structured Approach to Venous Access Associated with Zero Risk of Pneumothorax During Cardiac Device Implant Procedures

Article Type
Changed
Thu, 04/23/2020 - 15:14
Display Headline
Structured Approach to Venous Access Associated with Zero Risk of Pneumothorax During Cardiac Device Implant Procedures

Iatrogenic pneumothorax, a serious acute complication, is reported to occur in 0.1% to 2% of permanent transvenous cardiac device implant procedures.1,2 A National Cardiovascular Data Registry analysis of data between January 2006 and December 2008 found that pneumothorax incidence after a new defibrillator implant was 0.5%.1 Among 4355 Danish patients undergoing a new device implant, 0.9% experienced pneumothorax requiring drainage and 0.7% had pneumothorax treated conservatively.2 Studies have shown a higher risk of complications when procedures were performed at low-volume centers compared with the highest-volume quartile (odds ratio, 1.26; 95% confidence interval, 1.05-1.52) or when procedures were performed by low-volume operators.1

Methods. At 2 community hospitals, a project to reduce pneumothorax risk related to new device implants was implemented. This project consisted of obtaining a pre-procedure venogram (right anterior oblique [RAO] view, 12–18 degrees, 42 cm magnification), creating a subcutaneous pocket first and then obtaining axillary venous access with a 4Fr micro-puncture needle, and obtaining a post-procedure chest radiograph. During venous access, the needle was never advanced beyond the inner border of the first rib. This new process was fully implemented by January 2015. A chart review of all patients who underwent a new device implant between January 2015 and July 2017 at the 2 community medical centers was performed.

Results. Seventy patients received new implants during the review period (31 female, 39 male). The median age was 78 years (range, 34–94 years), median body mass index was 29.05 (range, 17.3–67.9), median procedural time was 70 minutes (range, 26–146 minutes), and median fluoroscopic time was 6.4 minutes (range, 0.5–35.7 minutes). A total of 131 independent venous accesses were obtained to implant 42 pacemakers and 28 defibrillators (10 single, 54 dual, and 6 biventricular devices). Of these accesses, 127 were axillary and the remainder were cephalic. There was no incidence of pneumothorax reported during these venous accesses.

Discussion. A structured approach to venous access during device implants was associated with zero incidence of pneumothorax in a low-volume center where implants were performed by a low-volume trained operator. The venogram eliminates “blind attempts,” and the RAO view reduces the likelihood of going too posterior. Using caudal fluoroscopy and targeting the axillary vein, other groups have reported a 0% to 0.2% risk for acute pneumothorax in larger patient groups.3,4 Creating a subcutaneous pocket first allows the needle to be aligned more longitudinally along the course of the vein. The 4Fr needle increases the ratio of vein-to-needle surface area, reducing risk for pneumothorax.

Standardization of venous access can potentially reduce iatrogenic pneumothorax risk to a never event, similar to the approach used to prevent central line–associated bloodstream infections.5

Benjamin Carmel
Lake Erie College of Osteopathic Medicine
Bradenton, FL

Indiresha R. Iyer, MD
Case Western Reserve University
Cleveland, OH

Corresponding author: Indiresha R. Iyer, MD, Indiresha.iyer@uhhospitals.org.

Financial disclosures: None.

References

1. Freeman JV, Wang Y, Curtis JP, et al. The relation between hospital procedure volume and complications of cardioverter-defibrillator implantation from the implantable cardioverter-defibrillator registry. J Am Coll Cardiol. 2010;56:1133-1139.

2. Kirkfeldt RE, Johansen JB, Nohr EA, et al. Complications after cardiac implantable electronic device implantations: an analysis of a complete, nationwide cohort in Denmark. Eur Heart J. 2014;35:1186-1194.

3. Yang F, Kulbak GA. New trick to a routine procedure: taking the fear out of the axillary vein stick using the 35° caudal view. Europace. 2015;17:1157-1160.

4. Hettiarachchi EMS, Arsene C, Fares S, et al. Fluoroscopy-guided axillary vein puncture, a reliable method to prevent acute complications associated with pacemaker, defibrillator, and cardiac resynchronization therapy leads insertion. J Cardiovasc Dis Diagn. 2014;2:136.

5. Chu H, Cosgrove S, Sexton B, et al. An intervention to decrease catheter-related bloodstream infections in the ICU. N Engl J Med. 2006;355:2725-2732.

Issue
Journal of Clinical Outcomes Management - 26(3)
Page Number
115

Iatrogenic pneumothorax, an acute serious complication, is reported to occur in 0.1% to 2% of permanent trans-venous cardiac device implant procedures. 1,2 A National Cardiovascular Data Registry analysis of data between January 2006 and December 2008 found that pneumothorax incidence after a new defibrillator implant was 0.5%. 1 Among 4355 Danish patients undergoing a new device implant, 0.9% experienced pneumothorax requiring drainage and 0.7% had pneumothorax treated conservatively. 2 Studies have shown a higher risk of complications when procedures were performed at low-volume centers compared with the highest volume quartile (odds ratio, 1.26; 95% confidence interval, 1.05-1.52) or when procedures were performed by low-volume operators. 1

Methods. At 2 community hospitals, a project to reduce pneumothorax risk related to new device implants was implemented. This project consisted of obtaining a pre-procedure venogram (right anterior oblique [RAO] view, 12–18 degrees, 42 cm magnification), creating a subcutaneous pocket first and then obtaining axillary venous access with a 4Fr micro-puncture needle, and obtaining a post-procedure chest radiograph. During venous access, the needle was never advanced beyond the inner border of the first rib. This new process was fully implemented by January 2015. A chart review of all patients who underwent a new device implant between January 2015 and July 2017 at the 2 community medical centers was performed.

Results. Seventy patients received new implants during the review period (31 female, 39 male). The median age was 78 years (range, 34–94 years), median body mass index was 29.05 (range, 17.3–67.9), median procedural time was 70 minutes (range, 26–146 minutes), and median fluoroscopic time was 6.4 minutes (range, 0.5–35.7 minutes). A total of 131 independent venous accesses were obtained to implant 42 pacemakers and 28 defibrillators (10 single, 54 dual, and 6 biventricular devices). Of these accesses, 127 were axillary and the remainder were cephalic. There was no incidence of pneumothorax reported during these venous accesses.

Discussion. A structured approach to venous access during device implants was associated with zero pneumothoraces at low-volume centers where implants were performed by a low-volume trained operator. The venogram eliminates "blind" access attempts, and the RAO view reduces the likelihood of directing the needle too posteriorly. Using caudal fluoroscopy and targeting the axillary vein, other groups have reported a 0% to 0.2% risk of acute pneumothorax in larger patient cohorts.3,4 Creating the subcutaneous pocket first allows the needle to be aligned more longitudinally along the course of the vein. The 4Fr needle increases the ratio of vein-to-needle surface area, reducing the risk of pneumothorax.
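A caveat worth quantifying: zero observed events in 70 patients bounds, but does not eliminate, the underlying risk. A minimal sketch of that uncertainty (our own illustration, not a calculation from the report), using the exact binomial (Clopper-Pearson) upper bound for zero events and the familiar "rule of three" approximation:

```python
def zero_event_upper_bound(n: int, confidence: float = 0.95) -> float:
    """Exact one-sided upper confidence bound for an event rate
    when 0 events are observed in n trials (Clopper-Pearson)."""
    alpha = 1.0 - confidence
    return 1.0 - alpha ** (1.0 / n)

n = 70  # patients in this series
exact = zero_event_upper_bound(n)  # exact bound, ~4.2%
rule_of_three = 3.0 / n            # quick approximation, ~4.3%

print(f"exact 95% upper bound: {exact:.1%}")
print(f"rule of three:         {rule_of_three:.1%}")
```

So even with no events observed, the data are compatible with a true rate of up to roughly 4%; larger series would be needed to claim a rate below the 0.1%-2% background.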

Standardization of venous access could potentially reduce iatrogenic pneumothorax to a never event, similar to the approach used to prevent central line–associated bloodstream infections.5

Benjamin Carmel
Lake Erie College of Osteopathic Medicine
Bradenton, FL

Indiresha R. Iyer, MD
Case Western Reserve University
Cleveland, OH

Corresponding author: Indiresha R. Iyer, MD, Indiresha.iyer@ uhhospitals.org.

Financial disclosures: None.


References

1. Freeman JV, Wang Y, Curtis JP, et al. The relation between hospital procedure volume and complications of cardioverter-defibrillator implantation from the implantable cardioverter-defibrillator registry. J Am Coll Cardiol. 2010;56:1133-1139.

2. Kirkfeldt RE, Johansen JB, Nohr EA, et al. Complications after cardiac implantable electronic device implantations: an analysis of a complete, nationwide cohort in Denmark. Eur Heart J. 2014;35:1186-1194.

3. Yang F, Kulbak GA. New trick to a routine procedure: taking the fear out of the axillary vein stick using the 35° caudal view. Europace. 2015;17:1157-1160.

4. Hettiarachchi EMS, Arsene C, Fares S, et al. Fluoroscopy-guided axillary vein puncture, a reliable method to prevent acute complications associated with pacemaker, defibrillator, and cardiac resynchronization therapy leads insertion. J Cardiovasc Dis Diagn. 2014;2:136.

5. Chu H, Cosgrove S, Sexton B, et al. An intervention to decrease catheter-related bloodstream infections in the ICU. N Engl J Med. 2006;355:2725-2732.


Issue
Journal of Clinical Outcomes Management - 26(3)
Page Number
115
Display Headline
Structured Approach to Venous Access Associated with Zero Risk of Pneumothorax During Cardiac Device Implant Procedures

Delayed Cardioversion Noninferior to Early Cardioversion in Recent-Onset Atrial Fibrillation

Article Type
Changed
Thu, 04/23/2020 - 15:20
Display Headline
Delayed Cardioversion Noninferior to Early Cardioversion in Recent-Onset Atrial Fibrillation

Study Overview

Objective. To assess whether immediate restoration of sinus rhythm is necessary in patients presenting to the emergency department with hemodynamically stable, recent-onset (< 36 hr), symptomatic atrial fibrillation.

Design. Multicenter, randomized, open-label, noninferiority trial (RACE 7 ACWAS; Rate Control versus Electrical Cardioversion Trial 7, Acute Cardioversion versus Wait and See).

Setting and participants. 15 hospitals in the Netherlands, including 3 academic hospitals, 8 nonacademic teaching hospitals, and 4 nonteaching hospitals. Patients 18 years of age or older with recent-onset (< 36 hr), symptomatic atrial fibrillation, without signs of myocardial ischemia or a history of persistent atrial fibrillation, who presented to the emergency department were randomized in a 1:1 ratio to either a wait-and-see approach or early cardioversion. The wait-and-see approach consisted of the administration of rate-control medication (intravenous or oral beta-adrenergic-receptor blocking agents, nondihydropyridine calcium-channel blockers, or digoxin) to achieve a heart rate of 110 beats per minute or less and symptomatic relief. Patients were then discharged with an outpatient visit scheduled for the next day and a referral for cardioversion as close as possible to 48 hours after symptom onset. The early cardioversion group underwent pharmacologic cardioversion with flecainide unless contraindicated, in which case electrical cardioversion was performed.

Main outcome measures. Primary outcome was the presence of sinus rhythm on electrocardiogram (ECG) recorded at the 4-week trial visit. Secondary endpoints included the duration of the index visit at the emergency department, emergency department visits related to atrial fibrillation, cardiovascular complications, and time until recurrence of atrial fibrillation.

Main results. From October 2014 through September 2018, 437 patients underwent randomization, with 218 patients assigned to the delayed cardioversion group and 219 to the early cardioversion group. Mean age was 65 years, and the majority of patients (60%; n = 261) were men. The primary end point, presence of sinus rhythm on the ECG recorded at the 4-week visit, was met in 193 of 212 patients (91%) in the delayed cardioversion group and in 202 of 215 patients (94%) in the early cardioversion group. The between-group difference of –2.9 percentage points (95% confidence interval [CI], –8.2 to 2.2; P = 0.005 for noninferiority) met the criteria for noninferiority of the wait-and-see approach.
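The noninferiority comparison can be reproduced approximately from the reported counts. The sketch below is our own illustration: it uses a simple Wald interval (the published interval, –8.2 to 2.2, was computed with a different method, so the bounds differ slightly) and assumes a noninferiority margin of –10 percentage points for the lower CI bound:

```python
import math

# Reported counts: sinus rhythm on ECG at the 4-week visit
x_delayed, n_delayed = 193, 212
x_early, n_early = 202, 215

p_d = x_delayed / n_delayed   # ~0.91
p_e = x_early / n_early       # ~0.94
diff = p_d - p_e              # ~-0.029, i.e., -2.9 percentage points

# Wald 95% CI for the risk difference (approximation only)
se = math.sqrt(p_d * (1 - p_d) / n_delayed + p_e * (1 - p_e) / n_early)
lo, hi = diff - 1.96 * se, diff + 1.96 * se

margin = -0.10  # assumed noninferiority margin: -10 percentage points
print(f"difference: {diff * 100:.1f} pp, 95% CI ({lo * 100:.1f}, {hi * 100:.1f})")
print("noninferior" if lo > margin else "inconclusive")
```

Because the entire confidence interval lies above the margin, the wait-and-see approach is declared noninferior even though its point estimate is slightly lower.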

For secondary outcomes, the median duration of the index visit was 120 minutes (range, 60 to 253) in the delayed cardioversion group and 158 minutes (range, 110 to 228) in the early cardioversion group. The median difference between the 2 groups was 30 minutes (95% CI, 6 to 51 minutes). There was no significant difference in cardiovascular complications between the 2 groups. Fourteen of 212 patients (7%) in the delayed cardioversion group and 14 of 215 patients (7%) in the early cardioversion group had subsequent visits to the emergency department because of a recurrence of atrial fibrillation. Telemetric ECG recordings were available for 335 of the 437 patients. Recurrence of atrial fibrillation occurred in 49 of the 164 (30%) patients in the delayed cardioversion group and 50 of the 171 (29%) patients in the early cardioversion group.

In terms of treatment, conversion to sinus rhythm within 48 hours occurred spontaneously in 150 of 218 patients (69%) in the delayed cardioversion group after receiving rate-control medications only. Of the 218 patients, 61 (28%) had delayed cardioversion (9 by pharmacologic and 52 by electrical cardioversion) as per protocol and achieved sinus rhythm within 48 hours. In the early cardioversion group, conversion to sinus rhythm occurred spontaneously in 36 of 219 patients (16%) before the initiation of the cardioversion and in 171 of 219 (78%) after cardioversion (83 by pharmacologic and 88 by electrical).


Conclusion. For patients with recent-onset, symptomatic atrial fibrillation, allowing a short time for spontaneous conversion to sinus rhythm is reasonable as demonstrated by this noninferiority study.

Commentary

Atrial fibrillation accounts for nearly 0.5% of all emergency department visits, and this number is increasing.1,2 Patients commonly undergo immediate restoration of sinus rhythm by means of pharmacologic or electrical cardioversion. However, it is questionable whether immediate restoration of sinus rhythm is necessary, as spontaneous conversion to sinus rhythm occurs frequently. In addition, the safety of cardioversion between 12 and 48 hours after the onset of atrial fibrillation is questionable.3,4

In this pragmatic trial, the findings suggest that rate-control therapy alone achieved prompt symptom relief in almost all eligible patients, had a low risk of complications, and reduced the median length of stay in the emergency department to 2 hours. Independent of cardioversion strategy, the authors stressed the importance of managing stroke risk when patients present to the emergency department with atrial fibrillation. In this trial, 2 patients had cerebral embolism even though both were started on anticoagulation at the index visit. One patient, from the delayed cardioversion group, was on dabigatran after spontaneous conversion to sinus rhythm and had an event 5 days after the index visit. The other, from the early cardioversion group, was on rivaroxaban and had an event 10 days after electrical cardioversion. For the results of this trial to be broadly applicable, exclusion of intra-atrial thrombus on transesophageal echocardiography may be necessary when the time of onset of atrial fibrillation is unclear.

There are several limitations of this study. First, the study included only 171 of the 3706 patients (4.6%) screened systematically at the 2 academic centers, while including 266 patients from 13 centers without systematic screening. The large proportion of screened patients who were excluded limits the generalizability of the results. Second, the reported incidence of recurrent atrial fibrillation within 4 weeks after randomization likely underestimates the true recurrence rate, since the trial used intermittent monitoring. Although the incidence of about 30% was similar between the 2 groups, the authors suggested that the probability of recurrence of atrial fibrillation was not affected by the management approach during the acute event. Finally, for these results to be applicable in the general population, defined treatment algorithms and access to prompt follow-up are needed, and these may not be practical in other clinical settings.2,5

Applications for Clinical Practice

The current study demonstrated that immediate cardioversion is not necessary for patients with recent-onset, symptomatic atrial fibrillation in the emergency department. Allowing a short time for spontaneous conversion to sinus rhythm is reasonable as long as the total time in atrial fibrillation remains less than 48 hours. Careful attention to anticoagulation is critical, because stroke has been associated with atrial fibrillation duration between 24 and 48 hours.

—Ka Ming Gordon Ngai, MD, MPH

References

1. Rozen G, Hosseini SM, Kaadan MI, et al. Emergency department visits for atrial fibrillation in the United States: trends in admission rates and economic burden from 2007 to 2014. J Am Heart Assoc. 2018;7(15):e009024.

2. Healey JS, McIntyre WF. The RACE to treat atrial fibrillation in the emergency department. N Engl J Med. 2019 Mar 18.

3. Andrade JM, Verma A, Mitchell LB, et al. 2018 Focused update of the Canadian Cardiovascular Society guidelines for the management of atrial fibrillation. Can J Cardiol. 2018;34:1371-1392.

4. Nuotio I, Hartikainen JE, Grönberg T, et al. Time to cardioversion for acute atrial fibrillation and thromboembolic complications. JAMA. 2014;312:647-649.

5. Baugh CW, Clark CL, Wilson JW, et al. Creation and implementation of an outpatient pathway for atrial fibrillation in the emergency department setting: results of an expert panel. Acad Emerg Med. 2018;25:1065-1075.

Issue
Journal of Clinical Outcomes Management - 26(3)
Page Number
113-114


Issue
Journal of Clinical Outcomes Management - 26(3)
Page Number
113-114
Display Headline
Delayed Cardioversion Noninferior to Early Cardioversion in Recent-Onset Atrial Fibrillation

Does Vitamin D Supplementation Improve Lower Extremity Power and Function in Community-Dwelling Older Adults?

Article Type
Changed
Thu, 04/23/2020 - 15:18

Study Overview

Objective. To test the effect of 12 months of vitamin D supplementation on lower-extremity power and function in older community-dwelling adults screened for low serum 25-hydroxyvitamin D (25(OH)D).

Design. A single-center, double-blind, randomized placebo-controlled study in which participants were assigned to 800 IU of vitamin D3 supplementation or placebo daily and were followed over a total period of 12 months.

Setting and participants. A total of 100 community-dwelling men and women aged ≥ 60 years with serum 25(OH)D ≤ 20 ng/mL at screening participated. Participants were prescreened by phone, and were excluded if they met any of the following exclusion criteria: vitamin D supplement use > 600 IU/day (for age 60-70 years) or > 800 IU/day (for age ≥ 71 years); vitamin D injection within the previous 3 months; > 2 falls or 1 fall with injury in past year; use of cane, walker, or other indoor walking aid; history of kidney stones within past 3 years; hypercalcemia (serum calcium > 10.8 mg/dL); renal dysfunction (glomerular filtration rate, < 30 mL/min); history of liver disease, sarcoidosis, lymphoma, dysphagia, or other gastrointestinal disorder; neuromuscular disorder affecting lower-extremity function; hip replacement within the past year; cancer treatment in the past 3 years; treatment with thiazide diuretics > 37.5 mg, teriparatide, denosumab, or bisphosphonates within the past 2 years; oral steroids (for > 3 weeks in the past 6 months); and use of fat malabsorption products or anticonvulsive therapy.

Main outcome measures. The primary outcome was leg extensor power assessed using a computer-interfaced bilateral Keiser pneumatic leg press. Secondary outcomes to measure physical function included: (1) the backward tandem walk test (an indicator of balance and postural control during movement1); (2) Short Physical Performance Battery (SPPB) testing, which includes a balance assessment (ability to stand with feet positioned normally, semi-tandem, and tandem for 10 seconds), a timed 4-m walk, and a chair stand test (time to complete 5 repeated chair stands); (3) stair climbing (ie, time to climb 10 steps, as a measure of knee extensor strength and functional capacity); and (4) handgrip strength (measured using a dynamometer). Lean tissue mass was assessed by dual-energy X-ray absorptiometry (DEXA). Finally, other measures included serum total 25(OH)D levels measured at baseline, 4, 8, and 12 months, as well as 24-hour urine collection for urea-nitrogen and creatinine measurements.

Main results. Of the 2289 individuals screened for the study, 100 met eligibility criteria and underwent randomization to receive either 800 IU vitamin D supplementation daily (n = 49) or placebo (n = 51). Three patients (2 in the vitamin D group and 1 in the placebo group) were lost to follow-up. The mean age of all participants was 69.6 ± 6.9 years. In the vitamin D group versus the control group, respectively, the male:female ratio was 66:34 versus 63:37, and 75% versus 82% of participants were Caucasian. Mean body mass index was 28.2 ± 7.0 and mean serum 25(OH)D was 20.2 ± 6.7 ng/mL. At the end of the study (12 months), 70% of participants given vitamin D supplementation had 25(OH)D levels ≥ 30 ng/mL and all had levels ≥ 20 ng/mL. In the placebo group, the serum 25(OH)D level was ≥ 20 ng/mL in 54% and ≥ 30 ng/mL in 6%. The mean serum 25(OH)D level increased to 32.5 ± 5.1 ng/mL in the vitamin D–supplemented group, but no significant change was found in the placebo group (treatment × time, P < 0.001). Overall, serum 1,25(OH)2D3 levels did not differ between the 2 groups over the intervention period (time, P = 0.49; treatment × time, P = 0.27). Dietary intake of vitamin D, calcium, nitrogen, and protein did not differ between the 2 groups or change over time. The change in leg press power, function, and strength did not differ between the groups over 12 months (all treatment × time P values ≥ 0.60). A total of 27 falls were reported (14 in the vitamin D group versus 9 in the control group), of which 9 were associated with injuries. There was no significant change in lean body mass at the end of the study period in either group (treatment × time, P = 0.98).

Conclusion. In community-dwelling older adults with vitamin D deficiency (≤ 20 ng/mL), 12-month daily supplementation with 800 IU of vitamin D3 resulted in sufficient increases in serum 25(OH)D levels, but did not improve lower-extremity power, strength, or lean mass.

Commentary

Vitamin D deficiency is common in older adults (prevalence of about 41% in US adults ≥ 65 years, according to Forrest et al2) and is likely due to dietary deficiency, reduced sun exposure (lifestyle), and decreased intestinal calcium absorption. As such, vitamin D deficiency has long been a topic of interest and debate in geriatric medicine, as it relates to muscle weakness, which in turn increases susceptibility to falls.3 Notably, vitamin D receptors are expressed in human skeletal muscle,4 and in one study, 3 months of vitamin D supplementation led to an increase in type II skeletal muscle fibers in older women.5 Furthermore, a meta-analysis of 5 randomized controlled trials (RCTs)6 showed that vitamin D supplementation may reduce fall risk in older adults by 22% (corrected odds ratio, 0.78; 95% confidence interval, 0.64-0.92). In keeping with this theme of beneficial clinical effects, clinicians have long accepted and practiced routine vitamin D supplementation in the care of older adults.
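The 22% figure comes with a confidence interval, and an approximate P value can be back-calculated from that interval using the standard normal approximation on the log-odds scale. The sketch below is a generic illustration of this standard technique, not the meta-analysts' actual computation:

```python
import math

def p_from_or_ci(or_point, ci_low, ci_high):
    """Back-calculate an approximate two-sided P value from an odds
    ratio and its 95% CI, assuming normality on the log scale."""
    log_or = math.log(or_point)
    # a 95% CI spans 2 * 1.96 standard errors on the log scale
    se = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.96)
    z = log_or / se
    # two-sided P value from the standard normal distribution
    return math.erfc(abs(z) / math.sqrt(2))

# OR 0.78 (95% CI, 0.64-0.92) from the fall-risk meta-analysis
p = p_from_or_ci(0.78, 0.64, 0.92)
print(f"approximate two-sided P = {p:.3f}")  # about 0.007
```

A P value near 0.007 is consistent with the interval excluding 1.0, which is why the meta-analysis is read as showing a significant reduction in fall risk.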

In more recent years, the role of vitamin D supplementation in primary care has become controversial,7 as reflected in a recent paradigm shift away from routine supplementation for fall and fracture prevention in clinical practice.8 In a recent meta-analysis of 33 RCTs in older community-dwelling adults, supplementation with vitamin D, with or without calcium, did not reduce the incidence of hip fracture or total fractures.9 Moreover, the United States Preventive Services Task Force (USPSTF) recently published updated recommendations on the use of vitamin D supplementation for primary prevention of fractures10 and prevention of falls11 in community-dwelling adults. In these updated recommendations, the USPSTF indicated that insufficient evidence exists to recommend vitamin D supplementation to prevent fractures in men and premenopausal women, and recommended against vitamin D supplementation for prevention of falls. Finally, the USPSTF recommended against low-dose vitamin D (400 IU or less) supplementation for primary prevention of fractures in community-dwelling, postmenopausal women.10 Nevertheless, these statements do not apply to individuals with a prior history of osteoporotic fractures, increased risk of falls, or a diagnosis of vitamin D deficiency or osteoporosis. Therefore, vitamin D supplementation for the prevention of falls and fractures should be practiced with caution.

Vitamin D supplementation is no longer routinely recommended for fall and fracture prevention. However, if we believe that poor lower extremity muscle strength is a risk factor for falls,12 then the question of whether vitamin D has a beneficial role in improving lower extremity strength in older adults needs to be addressed. Results regarding the effect of vitamin D supplementation on muscle function have so far been mixed. For example, in a randomized, double-blinded, placebo-controlled trial of 160 postmenopausal women with low vitamin D level (< 20 ng/mL), vitamin D3 supplementation at 1000 IU/day for 9 months showed a significant increase in lower extremity muscle strength.13 However, in another randomized double-blinded, placebo-controlled trial of 130 men aged 65 to 90 years with low vitamin D level (< 30 ng/mL) and an SPPB score of ≤ 9 (mild-moderate limitation in mobility), daily supplementation with 4000 IU of vitamin D3 for 9 months did not result in improved SPPB score or gait speed.14 In the study reported by Shea et al, the authors showed that 800 IU of daily vitamin D supplementation (consistent with the Institute of Medicine [IOM] recommendations for older adults15) in community-dwelling older adults with vitamin D deficiency (< 20 ng/mL) did not improve lower extremity muscle strength. This finding is significant in that it adds further evidence to support the rationale against using vitamin D supplementation for the sole purpose of improving lower extremity muscle function in older adults with vitamin D deficiency.
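The SPPB threshold mentioned above (≤ 9 indicating mild-to-moderate mobility limitation) operates on a simple composite: the balance, gait-speed, and chair-stand components are each scored 0 to 4 and summed to a 0-12 total. A minimal sketch of that composite follows; the domain scores themselves come from timed cut-points not reproduced here:

```python
def sppb_total(balance: int, gait: int, chair: int) -> int:
    """Sum the three SPPB domain scores (each 0-4) into the 0-12 total."""
    for score in (balance, gait, chair):
        if not 0 <= score <= 4:
            raise ValueError("each SPPB domain score must be 0-4")
    return balance + gait + chair

def mobility_limited(total: int, cutoff: int = 9) -> bool:
    """Flag mild-to-moderate mobility limitation using the <= 9
    criterion cited in the trial discussed in the text."""
    return total <= cutoff

total = sppb_total(balance=4, gait=3, chair=2)
print(total, mobility_limited(total))  # 9 True
```

Because the total is a simple sum, a single weak domain (eg, slow chair stands) can pull an otherwise high-functioning participant under the eligibility cutoff, which is one reason trials report the component scores alongside the total.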

Notable strengths of this study include its randomized, double-blinded, placebo-controlled design testing the IOM-recommended dose of daily vitamin D supplementation for older adults. In addition, compared with some of the prior studies mentioned above, the study population included both men and women, although the final sample skewed toward male predominance. Moreover, participants were followed for a sufficient period (1 year), with excellent retention (only 3 were lost to follow-up) and a corresponding improvement in vitamin D levels. Finally, the use of the SPPB should also be commended, as it is a well-validated method for measuring lower-extremity function, with scaled scores that predict poor outcomes.16 However, limitations include the aforementioned predominance of male and Caucasian participants in both the intervention and control groups, as well as discrepancies between the methods used to measure serum vitamin D levels (ie, finger-stick cards versus clinical laboratory measurement), which may have underestimated actual serum 25(OH)D levels.


Applications for Clinical Practice

While the null findings from the study by Shea and colleagues are applicable to healthier community-dwelling older adults, they may not be generalizable to frailer older patients, given their increased fall risk and high vulnerability to adverse outcomes. Thus, further studies that account for baseline sarcopenia, frailty, and other fall-risk factors (eg, polypharmacy) are needed to better evaluate the value of vitamin D supplementation in this most vulnerable population.

Caroline Park, MD, PhD, and Fred Ko, MD
Icahn School of Medicine at Mount Sinai, New York, NY

References

1. Husu P, Suni J, Pasanen M, Miilunpalo S. Health-related fitness tests as predictors of difficulties in long-distance walking among high-functioning older adults. Aging Clin Exp Res. 2007;19:444-450.

2. Forrest KYZ, Stuhldreher WL. Prevalence and correlates of vitamin D deficiency in US adults. Nutr Res. 2011;31:48-54.

3. Bischoff-Ferrari HA, Giovannucci E, Willett WC, et al. Estimation of optimal serum concentrations of 25-hydroxyvitamin D for multiple health outcomes. Am J Clin Nutr. 2006;84:1253.

4. Simpson RU, Thomas GA, Arnold AJ. Identification of 1,25-dihydroxyvitamin-D3 receptors and activities in muscle. J Biol Chem. 1985;260:8882-8891.

5. Sorensen OH, Lund BI, Saltin B, et al. Myopathy in bone loss of aging: improvement by treatment with 1-alpha-hydroxycholecalciferol and calcium. Clin Sci. 1979;56:157-161.

6. Bischoff-Ferrari HA, Dawson-Hughes B, Willett WC, et al. Effect of vitamin D on falls - A meta-analysis. JAMA. 2004;291:1999-2006.

7. Lewis JR, Sim M, Daly RM. The vitamin D and calcium controversy: an update. Curr Opin Rheumatol. 2019;31:91-97.

8. Schwenk T. No value for routine vitamin D supplementation. NEJM Journal Watch. December 26, 2018.

9. Zhao JG, Zeng XT, Wang J, Liu L. Association between calcium or vitamin D supplementation and fracture incidence in community-dwelling older adults: a systematic review and meta-analysis. JAMA. 2017;318:2466-2482.

10. Grossman DC, Curry SJ, Owens DK, et al. Vitamin D, calcium, or combined supplementation for the primary prevention of fractures in community-dwelling adults: US Preventive Services Task Force recommendation statement. JAMA. 2018;319:1592-1599.

11. Grossman DC, Curry SJ, Owens DK, et al. Interventions to prevent falls in community-dwelling older adults: US Preventive Services Task Force recommendation statement. JAMA. 2018;319:1696-1704.

12. Tinetti ME, Speechley M, Ginter SF. Risk-factors for falls among elderly persons living in the community. N Engl J Med. 1988;319:1701-1707.

13. Cangussu LM, Nahas-Neto J, Orsatti CL, et al. Effect of vitamin D supplementation alone on muscle function in postmenopausal women: a randomized, double-blind, placebo-controlled clinical trial. Osteoporos Int. 2015;26:2413-2421.

14. Levis S, Gomez-Marin O. Vitamin D and physical function in sedentary older men. J Am Geriatr Soc. 2017;65:323-331.

15. Ross AC, Taylor CL, Yaktine AL, Del Valle HB, eds. Institute of Medicine (US) Committee to Review Dietary Reference Intakes for Vitamin D and Calcium. National Academies Press; 2011.

16. Guralnik JM, Ferrucci L, Simonsick EM, et al. Lower-extremity function in persons over the age of 70 years as a predictor of subsequent disability. N Engl J Med. 1995;332:556-561.

Issue
Journal of Clinical Outcomes Management - 26(3)
Page Number
110-112

Study Overview

Objective. To test the effect of 12 months of vitamin D supplementation on lower-extremity power and function in older community-dwelling adults screened for low serum 25-hydroxyvitamin D (25(OH)D).

Design. A single-center, double-blind, randomized placebo-controlled study in which participants were assigned to 800 IU of vitamin D3 supplementation or placebo daily and were followed over a total period of 12 months.

Setting and participants. A total of 100 community-dwelling men and women aged ≥ 60 years with serum 25(OH)D ≤ 20 ng/mL at screening participated. Participants were prescreened by phone, and were excluded if they met any of the following exclusion criteria: vitamin D supplement use > 600 IU/day (for age 60-70 years) or > 800 IU/day (for age ≥ 71 years); vitamin D injection within the previous 3 months; > 2 falls or 1 fall with injury in past year; use of cane, walker, or other indoor walking aid; history of kidney stones within past 3 years; hypercalcemia (serum calcium > 10.8 mg/dL); renal dysfunction (glomerular filtration rate, < 30 mL/min); history of liver disease, sarcoidosis, lymphoma, dysphagia, or other gastrointestinal disorder; neuromuscular disorder affecting lower-extremity function; hip replacement within the past year; cancer treatment in the past 3 years; treatment with thiazide diuretics > 37.5 mg, teriparatide, denosumab, or bisphosphonates within the past 2 years; oral steroids (for > 3 weeks in the past 6 months); and use of fat malabsorption products or anticonvulsive therapy.

Main outcome measures. The primary outcome was leg extensor power assessed using a computer-interfaced bilateral Keiser pneumatic leg press. Secondary outcomes to measure physical function included: (1) backward tandem walk test (which is an indicator of balance and postural control during movement1); (2) Short Physical Performance Battery (SPPB) testing, which includes a balance assessment (ability to stand with feet positioned normally, semi-tandem, and tandem for 10s), a timed 4-m walk, and a chair stand test (time to complete 5 repeated chair stands); (3) stair climbing (ie, time to climb 10 steps, as a measure of knee extensor strength and functional capacity); and (4) handgrip strength (using a dynamometer). Lean tissue mass was assessed by dual X-ray absorptiometry (DEXA scan). Finally, other measures included serum total 25(OH)D levels measured at baseline, 4, 8, and 12 months, as well as 24-hour urine collection for urea-nitrogen and creatinine measurements.

Main results. Of the 2289 individuals screened for the study, 100 met eligibility criteria and underwent randomization to receive either 800 IU vitamin D supplementation daily (n = 49) or placebo (n = 51). Three patients (2 in vitamin D group and 1 in placebo group) were lost to follow up. The mean age of all participants was 69.6 ± 6.9 years. In the vitamin D group versus the control group, respectively, the percent male: female ratio was 66:34 versus 63:37, and percent Caucasian was 75% versus 82%. Mean body mass index was 28.2 ± 7.0 and mean serum 25(OH)D was 20.2 ± 6.7 ng/mL. At the end of the study (12 months), 70% of participants given vitamin D supplementation had 25(OH)D levels ≥ 30 ng/mL and all participants had levels ≥ 20 ng/mL. In the placebo group, the serum 25(OH)D level was ≥ 20 ng/mL in 54% and ≥ 30 ng/mL in 6%. The mean serum 25(OH)D level increased to 32.5 ± 5.1 ng/mL in the vitamin D–supplemented group, but no significant change was found in the placebo group (treatment × time, P < 0.001). Overall, the serum 1,25 (OH)2D3 levels did not differ between the 2 groups over the intervention period (time, P = 0.49; treatment × time, P = 0.27). Dietary intake of vitamin D, calcium, nitrogen, and protein did not differ or change over time between the 2 groups. The change in leg press power, function, and strength did not differ between the groups over 12 months (all treatment × time, P values ≥ 0.60). A total of 27 falls were reported (14 in vitamin D versus 9 in control group), of which 9 were associated with injuries. There was no significant change in lean body mass at the end of the study period in either group (treatment × time, P = 0.98).

Conclusion. In community-dwelling older adults with vitamin D deficiency (≤ 20 ng/mL), 12-month daily supplementation with 800 IU of vitamin D3 resulted in sufficient increases in serum 25(OH)D levels, but did not improve lower-extremity power, strength, or lean mass.

Commentary

Vitamin D deficiency is common in older adults (prevalence of about 41% in US adults ≥ 65 years old, according to Forrest et al2) and is likely due to dietary deficiency, reduced sun exposure (lifestyle), and decreased intestinal calcium absorption. As such, vitamin D deficiency has historically been a topic of debate and of interest in geriatric medicine, as it relates to muscle weakness, which in turn leads to increased susceptibility to falls.3 Interestingly, vitamin D receptors are expressed in human skeletal muscle,4 and in one study, 3-month supplementation of vitamin D led to an increase in type II skeletal muscle fibers in older women.5 Similarly, results from a meta-analysis of 5 randomized controlled trials (RCTs)6 showed that vitamin D supplementation may reduce fall risk in older adults by 22% (corrected odds ratio, 0.78; 95% confidence interval, 0.64-0.92). Thus, in keeping with this general theme of vitamin D supplementation yielding beneficial effects in clinical outcomes, clinicians have long accepted and practiced routine vitamin D supplementation in caring for older adults.

 

 

In more recent years, the role of vitamin D supplementation in primary care has become controversial,7 as observed in a recent paradigm shift of moving away from routine supplementation for fall and fracture prevention in clinical practice.8 In a recent meta-analysis of 33 RCTs in older community-dwelling adults, supplementation with vitamin D with or without calcium did not result in a reduction of hip fracture or total number of fractures.9 Moreover, the United States Preventive Services Task Force (USPSTF) recently published updated recommendations on the use of vitamin D supplementation for primary prevention of fractures10 and prevention of falls11 in community-dwelling adults. In these updated recommendations, the USPSTF indicated that insufficient evidence exists to recommend vitamin D supplementation to prevent fractures in men and premenopausal women, and recommends against vitamin D supplementation for prevention of falls. Finally, USPSTF recommends against low-dose vitamin D (400 IU or less) supplementation for primary prevention of fractures in community-dwelling, postmenopausal women.10 Nevertheless, these statements are not applicable for individuals with a prior history of osteoporotic fractures, increased risk of falls, or a diagnosis of vitamin D deficiency or osteoporosis. Therefore, vitamin D supplementation for prevention of fall and fractures should be practiced with caution.

Vitamin D supplementation is no longer routinely recommended for fall and fracture prevention. However, if we believe that poor lower extremity muscle strength is a risk factor for falls,12 then the question of whether vitamin D has a beneficial role in improving lower extremity strength in older adults needs to be addressed. Results regarding the effect of vitamin D supplementation on muscle function have so far been mixed. For example, in a randomized, double-blinded, placebo-controlled trial of 160 postmenopausal women with low vitamin D level (< 20 ng/mL), vitamin D3 supplementation at 1000 IU/day for 9 months showed a significant increase in lower extremity muscle strength.13 However, in another randomized double-blinded, placebo-controlled trial of 130 men aged 65 to 90 years with low vitamin D level (< 30 ng/mL) and an SPPB score of ≤ 9 (mild-moderate limitation in mobility), daily supplementation with 4000 IU of vitamin D3 for 9 months did not result in improved SPPB score or gait speed.14 In the study reported by Shea et al, the authors showed that 800 IU of daily vitamin D supplementation (consistent with the Institute of Medicine [IOM] recommendations for older adults15) in community-dwelling older adults with vitamin D deficiency (< 20 ng/mL) did not improve lower extremity muscle strength. This finding is significant in that it adds further evidence to support the rationale against using vitamin D supplementation for the sole purpose of improving lower extremity muscle function in older adults with vitamin D deficiency.

Valuable strengths of this study include its randomized, double-blinded, placebo-controlled trial design testing the IOM recommended dose of daily vitamin D supplementation for older adults. In addition, compared to some of the prior studies mentioned above, the study population included both males and females, although the final study population resulted in some gender bias (with male predominance). Moreover, participants were followed for a sufficient amount of time (1 year), with an excellent adherence rate (only 3 were lost to follow-up) and with corresponding improvement in vitamin D levels. Finally, the use of SPPB as a readout for primary outcome should also be commended, as this assessment is a well-validated method for measuring lower extremity function with scaled scores that predict poor outcomes.16 However, some limitations include the aforementioned predominance of male participants and Caucasian race in both intervention and control groups, as well as discrepancies between the measurement methods for serum vitamin D levels (ie, finger-stick cards versus clinical lab measurement) that may have underestimated the actual serum 25(OH)D levels.

 

Applications for Clinical Practice

While the null findings from the Shea and colleagues study are applicable to healthier community-dwelling older adults, they may not be generalizable to the care of more frail older patients due to their increased risks for falls and high vulnerability to adverse outcomes. Thus, further studies that account for baseline sarcopenia, frailty, and other fall-risk factors (eg, polypharmacy) are needed to better evaluate the value of vitamin D supplementation in this most vulnerable population.

Caroline Park, MD, PhD, and Fred Ko, MD
Icahn School of Medicine at Mount Sinai, New York, NY

Study Overview

Objective. To test the effect of 12 months of vitamin D supplementation on lower-extremity power and function in older community-dwelling adults screened for low serum 25-hydroxyvitamin D (25(OH)D).

Design. A single-center, double-blind, randomized placebo-controlled study in which participants were assigned to 800 IU of vitamin D3 supplementation or placebo daily and were followed over a total period of 12 months.

Setting and participants. A total of 100 community-dwelling men and women aged ≥ 60 years with serum 25(OH)D ≤ 20 ng/mL at screening participated. Participants were prescreened by phone, and were excluded if they met any of the following exclusion criteria: vitamin D supplement use > 600 IU/day (for age 60-70 years) or > 800 IU/day (for age ≥ 71 years); vitamin D injection within the previous 3 months; > 2 falls or 1 fall with injury in past year; use of cane, walker, or other indoor walking aid; history of kidney stones within past 3 years; hypercalcemia (serum calcium > 10.8 mg/dL); renal dysfunction (glomerular filtration rate, < 30 mL/min); history of liver disease, sarcoidosis, lymphoma, dysphagia, or other gastrointestinal disorder; neuromuscular disorder affecting lower-extremity function; hip replacement within the past year; cancer treatment in the past 3 years; treatment with thiazide diuretics > 37.5 mg, teriparatide, denosumab, or bisphosphonates within the past 2 years; oral steroids (for > 3 weeks in the past 6 months); and use of fat malabsorption products or anticonvulsive therapy.

Main outcome measures. The primary outcome was leg extensor power assessed using a computer-interfaced bilateral Keiser pneumatic leg press. Secondary outcomes to measure physical function included: (1) backward tandem walk test (which is an indicator of balance and postural control during movement1); (2) Short Physical Performance Battery (SPPB) testing, which includes a balance assessment (ability to stand with feet positioned normally, semi-tandem, and tandem for 10s), a timed 4-m walk, and a chair stand test (time to complete 5 repeated chair stands); (3) stair climbing (ie, time to climb 10 steps, as a measure of knee extensor strength and functional capacity); and (4) handgrip strength (using a dynamometer). Lean tissue mass was assessed by dual X-ray absorptiometry (DEXA scan). Finally, other measures included serum total 25(OH)D levels measured at baseline, 4, 8, and 12 months, as well as 24-hour urine collection for urea-nitrogen and creatinine measurements.

Main results. Of the 2289 individuals screened for the study, 100 met eligibility criteria and underwent randomization to receive either 800 IU vitamin D supplementation daily (n = 49) or placebo (n = 51). Three patients (2 in vitamin D group and 1 in placebo group) were lost to follow up. The mean age of all participants was 69.6 ± 6.9 years. In the vitamin D group versus the control group, respectively, the percent male: female ratio was 66:34 versus 63:37, and percent Caucasian was 75% versus 82%. Mean body mass index was 28.2 ± 7.0 and mean serum 25(OH)D was 20.2 ± 6.7 ng/mL. At the end of the study (12 months), 70% of participants given vitamin D supplementation had 25(OH)D levels ≥ 30 ng/mL and all participants had levels ≥ 20 ng/mL. In the placebo group, the serum 25(OH)D level was ≥ 20 ng/mL in 54% and ≥ 30 ng/mL in 6%. The mean serum 25(OH)D level increased to 32.5 ± 5.1 ng/mL in the vitamin D–supplemented group, but no significant change was found in the placebo group (treatment × time, P < 0.001). Overall, the serum 1,25 (OH)2D3 levels did not differ between the 2 groups over the intervention period (time, P = 0.49; treatment × time, P = 0.27). Dietary intake of vitamin D, calcium, nitrogen, and protein did not differ or change over time between the 2 groups. The change in leg press power, function, and strength did not differ between the groups over 12 months (all treatment × time, P values ≥ 0.60). A total of 27 falls were reported (14 in vitamin D versus 9 in control group), of which 9 were associated with injuries. There was no significant change in lean body mass at the end of the study period in either group (treatment × time, P = 0.98).

Conclusion. In community-dwelling older adults with vitamin D deficiency (≤ 20 ng/mL), 12-month daily supplementation with 800 IU of vitamin D3 resulted in sufficient increases in serum 25(OH)D levels, but did not improve lower-extremity power, strength, or lean mass.

Commentary

Vitamin D deficiency is common in older adults (prevalence of about 41% in US adults ≥ 65 years old, according to Forrest et al2) and is likely due to dietary deficiency, reduced sun exposure (lifestyle), and decreased intestinal calcium absorption. As such, vitamin D deficiency has historically been a topic of debate and of interest in geriatric medicine, as it relates to muscle weakness, which in turn leads to increased susceptibility to falls.3 Interestingly, vitamin D receptors are expressed in human skeletal muscle,4 and in one study, 3-month supplementation of vitamin D led to an increase in type II skeletal muscle fibers in older women.5 Similarly, results from a meta-analysis of 5 randomized controlled trials (RCTs)6 showed that vitamin D supplementation may reduce fall risk in older adults by 22% (corrected odds ratio, 0.78; 95% confidence interval, 0.64-0.92). Thus, in keeping with this general theme of vitamin D supplementation yielding beneficial effects in clinical outcomes, clinicians have long accepted and practiced routine vitamin D supplementation in caring for older adults.

 

 

In more recent years, the role of vitamin D supplementation in primary care has become controversial,7 as observed in a recent paradigm shift of moving away from routine supplementation for fall and fracture prevention in clinical practice.8 In a recent meta-analysis of 33 RCTs in older community-dwelling adults, supplementation with vitamin D with or without calcium did not result in a reduction of hip fracture or total number of fractures.9 Moreover, the United States Preventive Services Task Force (USPSTF) recently published updated recommendations on the use of vitamin D supplementation for primary prevention of fractures10 and prevention of falls11 in community-dwelling adults. In these updated recommendations, the USPSTF indicated that insufficient evidence exists to recommend vitamin D supplementation to prevent fractures in men and premenopausal women, and recommends against vitamin D supplementation for prevention of falls. Finally, USPSTF recommends against low-dose vitamin D (400 IU or less) supplementation for primary prevention of fractures in community-dwelling, postmenopausal women.10 Nevertheless, these statements are not applicable for individuals with a prior history of osteoporotic fractures, increased risk of falls, or a diagnosis of vitamin D deficiency or osteoporosis. Therefore, vitamin D supplementation for prevention of fall and fractures should be practiced with caution.

Vitamin D supplementation is no longer routinely recommended for fall and fracture prevention. However, if we believe that poor lower extremity muscle strength is a risk factor for falls,12 then the question of whether vitamin D has a beneficial role in improving lower extremity strength in older adults needs to be addressed. Results regarding the effect of vitamin D supplementation on muscle function have so far been mixed. For example, in a randomized, double-blinded, placebo-controlled trial of 160 postmenopausal women with low vitamin D level (< 20 ng/mL), vitamin D3 supplementation at 1000 IU/day for 9 months showed a significant increase in lower extremity muscle strength.13 However, in another randomized double-blinded, placebo-controlled trial of 130 men aged 65 to 90 years with low vitamin D level (< 30 ng/mL) and an SPPB score of ≤ 9 (mild-moderate limitation in mobility), daily supplementation with 4000 IU of vitamin D3 for 9 months did not result in improved SPPB score or gait speed.14 In the study reported by Shea et al, the authors showed that 800 IU of daily vitamin D supplementation (consistent with the Institute of Medicine [IOM] recommendations for older adults15) in community-dwelling older adults with vitamin D deficiency (< 20 ng/mL) did not improve lower extremity muscle strength. This finding is significant in that it adds further evidence to support the rationale against using vitamin D supplementation for the sole purpose of improving lower extremity muscle function in older adults with vitamin D deficiency.

Valuable strengths of this study include its randomized, double-blinded, placebo-controlled design testing the IOM-recommended dose of daily vitamin D supplementation for older adults. In addition, compared with some of the prior studies mentioned above, the study population included both men and women, although the final study population was predominantly male. Moreover, participants were followed for a sufficient amount of time (1 year), with excellent adherence (only 3 were lost to follow-up) and a corresponding improvement in vitamin D levels. Finally, the use of the SPPB as the primary outcome measure should also be commended, as it is a well-validated instrument for measuring lower extremity function, with scaled scores that predict poor outcomes.16 Limitations include the aforementioned predominance of male and Caucasian participants in both the intervention and control groups, as well as discrepancies between the methods used to measure serum vitamin D (ie, finger-stick cards versus clinical laboratory measurement), which may have underestimated actual serum 25(OH)D levels.
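The SPPB discussed above combines three component tests (standing balance, 4-meter gait speed, and repeated chair stands), each scored 0 to 4, into a 0-12 total. A rough sketch of the scoring arithmetic, with the ≤ 9 mobility-limitation cutoff used in the trial by Levis and Gomez-Marin, might look like this; the function names are illustrative, and the per-component timing-to-points rules from the SPPB manual are deliberately omitted:

```python
def sppb_total(balance: int, gait: int, chair: int) -> int:
    """Sum the three SPPB component scores (each 0-4) into the 0-12 total.
    How raw times map to component points is defined in the SPPB protocol
    and is not reproduced here."""
    for score in (balance, gait, chair):
        if not 0 <= score <= 4:
            raise ValueError("each SPPB component is scored 0-4")
    return balance + gait + chair

def has_mobility_limitation(total: int) -> bool:
    """Cutoff used in the trial cited above: total <= 9 indicates
    mild-to-moderate mobility limitation."""
    return total <= 9

print(sppb_total(4, 3, 2))            # 9
print(has_mobility_limitation(9))     # True
```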


Applications for Clinical Practice

While the null findings from the study by Shea and colleagues are applicable to healthier community-dwelling older adults, they may not generalize to frailer patients, who have higher fall risk and greater vulnerability to adverse outcomes. Further studies that account for baseline sarcopenia, frailty, and other fall-risk factors (eg, polypharmacy) are therefore needed to better evaluate the value of vitamin D supplementation in this most vulnerable population.

Caroline Park, MD, PhD, and Fred Ko, MD
Icahn School of Medicine at Mount Sinai, New York, NY

References

1. Husu P, Suni J, Pasanen M, Miilunpalo S. Health-related fitness tests as predictors of difficulties in long-distance walking among high-functioning older adults. Aging Clin Exp Res. 2007;19:444-450.

2. Forrest KYZ, Stuhldreher WL. Prevalence and correlates of vitamin D deficiency in US adults. Nutr Res. 2011;31:48-54.

3. Bischoff-Ferrari HA, Giovannucci E, Willett WC, et al. Estimation of optimal serum concentrations of 25-hydroxyvitamin D for multiple health outcomes. Am J Clin Nutr. 2006;84:1253.

4. Simpson RU, Thomas GA, Arnold AJ. Identification of 1,25-dihydroxyvitamin-D3 receptors and activities in muscle. J Biol Chem. 1985;260:8882-8891.

5. Sorensen OH, Lund BI, Saltin B, et al. Myopathy in bone loss of aging: improvement by treatment with 1-alpha-hydroxycholecalciferol and calcium. Clin Sci. 1979;56:157-161.

6. Bischoff-Ferrari HA, Dawson-Hughes B, Willett WC, et al. Effect of vitamin D on falls - A meta-analysis. JAMA. 2004;291:1999-2006.

7. Lewis JR, Sim M, Daly RM. The vitamin D and calcium controversy: an update. Curr Opin Rheumatol. 2019;31:91-97.

8. Schwenk T. No value for routine vitamin D supplementation. NEJM Journal Watch. December 26, 2018.

9. Zhao JG, Zeng XT, Wang J, Liu L. Association between calcium or vitamin D supplementation and fracture incidence in community-dwelling older adults: a systematic review and meta-analysis. JAMA. 2017;318:2466-2482.

10. Grossman DC, Curry SJ, Owens DK, et al. Vitamin D, calcium, or combined supplementation for the primary prevention of fractures in community-dwelling adults: US Preventive Services Task Force recommendation statement. JAMA. 2018;319:1592-1599.

11. Grossman DC, Curry SJ, Owens DK, et al. Interventions to prevent falls in community-dwelling older adults: US Preventive Services Task Force recommendation statement. JAMA. 2018;319:1696-1704.

12. Tinetti ME, Speechley M, Ginter SF. Risk-factors for falls among elderly persons living in the community. N Engl J Med. 1988;319:1701-1707.

13. Cangussu LM, Nahas-Neto J, Orsatti CL, et al. Effect of vitamin D supplementation alone on muscle function in postmenopausal women: a randomized, double-blind, placebo-controlled clinical trial. Osteoporos Int. 2015;26:2413-2421.

14. Levis S, Gomez-Marin O. Vitamin D and physical function in sedentary older men. J Am Geriatr Soc. 2017;65:323-331.

15. Ross AC, Taylor CL, Yaktine AL, Del Valle HB, eds; Institute of Medicine (US) Committee to Review Dietary Reference Intakes for Vitamin D and Calcium. Dietary Reference Intakes for Calcium and Vitamin D. Washington, DC: National Academies Press; 2011.

16. Guralnik JM, Ferrucci L, Simonsick EM, et al. Lower-extremity function in persons over the age of 70 years as a predictor of subsequent disability. N Engl J Med. 1995;332:556-561.

Issue
Journal of Clinical Outcomes Management - 26(3)
Page Number
110-112
Display Headline
Does Vitamin D Supplementation Improve Lower Extremity Power and Function in Community-Dwelling Older Adults?

Once-Daily 2-Drug versus 3-Drug Antiretroviral Therapy for HIV Infection in Treatment-naive Adults: Less Is Best?

Article Type
Changed
Thu, 04/23/2020 - 15:01

Study Overview

Objective. To evaluate the efficacy and safety of a once-daily 2-drug antiretroviral (ARV) regimen, dolutegravir plus lamivudine, for the treatment of HIV-1 infection in adults naive to antiretroviral therapy (ART).

Design. GEMINI-1 and GEMINI-2 were 2 identically designed multicenter, double-blind, randomized, noninferiority, phase 3 clinical trials conducted between July 18, 2016, and March 31, 2017. Participants were randomly assigned to 1 of 2 once-daily HIV regimens: the study regimen, consisting of dolutegravir 50 mg plus lamivudine 300 mg, or the standard-of-care regimen, consisting of dolutegravir 50 mg plus tenofovir disoproxil fumarate (TDF) 300 mg plus emtricitabine 200 mg. While this article presents results at week 48, both trials are scheduled to follow participants through week 148 to evaluate long-term efficacy and safety.

Setting and participants. Eligible participants had to be aged 18 years or older with treatment-naive HIV-1 infection. Women were eligible only if they were not pregnant, not lactating, and not of reproductive potential, the last defined by various means, including tubal ligation, hysterectomy, postmenopausal status, and use of highly effective contraception. Initially, eligibility screening restricted participation to those with viral loads between 1000 and 100,000 copies/mL. However, the upper limit was later increased to 500,000 copies/mL based on an independent review of results from other clinical trials1,2 evaluating dual therapy with dolutegravir and lamivudine, which indicated efficacy in patients with viral loads up to 500,000 copies/mL.3-5

Notable exclusion criteria included: (1) major mutations conferring resistance to nucleoside reverse transcriptase inhibitors, non-nucleoside reverse transcriptase inhibitors, or protease inhibitors; (2) evidence of hepatitis B infection; (3) hepatitis C infection with anticipated initiation of treatment within 48 weeks of study enrollment; and (4) stage 3 HIV disease, per Centers for Disease Control and Prevention criteria, with the exception of cutaneous Kaposi sarcoma and CD4 cell counts < 200 cells/µL.

Main outcome measures. The primary endpoint was demonstration of noninferiority of the 2-drug ARV regimen through assessment of the proportion of participants who achieved virologic suppression at week 48 in the intent-to-treat-exposed population. For the purposes of this study, virologic suppression was defined as having fewer than 50 copies of HIV-1 RNA per mL at week 48. For evaluation of safety and toxicity concerns, renal and bone biomarkers were assessed at study entry and at weeks 24 and 48. In addition, participants who met virological withdrawal criteria were evaluated for integrase strand transfer inhibitor mutations. Virological withdrawal was defined as the presence of 1 of the following: (1) HIV RNA > 200 copies/mL at week 24, (2) HIV RNA > 200 copies/mL after previous HIV RNA < 200 copies/mL (confirmed rebound), and (3) a < 1 log10 copies/mL decrease from baseline (unless already < 200 copies/mL).
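As a sketch, the three virological-withdrawal criteria above can be expressed as a single predicate. The function name and data shapes below are hypothetical, not from the study protocol; `viral_loads` maps study week to HIV-1 RNA in copies/mL, and the latest week is treated as the current visit.

```python
import math

def meets_virological_withdrawal(baseline: float,
                                 viral_loads: dict[int, float]) -> bool:
    """Return True if any of the protocol's virological-withdrawal
    criteria is met (names and data shapes are illustrative)."""
    current_week = max(viral_loads)
    current = viral_loads[current_week]
    # (1) HIV RNA > 200 copies/mL at week 24
    if viral_loads.get(24, 0) > 200:
        return True
    # (2) confirmed rebound: > 200 copies/mL after any prior value < 200
    prior = [v for wk, v in viral_loads.items() if wk < current_week]
    if current > 200 and any(v < 200 for v in prior):
        return True
    # (3) decrease from baseline of < 1 log10, unless already < 200 copies/mL
    if current >= 200 and math.log10(baseline) - math.log10(current) < 1:
        return True
    return False
```

For example, a participant starting at 100,000 copies/mL whose week-4 value is 60,000 copies/mL has fallen less than 1 log10 and would meet criterion 3, while one suppressed to 50 copies/mL at week 24 meets none of the criteria.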

Main results. GEMINI-1 and GEMINI-2 randomized a combined total of 1441 participants to receive either the once-daily 2-drug ARV regimen (dolutegravir and lamivudine, n = 719) or the once-daily 3-drug ARV regimen (dolutegravir, TDF, and emtricitabine, n = 722). Of the 533 participants who did not meet inclusion criteria, the predominant reasons for exclusion were either having preexisting major viral resistance mutations (n = 246) or viral loads outside the range of 1000 to 500,000 copies/mL (n = 133).

Baseline demographic and clinical characteristics were similar between both groups. The median age was 33 years (10% were over 50 years of age), and participants were mostly male (85%) and white (68%). Baseline HIV RNA counts of > 100,000 copies/mL were found in 293 participants (20%), and 188 (8%) participants had CD4 counts of ≤ 200 cells/µL.


Noninferiority of the once-daily 2-drug versus the once-daily 3-drug ARV regimen was demonstrated in both the GEMINI-1 and GEMINI-2 trials for the intent-to-treat-exposed population. In GEMINI-1, 90% (n = 320) in the 2-drug ARV group achieved virologic suppression at week 48 compared to 93% (n = 332) in the 3-drug ARV group (no statistically significant difference). In GEMINI-2, 93% (n = 335) in the 2-drug ARV group achieved virologic suppression at week 48 compared to 94% (n = 337) in the 3-drug ARV group (no statistically significant difference).
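To illustrate how such a noninferiority comparison is assessed, the sketch below computes a Wald 95% confidence interval for the difference in suppression rates and checks its lower bound against a -10% margin. Both the margin and the denominators (back-calculated from the reported numerators and percentages) are assumptions for illustration, not figures taken from the trial reports.

```python
import math

def noninferior(x1: int, n1: int, x2: int, n2: int,
                margin: float = -0.10, z: float = 1.96):
    """Wald 95% CI for p1 - p2 (2-drug minus 3-drug response rate).
    Noninferior if the CI's lower bound exceeds the margin
    (-10% is assumed here, a conventional choice in HIV trials)."""
    p1, p2 = x1 / n1, x2 / n2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    diff = p1 - p2
    lower, upper = diff - z * se, diff + z * se
    return lower > margin, (lower, upper)

# GEMINI-1 responders: 320 and 332; denominators of ~356 and ~357 are
# back-calculated from the reported 90% and 93% and are approximate.
ok, (lo, hi) = noninferior(320, 356, 332, 357)
print(ok)   # True: the lower bound is above -10%
```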

A subgroup analysis found no significant impact of baseline HIV RNA (> 100,000 compared to ≤ 100,000 copies/mL) on achieving virologic suppression at week 48. However, a subgroup analysis did find that participants with CD4 counts ≤ 200 cells/µL had a lower rate of virologic response at week 48 with the once-daily 2-drug regimen than with the 3-drug regimen (79% vs 93%, respectively).

Overall, 10 participants met virological withdrawal criteria during the study period, and 4 of these were on the 2-drug ARV regimen. For these 10 participants, genotypic testing did not find emergence of resistance to either nucleoside reverse transcriptase or integrase strand transfer inhibitors.

Regarding renal biomarkers, increases in both serum creatinine and the urine protein-to-creatinine ratio were significantly greater in the 3-drug ARV group. Biomarkers of increased bone turnover were elevated in both groups, but the degree of elevation was significantly lower in the 2-drug ARV cohort. It is unclear whether these findings reflect a difference in the risk of developing osteopenia or osteoporosis between the 2 study groups.

Conclusion. The once-daily 2-drug ARV regimen of dolutegravir and lamivudine is noninferior to the guideline-recommended once-daily 3-drug regimen of dolutegravir, TDF, and emtricitabine for achieving viral suppression in ART-naive HIV-1-infected individuals with HIV RNA counts < 500,000 copies/mL. However, the efficacy of this regimen may be compromised in individuals with CD4 counts < 200 cells/µL.


Commentary

Currently, the mainstay of HIV pharmacotherapy is a 3-drug regimen consisting of 2 nucleoside reverse transcriptase inhibitors in combination with 1 drug from another class, with an integrase strand transfer inhibitor being the preferred third drug.6 Despite the improved tolerability of contemporary ARVs, there remains concern among HIV practitioners regarding potential toxicities associated with cumulative drug exposure, specifically related to nucleoside reverse transcriptase inhibitors. As a result, there has been much interest in evaluating 2-drug ARV regimens for HIV treatment in order to reduce overall drug exposure.7-10

The 48-week results of the GEMINI-1 and GEMINI-2 trials, published in early 2019, further expand our understanding regarding the efficacy and safety of 2-drug regimens in HIV treatment. These identically designed studies evaluated once-daily dolutegravir and lamivudine for HIV in a treatment-naive population. This goes a step further than the SWORD-1 and SWORD-2 trials, which evaluated once-daily dolutegravir and rilpivirine as a step-down therapy for virologically suppressed individuals and led to the U.S. Food and Drug Administration (FDA) approval of the single-tablet combination regimen dolutegravir/rilpivirine (Juluca).10 Therefore, whereas the SWORD trials evaluated a 2-drug regimen for maintenance of virologic suppression, the GEMINI trials assessed whether a 2-drug regimen can both achieve and maintain virologic suppression.

The results of the GEMINI trials are promising for a future direction in HIV care. The rates of virologic suppression achieved in these trials are comparable to those seen in the SWORD trials.10 Furthermore, the virologic response seen in the GEMINI trials is comparable to that seen in similar trials that evaluated a 3-drug ARV regimen consisting of an integrase strand transfer inhibitor–based backbone in ART-naive individuals.11,12

A major limitation of the trial design was the inclusion of TDF in the comparator arm, an agent already demonstrated to have detrimental effects on both renal and bone health.13,14 Additionally, the bone biomarker results were inconclusive; the agents' effects on bone would have been better demonstrated through bone mineral density testing, as had been done in prior trials.

Applications for Clinical Practice

Given the recent FDA approval of the single-tablet combination regimen dolutegravir and lamivudine (Dovato), this once-daily 2-drug ARV regimen will begin making its way into clinical practice for certain patients. Prior to starting this regimen, hepatitis B infection must first be ruled out, given the poor efficacy of lamivudine monotherapy for management of chronic hepatitis B infection.16 Additionally, baseline genotype testing should be performed before starting this ART, given that approximately 10% of newly diagnosed HIV patients have baseline resistance mutations.15 Obtaining rapid genotype testing may be difficult in low-resource settings where such testing is not readily available. Finally, this approach may not be applicable to those presenting with acute HIV infection, in whom viral loads are often in the millions of copies per mL. It is likely that dolutegravir/lamivudine could assume a role similar to that of dolutegravir/rilpivirine, in which patients who present with acute HIV step down to a 2-drug regimen once their viral loads have either dropped below 500,000 copies/mL or been fully suppressed.
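The screening steps described above can be condensed into a simple pre-initiation checklist. This is a hypothetical illustration of the logic, not a clinical decision tool; the parameter names are invented.

```python
# Hypothetical checklist before starting dolutegravir/lamivudine,
# condensing the screening considerations described in the text.

def dtg_3tc_candidate(hbv_surface_antigen_positive: bool,
                      baseline_resistance_mutations: bool,
                      viral_load_copies_per_ml: float) -> bool:
    """Return True only if none of the described contraindications apply."""
    if hbv_surface_antigen_positive:
        return False   # lamivudine alone is inadequate therapy for chronic HBV
    if baseline_resistance_mutations:
        return False   # ~10% of new diagnoses carry transmitted resistance
    if viral_load_copies_per_ml >= 500_000:
        return False   # efficacy shown only below 500,000 copies/mL
    return True
```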

—Evan K. Mallory, PharmD, Banner-University Medical Center Tucson, and Norman L. Beatty, MD, University of Arizona College of Medicine, Tucson, AZ

References

1. Cahn P, Rolón MJ, Figueroa MI, et al. Dolutegravir-lamivudine as initial therapy in HIV-1 infected, ARV-naive patients, 48-week results of the PADDLE (Pilot Antiretroviral Design with Dolutegravir LamivudinE) study. J Int AIDS Soc. 2017;20:21678.

2. Taiwo BO, Zheng L, Stefanescu A, et al. ACTG A5353: a pilot study of dolutegravir plus lamivudine for initial treatment of human immunodeficiency virus-1 (HIV-1)-infected participants with HIV-1 RNA < 500,000 copies/mL. Clin Infect Dis. 2018;66:1689-1697.

3. Min S, Sloan L, DeJesus E, et al. Antiviral activity, safety, and pharmacokinetics/pharmacodynamics of dolutegravir as 10-day monotherapy in HIV-1-infected adults. AIDS. 2011;25:1737-1745.

4. Eron JJ, Benoit SL, Jemsek J, et al. Treatment with lamivudine, zidovudine, or both in HIV-positive patients with 200 to 500 CD4+ cells per cubic millimeter. North American HIV Working Party. N Engl J Med. 1995;333:1662-1669.

5. Kuritzkes DR, Quinn JB, Benoit SL, et al. Drug resistance and virologic response in NUCA 3001, a randomized trial of lamivudine (3TC) versus zidovudine (ZDV) versus ZDV plus 3TC in previously untreated patients. AIDS. 1996;10:975-981.

6. Department of Health and Human Services. Panel on Antiretroviral Guidelines for Adults and Adolescents. Guidelines for the use of antiretroviral agents in adults and adolescents living with HIV. http://aidsinfo.nih.gov/contentfiles/lvguidelines/AdultandAdolescentGL.pdf. Accessed April 1, 2019.

7. Riddler SA, Haubrich R, DiRienzo AG, et al. Class-sparing regimens for initial treatment of HIV-1 infection. N Engl J Med. 2008;358:2095-2106.

8. Reynes J, Lawal A, Pulido F, et al. Examination of noninferiority, safety, and tolerability of lopinavir/ritonavir and raltegravir compared with lopinavir/ritonavir and tenofovir/emtricitabine in antiretroviral-naïve subjects: the progress study, 48-week results. HIV Clin Trials. 2011;12:255-267.

9. Cahn P, Andrade-Villanueva J, Arribas JR, et al. Dual therapy with lopinavir and ritonavir plus lamivudine versus triple therapy with lopinavir and ritonavir plus two nucleoside reverse transcriptase inhibitors in antiretroviral-therapy-naive adults with HIV-1 infection: 48 week results of the randomised, open label, non-inferiority GARDEL trial. Lancet Infect Dis. 2014;14:572-580.

10. Llibre JM, Hung CC, Brinson C, et al. Efficacy, safety, and tolerability of dolutegravir-rilpivirine for the maintenance of virological suppression in adults with HIV-1: phase 3, randomised, non-inferiority SWORD-1 and SWORD-2 studies. Lancet. 2018;391:839-849.

11. Walmsley SL, Antela A, Clumeck N, et al. Dolutegravir plus abacavir-lamivudine for the treatment of HIV-1 infection. N Engl J Med. 2013;369:1807-1818.

12. Sax PE, Wohl D, Yin MT, et al. Tenofovir alafenamide versus tenofovir disoproxil fumarate, coformulated with elvitegravir, cobicistat, and emtricitabine, for initial treatment of HIV-1 infection: two randomised, double-blind, phase 3, non-inferiority trials. Lancet. 2015;385:2606-2615.

13. Mulligan K, Glidden DV, Anderson PL, et al. Effects of emtricitabine/tenofovir on bone mineral density in HIV-negative persons in a randomized, double-blind, placebo-controlled trial. Clin Infect Dis. 2015;61:572-580.

14. Cooper RD, Wiebe N, Smith N, et al. Systematic review and meta-analysis: renal safety of tenofovir disoproxil fumarate in HIV-infected patients. Clin Infect Dis. 2010;51:496-505.

15. Kim D, Wheeler W, Ziebell R, et al. Prevalence of antiretroviral drug resistance among newly diagnosed HIV-1 infected persons, United States, 2007. 17th Conference on Retroviruses & Opportunistic Infections; San Francisco, CA: 2010. Feb 16-19. Abstract 580.

16. Terrault NA, Lok ASF, McMahon BJ, et al. Update on prevention, diagnosis, and treatment of chronic hepatitis B: AASLD 2018 hepatitis B guidance. Hepatology. 2018;67:1560-1599.

Issue
Journal of Clinical Outcomes Management - 26(3)
Page Number
105-108

Study Overview

Objective. To evaluate the efficacy and safety of a once-daily 2-drug antiretroviral (ARV) regimen, dolutegravir plus lamivudine, for the treatment of HIV-1 infection in adults naive to antiretroviral therapy (ART).

Design. GEMINI-1 and GEMINI-2 were 2 identically designed multicenter, double-blind, randomized, noninferiority, phase 3 clinical trials conducted between July 18, 2016 and March 31, 2017. Participants were stratified to receive 1 of 2 once-daily HIV regimens: the study regimen, consisting of once-daily dolutegravir 50 mg plus lamivudine 300 mg, or the standard-of-care regimen, consisting of once-daily dolutegravir 50 mg plus tenofovir disoproxil fumarate (TDF) 300 mg plus emtricitabine 200 mg. While this article presents results at week 48, both trials are scheduled to evaluate participants up to week 148 in an attempt to evaluate long-term efficacy and safety.

Setting and participants. Eligible participants had to be aged 18 years or older with treatment-naive HIV-1 infection. Women were eligible if they were not (1) pregnant, (2) lactating, or (3) of reproductive potential, defined by various means, including tubal ligation, hysterectomy, postmenopausal, and the use of highly effective contraception. Initially, eligibility screening restricted participation to those with viral loads between 1000 and 100,000 copies/mL. However, the upper limit was later increased to 500,000 copies/mL based on an independent review of results from other clinical trials1,2 evaluating dual therapy with dolutegravir and lamivudine, which indicated efficacy in patients with viral loads up to 500,000.3-5

Notable exclusion criteria included: (1) major mutations to nucleoside reverse transcriptase inhibitors, non-nucleoside reverse transcriptase inhibitors, and protease inhibitors; (2) evidence of hepatitis B infection; (3) hepatitis C infection with anticipation of initiating treatment within 48 weeks of study enrollment; and (4) stage 3 HIV disease, per Centers for Disease Control and Prevention criteria, with the exception of cutaneous Kaposi sarcoma and CD4 cell counts < 200 cells/mL.

Main outcome measures. The primary endpoint was demonstration of noninferiority of the 2-drug ARV regimen through assessment of the proportion of participants who achieved virologic suppression at week 48 in the intent-to-treat-exposed population. For the purposes of this study, virologic suppression was defined as having fewer than 50 copies of HIV-1 RNA per mL at week 48. For evaluation of safety and toxicity concerns, renal and bone biomarkers were assessed at study entry and at weeks 24 and 48. In addition, participants who met virological withdrawal criteria were evaluated for integrase strand transfer inhibitor mutations. Virological withdrawal was defined as the presence of 1 of the following: (1) HIV RNA > 200 copies/mL at week 24, (2) HIV RNA > 200 copies/mL after previous HIV RNA < 200 copies/mL (confirmed rebound), and (3) a < 1 log10 copies/mL decrease from baseline (unless already < 200 copies/mL).

Main results. GEMINI-1 and GEMINI-2 randomized a combined total of 1441 participants to receive either the once-daily 2-drug ARV regimen (dolutegravir and lamivudine, n = 719) or the once-daily 3-drug ARV regimen (dolutegravir, TDF, and emtricitabine, n = 722). Of the 533 participants who did not meet inclusion criteria, the predominant reasons for exclusion were either having preexisting major viral resistance mutations (n = 246) or viral loads outside the range of 1000 to 500,000 copies/mL (n = 133).

Baseline demographic and clinical characteristics were similar between both groups. The median age was 33 years (10% were over 50 years of age), and participants were mostly male (85%) and white (68%). Baseline HIV RNA counts of > 100,000 copies/mL were found in 293 participants (20%), and 188 (8%) participants had CD4 counts of ≤ 200 cells/mL.

 

 

Noninferiority of the once-daily 2-drug versus the once-daily 3-drug ARV regimen was demonstrated in both the GEMINI-1 and GEMINI-2 trials for the intent-to-treat-exposed population. In GEMINI-1, 90% (n = 320) in the 2-drug ARV group achieved virologic suppression at week 48 compared to 93% (n = 332) in the 3-drug ARV group (no statistically significant difference). In GEMINI-2, 93% (n =335 ) in the 2-drug ARV group achieved virologic suppression at week 48 compared to 94% (n = 337) in the 3-drug ARV group (no statistically significant difference).

A subgroup analysis found no significant impact of baseline HIV RNA (> 100,000 compared to ≤ 100,000 copies/mL) on achieving virologic suppression at week 48. However, a subgroup analysis did find that participants with CD4 counts < 200 copies/mL had a reduced response in the once-daily 2-drug versus 3-drug ARV regimen for achieving virologic response at week 48 (79% versus 93%, respectively).

Overall, 10 participants met virological withdrawal criteria during the study period, and 4 of these were on the 2-drug ARV regimen. For these 10 participants, genotypic testing did not find emergence of resistance to either nucleoside reverse transcriptase or integrase strand transfer inhibitors.

Regarding renal biomarkers, increases of both serum creatinine and urinary excretion of protein creatinine were significantly greater in the 3-drug ARV group. Also, biomarkers indicating increased bone turnover were elevated in both groups, but the degree of elevation was significantly lower in the 2-drug ARV regimen cohort. It is unclear whether these findings reflect an increased or decreased risk of developing osteopenia or osteoporosis in the 2 study groups.

Conclusion. The once-daily 2-drug ARV regimen dolutegravir and lamivudine is noninferior to the guideline-recommended once-daily 3-drug ARV regimen dolutegravir, TDF, and emtricitabine at achieving viral suppression in ART-naive HIV-1 infected individuals with HIV RNA counts < 500,000 copies/mL. However, the efficacy of this ARV regimen may be compromised in individuals with CD4 counts < 200 cells/mL.

 

 

Commentary

Currently, the mainstay of HIV pharmacotherapy is a 3-drug regimen consisting of 2 nucleoside reverse transcriptase inhibitors in combination with 1 drug from another class, with an integrase strand transfer inhibitor being the preferred third drug.6 Despite the improved tolerability of contemporary ARVs, there remains concern among HIV practitioners regarding potential toxicities associated with cumulative drug exposure, specifically related to nucleoside reverse transcriptase inhibitors. As a result, there has been much interest in evaluating 2-drug ARV regimens for HIV treatment in order to reduce overall drug exposure.7-10

The 48-week results of the GEMINI-1 and GEMINI-2 trials, published in early 2019, further expand our understanding regarding the efficacy and safety of 2-drug regimens in HIV treatment. These identically designed studies evaluated once-daily dolutegravir and lamivudine for HIV in a treatment-naive population. This goes a step further than the SWORD-1 and SWORD-2 trials, which evaluated once-daily dolutegravir and rilpivirine as a step-down therapy for virologically suppressed individuals and led to the U.S. Food and Drug Administration (FDA) approval of the single-tablet combination regimen dolutegravir/rilpivirine (Juluca).10 Therefore, whereas the SWORD trials evaluated a 2-drug regimen for maintenance of virologic suppression, the GEMINI trials assessed whether a 2-drug regimen can both achieve and maintain virologic suppression.

The results of the GEMINI trials are promising for a future direction in HIV care. The rates of virologic suppression achieved in these trials are comparable to those seen in the SWORD trials.10 Furthermore, the virologic response seen in the GEMINI trials is comparable to that seen in similar trials that evaluated a 3-drug ARV regimen consisting of an integrase strand transfer inhibitor–based backbone in ART-naive individuals.11,12

A major confounder to the design of this trial was that it included TDF as one of the components in the comparator arm, an agent that has already been demonstrated to have detrimental effects on both renal and bone health.13,14 Additionally, the bone biomarker results were inconclusive, and the agents’ effects on bone would have been better demonstrated through bone mineral density testing, as had been done in prior trials.

Applications for Clinical Practice

Given the recent FDA approval of the single-tablet combination regimen dolutegravir and lamivudine (Dovato), this once-daily 2-drug ARV regimen will begin making its way into clinical practice for certain patients. Prior to starting this regimen, hepatitis B infection first must be ruled out due to poor efficacy of lamivudine monotherapy for management of chronic hepatitis B infection.15 Additionally, baseline genotype testing should be performed prior to starting this ART given that approximately 10% of newly diagnosed HIV patients have baseline resistance mutations.16 Obtaining rapid genotype testing may be difficult to accomplish in low-resource settings where such testing is not readily available. Finally, this approach may not be applicable to those presenting with acute HIV infection, in whom viral loads are often in the millions of copies per mL. It is likely that dolutegravir/lamivudine could assume a role similar to that of dolutegravir/rilpivirine, in which patients who present with acute HIV step down to a 2-drug regimen once their viral loads have either dropped below 500,000 copies/mL or have already been suppressed.

—Evan K. Mallory, PharmD, Banner-University Medical Center Tucson, and Norman L. Beatty, MD, University of Arizona College of Medicine, Tucson, AZ

Study Overview

Objective. To evaluate the efficacy and safety of a once-daily 2-drug antiretroviral (ARV) regimen, dolutegravir plus lamivudine, for the treatment of HIV-1 infection in adults naive to antiretroviral therapy (ART).

Design. GEMINI-1 and GEMINI-2 were 2 identically designed multicenter, double-blind, randomized, noninferiority, phase 3 clinical trials conducted between July 18, 2016 and March 31, 2017. Participants were stratified to receive 1 of 2 once-daily HIV regimens: the study regimen, consisting of once-daily dolutegravir 50 mg plus lamivudine 300 mg, or the standard-of-care regimen, consisting of once-daily dolutegravir 50 mg plus tenofovir disoproxil fumarate (TDF) 300 mg plus emtricitabine 200 mg. While this article presents results at week 48, both trials are scheduled to evaluate participants up to week 148 in an attempt to evaluate long-term efficacy and safety.

Setting and participants. Eligible participants had to be aged 18 years or older with treatment-naive HIV-1 infection. Women were eligible if they were not (1) pregnant, (2) lactating, or (3) of reproductive potential, defined by various means, including tubal ligation, hysterectomy, postmenopausal status, and the use of highly effective contraception. Initially, eligibility screening restricted participation to those with viral loads between 1000 and 100,000 copies/mL. However, the upper limit was later increased to 500,000 copies/mL based on an independent review of results from other clinical trials1,2 evaluating dual therapy with dolutegravir and lamivudine, which indicated efficacy in patients with viral loads up to 500,000 copies/mL.3-5

Notable exclusion criteria included: (1) major mutations to nucleoside reverse transcriptase inhibitors, non-nucleoside reverse transcriptase inhibitors, and protease inhibitors; (2) evidence of hepatitis B infection; (3) hepatitis C infection with anticipation of initiating treatment within 48 weeks of study enrollment; and (4) stage 3 HIV disease, per Centers for Disease Control and Prevention criteria, with the exception of cutaneous Kaposi sarcoma and CD4 cell counts < 200 cells/μL.

Main outcome measures. The primary endpoint was demonstration of noninferiority of the 2-drug ARV regimen through assessment of the proportion of participants who achieved virologic suppression at week 48 in the intent-to-treat-exposed population. For the purposes of this study, virologic suppression was defined as having fewer than 50 copies of HIV-1 RNA per mL at week 48. For evaluation of safety and toxicity concerns, renal and bone biomarkers were assessed at study entry and at weeks 24 and 48. In addition, participants who met virological withdrawal criteria were evaluated for integrase strand transfer inhibitor mutations. Virological withdrawal was defined as the presence of 1 of the following: (1) HIV RNA > 200 copies/mL at week 24, (2) HIV RNA > 200 copies/mL after previous HIV RNA < 200 copies/mL (confirmed rebound), and (3) a < 1 log10 copies/mL decrease from baseline (unless already < 200 copies/mL).
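
The 3 withdrawal criteria above amount to a simple decision rule. As an illustrative sketch only (the function and its argument names are hypothetical, not from the study protocol), they can be encoded as:

```python
import math

def is_virological_withdrawal(week, hiv_rna, baseline_rna, prior_suppressed_below_200):
    """Illustrative check of the study's virological withdrawal criteria.

    hiv_rna, baseline_rna: HIV-1 RNA in copies/mL.
    prior_suppressed_below_200: True if a previous value was < 200 copies/mL.
    """
    # (1) HIV RNA > 200 copies/mL at week 24
    if week == 24 and hiv_rna > 200:
        return True
    # (2) confirmed rebound: > 200 copies/mL after a previous value < 200 copies/mL
    if prior_suppressed_below_200 and hiv_rna > 200:
        return True
    # (3) < 1 log10 copies/mL decrease from baseline, unless already < 200 copies/mL
    if hiv_rna >= 200 and (math.log10(baseline_rna) - math.log10(hiv_rna)) < 1:
        return True
    return False
```

For example, a participant whose viral load falls only from 100,000 to 50,000 copies/mL (a 0.3 log10 decrease) meets criterion 3, while one who falls to 150 copies/mL does not meet any criterion.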

Main results. GEMINI-1 and GEMINI-2 randomized a combined total of 1441 participants to receive either the once-daily 2-drug ARV regimen (dolutegravir and lamivudine, n = 719) or the once-daily 3-drug ARV regimen (dolutegravir, TDF, and emtricitabine, n = 722). Of the 533 participants who did not meet inclusion criteria, the predominant reasons for exclusion were either having preexisting major viral resistance mutations (n = 246) or viral loads outside the range of 1000 to 500,000 copies/mL (n = 133).

Baseline demographic and clinical characteristics were similar between both groups. The median age was 33 years (10% were over 50 years of age), and participants were mostly male (85%) and white (68%). Baseline HIV RNA counts of > 100,000 copies/mL were found in 293 participants (20%), and 188 participants (8%) had CD4 counts of ≤ 200 cells/μL.

Noninferiority of the once-daily 2-drug versus the once-daily 3-drug ARV regimen was demonstrated in both the GEMINI-1 and GEMINI-2 trials for the intent-to-treat-exposed population. In GEMINI-1, 90% (n = 320) in the 2-drug ARV group achieved virologic suppression at week 48 compared to 93% (n = 332) in the 3-drug ARV group (no statistically significant difference). In GEMINI-2, 93% (n = 335) in the 2-drug ARV group achieved virologic suppression at week 48 compared to 94% (n = 337) in the 3-drug ARV group (no statistically significant difference).
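
As an arithmetic check on these proportions, the sketch below recomputes the unadjusted risk difference for GEMINI-1 with a Wald 95% confidence interval. The denominators are back-calculated approximations from the reported percentages (not taken from the article), and the -10 percentage point noninferiority margin is an assumption for illustration:

```python
from math import sqrt

def risk_difference_ci(x1, n1, x2, n2, z=1.96):
    """Unadjusted risk difference (arm 1 minus arm 2) with a Wald 95% CI."""
    p1, p2 = x1 / n1, x2 / n2
    rd = p1 - p2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return rd, rd - z * se, rd + z * se

# GEMINI-1: 320 responders in the 2-drug arm, 332 in the 3-drug arm.
# Denominators back-calculated from the reported 90% and 93% (approximate).
rd, lo, hi = risk_difference_ci(320, 356, 332, 357)
# Noninferiority (assumed -10 percentage point margin) holds if lo > -0.10
print(round(rd, 3), round(lo, 3), lo > -0.10)
```

With these assumed denominators, the lower confidence bound sits near -7%, above the assumed -10% margin, consistent with the trials' noninferiority conclusion.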

A subgroup analysis found no significant impact of baseline HIV RNA (> 100,000 compared to ≤ 100,000 copies/mL) on achieving virologic suppression at week 48. However, participants with CD4 counts < 200 cells/μL had a reduced response with the once-daily 2-drug regimen compared to the 3-drug regimen (79% versus 93%, respectively).

Overall, 10 participants met virological withdrawal criteria during the study period, and 4 of these were on the 2-drug ARV regimen. For these 10 participants, genotypic testing did not find emergence of resistance to either nucleoside reverse transcriptase or integrase strand transfer inhibitors.

Regarding renal biomarkers, increases in both serum creatinine and the urine protein-to-creatinine ratio were significantly greater in the 3-drug ARV group. Biomarkers of bone turnover increased in both groups, but the degree of elevation was significantly lower in the 2-drug ARV cohort. It is unclear whether these findings reflect an increased or decreased risk of developing osteopenia or osteoporosis in either study group.

Conclusion. The once-daily 2-drug ARV regimen dolutegravir and lamivudine is noninferior to the guideline-recommended once-daily 3-drug ARV regimen of dolutegravir, TDF, and emtricitabine at achieving viral suppression in ART-naive HIV-1 infected individuals with HIV RNA counts < 500,000 copies/mL. However, the efficacy of this regimen may be compromised in individuals with CD4 counts < 200 cells/μL.

Commentary

Currently, the mainstay of HIV pharmacotherapy is a 3-drug regimen consisting of 2 nucleoside reverse transcriptase inhibitors in combination with 1 drug from another class, with an integrase strand transfer inhibitor being the preferred third drug.6 Despite the improved tolerability of contemporary ARVs, there remains concern among HIV practitioners regarding potential toxicities associated with cumulative drug exposure, specifically related to nucleoside reverse transcriptase inhibitors. As a result, there has been much interest in evaluating 2-drug ARV regimens for HIV treatment in order to reduce overall drug exposure.7-10

The 48-week results of the GEMINI-1 and GEMINI-2 trials, published in early 2019, further expand our understanding regarding the efficacy and safety of 2-drug regimens in HIV treatment. These identically designed studies evaluated once-daily dolutegravir and lamivudine for HIV in a treatment-naive population. This goes a step further than the SWORD-1 and SWORD-2 trials, which evaluated once-daily dolutegravir and rilpivirine as a step-down therapy for virologically suppressed individuals and led to the U.S. Food and Drug Administration (FDA) approval of the single-tablet combination regimen dolutegravir/rilpivirine (Juluca).10 Therefore, whereas the SWORD trials evaluated a 2-drug regimen for maintenance of virologic suppression, the GEMINI trials assessed whether a 2-drug regimen can both achieve and maintain virologic suppression.

The results of the GEMINI trials are promising for a future direction in HIV care. The rates of virologic suppression achieved in these trials are comparable to those seen in the SWORD trials.10 Furthermore, the virologic response seen in the GEMINI trials is comparable to that seen in similar trials that evaluated a 3-drug ARV regimen consisting of an integrase strand transfer inhibitor–based backbone in ART-naive individuals.11,12

A major limitation of the trial design was the inclusion of TDF as a component of the comparator arm, an agent that has already been demonstrated to have detrimental effects on both renal and bone health.13,14 Additionally, the bone biomarker results were inconclusive; the agents’ effects on bone would have been better demonstrated through bone mineral density testing, as had been done in prior trials.

Applications for Clinical Practice

Given the recent FDA approval of the single-tablet combination regimen dolutegravir and lamivudine (Dovato), this once-daily 2-drug ARV regimen will begin making its way into clinical practice for certain patients. Prior to starting this regimen, hepatitis B infection first must be ruled out due to poor efficacy of lamivudine monotherapy for management of chronic hepatitis B infection.15 Additionally, baseline genotype testing should be performed prior to starting this ART given that approximately 10% of newly diagnosed HIV patients have baseline resistance mutations.16 Obtaining rapid genotype testing may be difficult to accomplish in low-resource settings where such testing is not readily available. Finally, this approach may not be applicable to those presenting with acute HIV infection, in whom viral loads are often in the millions of copies per mL. It is likely that dolutegravir/lamivudine could assume a role similar to that of dolutegravir/rilpivirine, in which patients who present with acute HIV step down to a 2-drug regimen once their viral loads have either dropped below 500,000 copies/mL or have already been suppressed.

—Evan K. Mallory, PharmD, Banner-University Medical Center Tucson, and Norman L. Beatty, MD, University of Arizona College of Medicine, Tucson, AZ

References

1. Cahn P, Rolón MJ, Figueroa MI, et al. Dolutegravir-lamivudine as initial therapy in HIV-1 infected, ARV-naive patients, 48-week results of the PADDLE (Pilot Antiretroviral Design with Dolutegravir LamivudinE) study. J Int AIDS Soc. 2017;20:21678.

2. Taiwo BO, Zheng L, Stefanescu A, et al. ACTG A5353: a pilot study of dolutegravir plus lamivudine for initial treatment of human immunodeficiency virus-1 (HIV-1)-infected participants with HIV-1 RNA < 500,000 copies/mL. Clin Infect Dis. 2018;66:1689-1697.

3. Min S, Sloan L, DeJesus E, et al. Antiviral activity, safety, and pharmacokinetics/pharmacodynamics of dolutegravir as 10-day monotherapy in HIV-1-infected adults. AIDS. 2011;25:1737-1745.

4. Eron JJ, Benoit SL, Jemsek J, et al. Treatment with lamivudine, zidovudine, or both in HIV-positive patients with 200 to 500 CD4+ cells per cubic millimeter. North American HIV Working Party. N Engl J Med. 1995;333:1662-1669.

5. Kuritzkes DR, Quinn JB, Benoit SL, et al. Drug resistance and virologic response in NUCA 3001, a randomized trial of lamivudine (3TC) versus zidovudine (ZDV) versus ZDV plus 3TC in previously untreated patients. AIDS. 1996;10:975-981.

6. Department of Health and Human Services. Panel on Antiretroviral Guidelines for Adults and Adolescents. Guidelines for the use of antiretroviral agents in adults and adolescents living with HIV. http://aidsinfo.nih.gov/contentfiles/lvguidelines/AdultandAdolescentGL.pdf. Accessed April 1, 2019.

7. Riddler SA, Haubrich R, DiRienzo AG, et al. Class-sparing regimens for initial treatment of HIV-1 infection. N Engl J Med. 2008;358:2095-2106.

8. Reynes J, Lawal A, Pulido F, et al. Examination of noninferiority, safety, and tolerability of lopinavir/ritonavir and raltegravir compared with lopinavir/ritonavir and tenofovir/ emtricitabine in antiretroviral-naïve subjects: the progress study, 48-week results. HIV Clin Trials. 2011;12:255-267.

9. Cahn P, Andrade-Villanueva J, Arribas JR, et al. Dual therapy with lopinavir and ritonavir plus lamivudine versus triple therapy with lopinavir and ritonavir plus two nucleoside reverse transcriptase inhibitors in antiretroviral-therapy-naive adults with HIV-1 infection: 48 week results of the randomised, open label, non-inferiority GARDEL trial. Lancet Infect Dis. 2014;14:572-580.

10. Llibre JM, Hung CC, Brinson C, et al. Efficacy, safety, and tolerability of dolutegravir-rilpivirine for the maintenance of virological suppression in adults with HIV-1: phase 3, randomised, non-inferiority SWORD-1 and SWORD-2 studies. Lancet. 2018;391:839-849.

11. Walmsley SL, Antela A, Clumeck N, et al. Dolutegravir plus abacavir-lamivudine for the treatment of HIV-1 infection. N Engl J Med. 2013;369:1807-1818.

12. Sax PE, Wohl D, Yin MT, et al. Tenofovir alafenamide versus tenofovir disoproxil fumarate, coformulated with elvitegravir, cobicistat, and emtricitabine, for initial treatment of HIV-1 infection: two randomised, double-blind, phase 3, non-inferiority trials. Lancet. 2015;385:2606-2615.

13. Mulligan K, Glidden DV, Anderson PL, et al. Effects of emtricitabine/tenofovir on bone mineral density in HIV-negative persons in a randomized, double-blind, placebo-controlled trial. Clin Infect Dis. 2015;61:572-580.

14. Cooper RD, Wiebe N, Smith N, et al. Systematic review and meta-analysis: renal safety of tenofovir disoproxil fumarate in HIV-infected patients. Clin Infect Dis. 2010;51:496-505.

15. Terrault NA, Lok ASF, McMahon BJ, et al. Update on prevention, diagnosis, and treatment of chronic hepatitis B: AASLD 2018 hepatitis B guidance. Hepatology. 2018;67:1560-1599.

16. Kim D, Wheeler W, Ziebell R, et al. Prevalence of antiretroviral drug resistance among newly diagnosed HIV-1 infected persons, United States, 2007. 17th Conference on Retroviruses & Opportunistic Infections; San Francisco, CA; 2010 Feb 16-19. Abstract 580.

Issue
Journal of Clinical Outcomes Management - 26(3)
Page Number
105-108
Display Headline
Once-Daily 2-Drug versus 3-Drug Antiretroviral Therapy for HIV Infection in Treatment-naive Adults: Less Is Best?

FDA launches call center project to streamline Expanded Access request process

Article Type
Changed
Mon, 06/03/2019 - 18:56

The Food and Drug Administration launched a new call center project to assist physicians seeking to help cancer patients access unapproved therapies.

MDedge/Neil Osterweil
Dr. Richard Pazdur

Entitled “Project Facilitate,” the program aims to create a single point of contact with FDA oncology staff who can guide physicians through the process of submitting Expanded Access (EA) requests on behalf of individual patients.

“This is a pilot program to provide continuous support to health care professionals throughout the entire Expanded Access process,” Richard Pazdur, MD, director of the FDA’s Oncology Center of Excellence and acting director of the Office of Hematology and Oncology Products, said during the unveiling of the project at a press briefing at the annual meeting of the American Society of Clinical Oncology.

Physicians utilizing Project Facilitate can expect a “concierge service” experience including advice on the information needed to complete requests, assistance completing forms, pharma/biotech contact information, independent review board resource options, and follow-up on patient outcomes.

The project will work in synergy with the Reagan-Udall EA Navigator website, an “online road map” for physicians and patients that was launched 2 years ago “to facilitate and coordinate and collaborate with the FDA to advance the science mission of FDA,” and which has been expanded in conjunction with Project Facilitate, Ellen V. Sigal, PhD, chair of the board of the Reagan-Udall Foundation for the FDA, said at the press briefing.

“EA Navigator delivers transparent, concise, and searchable information provided by companies about their Expanded Access policies,” Dr. Sigal said. “Today I’m pleased to announce that the Navigator now features Expanded Access opportunities listed in ClinicalTrials.gov for companies in the directory.

“For the first time, those who need quick access to drug availability and Expanded Access options will find it in one place without having to visit site by site by site, or sift through thousands of studies that don’t merit their needs,” she added, noting that EA Navigator will often be the first step for physicians before they engage with Project Facilitate.

Project Facilitate can be reached Monday-Friday, 9 a.m.-5 p.m. ET at 240-402-0004, or by email at [email protected].


Article Source

REPORTING FROM ASCO 2019

Expanded indication being considered for meningococcal group B vaccine

Article Type
Changed
Wed, 05/06/2020 - 12:21

LJUBLJANA, SLOVENIA – An expanded indication for the meningococcal group B vaccine known as Trumenba in patients aged 1-9 years is being considered by the Food and Drug Administration under the agency’s Breakthrough Therapy designation.

Dr. Jason D. Maguire

Breakthrough Therapy status is reserved for accelerated review of therapies considered to show substantial preliminary promise of effectively targeting a major unmet medical need.

The unmet need here is that there is no meningococcal group B vaccine approved for use in children under age 10 years. Yet infants and children under 5 years of age are at greatest risk of invasive meningococcal B disease, with reported case fatality rates of 8%-9%, Jason D. Maguire, MD, noted at the annual meeting of the European Society for Paediatric Infectious Diseases.

Trumenba has been approved in the United States for patients aged 10-25 years and in the European Union for individuals aged 10 years or older.

Dr. Maguire, of Pfizer’s vaccine clinical research and development program, presented the results of the two phase 2 randomized safety and immunogenicity trials conducted in patients aged 1-9 years that the company has submitted to the FDA in support of the expanded indication. One study enrolled 352 1-year-old toddlers; the other enrolled 400 children aged 2-9 years (mean age, 4 years). The studies were carried out in Australia, Finland, Poland, and the Czech Republic.

In a pooled analysis of the vaccine’s immunogenicity when administered in a three-dose schedule of 120 mcg at 0, 2, and 6 months to 193 toddlers and 274 of the children aged 2-9 years, robust bactericidal antibody responses were seen against the four major Neisseria meningitidis group B strains that cause invasive disease. In fact, at least a fourfold rise in titers from baseline to 1 month after dose three was documented in the same high proportion of 1- to 9-year-olds as previously seen in the phase 3 trials that led to vaccine licensure in adolescents and young adults.
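
The seroresponse endpoint described above (at least a fourfold rise in titer from baseline to 1 month after dose three) is straightforward to compute from paired measurements; a minimal sketch with hypothetical titer values:

```python
def fourfold_rise(baseline_titer, post_titer):
    """True if the post-vaccination titer rose at least fourfold from baseline."""
    return post_titer >= 4 * baseline_titer

# Hypothetical bactericidal titers for three subjects:
# (baseline, 1 month after dose three)
pairs = [(4, 64), (8, 16), (4, 16)]
responders = sum(fourfold_rise(b, p) for b, p in pairs)
print(responders)  # 2 of 3 meet the fourfold-rise criterion
```

In practice, serology assays also handle titers below the assay's lower limit of quantitation with special rules; the sketch omits that detail.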

“These results support that the use of Trumenba, when given to children ages 1 to less than 10 years at the same dose and schedule that is currently approved in adolescents and young adults, can afford a high degree of protective antibody responses that correlate with immunity in this population,” Dr. Maguire said.

The safety and tolerability analysis included all 752 children in the two phase 2 studies, including the 110 toddlers randomized to three 60-mcg doses of the vaccine, although it has subsequently become clear that 120 mcg is the dose that provides the best immunogenicity with an acceptable safety profile, according to the physician.

Across the age groups, local reactions, including redness and swelling, were more common in Trumenba recipients than in controls who received hepatitis A vaccine and saline injections. So were systemic adverse events. Fever – a systemic event of particular interest to parents and clinicians – occurred in 37% of toddlers after vaccination, compared with 25% of 2- to 9-year-olds and 10%-12% of controls. Of note, prophylactic antipyretics weren’t allowed in the study.

“There’s somewhat of an inverse relationship between age and temperature. So as we go down in age, the rate of fever rises. But after each subsequent dose, regardless of age, there’s a reduction in the incidence of fever,” Dr. Maguire observed.

Most fevers were less than 39.0° C. Only 3 of 752 (less than 1%) patients experienced fever in excess of 40.0° C.

Two children withdrew from the study after developing hip synovitis, which was transient. Another withdrew because of prolonged irritability, fatigue, and decreased appetite.

“Although Trumenba had an acceptable safety and tolerability profile in 1- to 9-year-olds, this analysis wasn’t powered enough to detect uncommon adverse events, so we’ll continue to monitor safety for things like synovitis,” he said.

In 10- to 25-year-olds, the meningococcal vaccine can be given concomitantly with other vaccines without interference. There are plans to study concurrent vaccination with MMR and pneumococcal vaccines in 1- to 9-year-olds as well, according to Dr. Maguire.

Pfizer also now is planning clinical trials of the vaccine in infants, another important group currently unprotected against meningococcal group B disease, he added.

Dr. Maguire is an employee of Pfizer, which funded the studies.

Meeting/Event
Publications
Topics
Sections
Meeting/Event
Meeting/Event

 

LJUBLJANA, SLOVENIAAn expanded indication for the meningococcal group B vaccine known as Trumenba in patients aged 1-9 years is being considered by the Food and Drug Administration under the agency’s Breakthrough Therapy designation.

Dr. Jason D. Maguire

Breakthrough Therapy status is reserved for accelerated review of therapies considered to show substantial preliminary promise of effectively targeting a major unmet medical need.

LJUBLJANA, SLOVENIA – An expanded indication for the meningococcal group B vaccine known as Trumenba in patients aged 1-9 years is being considered by the Food and Drug Administration under the agency’s Breakthrough Therapy designation.

Dr. Jason D. Maguire

Breakthrough Therapy status is reserved for accelerated review of therapies considered to show substantial preliminary promise of effectively targeting a major unmet medical need.

The unmet need here is that there is no meningococcal group B vaccine approved for use in children under age 10 years. Yet infants and children under 5 years of age are at greatest risk of invasive meningococcal B disease, with reported case fatality rates of 8%-9%, Jason D. Maguire, MD, noted at the annual meeting of the European Society for Paediatric Infectious Diseases.

Trumenba has been approved in the United States for patients aged 10-25 years and in the European Union for individuals aged 10 years or older.

Dr. Maguire, of Pfizer’s vaccine clinical research and development program, presented the results of two phase 2 randomized safety and immunogenicity trials, conducted in patients aged 1-9 years, that the company has submitted to the FDA in support of the expanded indication. One study enrolled 352 toddlers aged 1 year; the other enrolled 400 children aged 2-9 years, whose mean age was 4 years. The studies were carried out in Australia, Finland, Poland, and the Czech Republic.

In a pooled analysis of the vaccine’s immunogenicity when administered in a three-dose schedule of 120 mcg at 0, 2, and 6 months to 193 toddlers and 274 of the children aged 2-9 years, robust bactericidal antibody responses were seen against the four major Neisseria meningitidis group B strains that cause invasive disease. In fact, at least a fourfold rise in titers from baseline to 1 month after dose three was documented in the same high proportion of 1- to 9-year-olds as previously seen in the phase 3 trials that led to vaccine licensure in adolescents and young adults.

“These results support that the use of Trumenba, when given to children ages 1 to less than 10 years at the same dose and schedule that is currently approved in adolescents and young adults, can afford a high degree of protective antibody responses that correlate with immunity in this population,” Dr. Maguire said.

The safety and tolerability analysis included all 752 children in the two phase 2 studies, including the 110 toddlers randomized to three 60-mcg doses of the vaccine, although it has subsequently become clear that 120 mcg is the dose that provides the best immunogenicity with an acceptable safety profile, according to the physician.

Across the age groups, local reactions, including redness and swelling, were more common in Trumenba recipients than in controls who received hepatitis A vaccine and saline injections. So were systemic adverse events. Fever – a systemic event of particular interest to parents and clinicians – occurred in 37% of toddlers after vaccination, compared with 25% of 2- to 9-year-olds and 10%-12% of controls. Of note, prophylactic antipyretics weren’t allowed in the study.

“There’s somewhat of an inverse relationship between age and temperature. So as we go down in age, the rate of fever rises. But after each subsequent dose, regardless of age, there’s a reduction in the incidence of fever,” Dr. Maguire observed.

Most fevers were less than 39.0°C. Only 3 of 752 patients (less than 1%) experienced fever in excess of 40.0°C.

Two children withdrew from the study after developing hip synovitis, which was transient. Another withdrew because of prolonged irritability, fatigue, and decreased appetite.

“Although Trumenba had an acceptable safety and tolerability profile in 1- to 9-year-olds, this analysis wasn’t powered enough to detect uncommon adverse events, so we’ll continue to monitor safety for things like synovitis,” he said.

In 10- to 25-year-olds, the meningococcal vaccine can be given concomitantly with other vaccines without interference. There are plans to study concurrent vaccination with MMR and pneumococcal vaccines in 1- to 9-year-olds as well, according to Dr. Maguire.

Pfizer also now is planning clinical trials of the vaccine in infants, another important group currently unprotected against meningococcal group B disease, he added.

Dr. Maguire is an employee of Pfizer, which funded the studies.


EXPERT ANALYSIS FROM ESPID 2019


Novel enfortumab vedotin induces responses in advanced urothelial cancers


– Patients with advanced urothelial cancer that has progressed following platinum-based chemotherapy and immunotherapy with checkpoint inhibitors have a poor prognosis and few effective therapeutic options.

But in a phase 2 trial in 125 patients with locally advanced or metastatic urothelial cancer, the investigational agent enfortumab vedotin was associated with a 44% objective response rate, including a 12% complete response rate and 32% partial response rate. The responses were observed across all subgroups, irrespective of response to prior immunotherapy or the presence of liver metastases, reported Daniel Petrylak, MD, a professor of medical oncology and urology at Yale Cancer Center in New Haven, Connecticut.

In a video interview at the annual meeting of the American Society of Clinical Oncology, Dr. Petrylak described how the agent is directed toward a novel target, Nectin-4, a protein expressed in about 97% of urothelial cancers and in other solid tumor types.

The study is sponsored by Seattle Genetics and Astellas Pharma. Dr. Petrylak disclosed a consulting or advisory role with Astellas and others, funding from Seattle Genetics, and financial relationships with multiple other companies.
 

REPORTING FROM ASCO 2019
