The Critical Value of Telepathology in the COVID-19 Era


Advances in technology, including ubiquitous access to the internet and the capacity to transfer high-resolution representative images, have facilitated the adoption of telepathology by laboratories worldwide.1-5 Telepathology includes the use of telecommunication links that enable transmission of digital pathology images for primary diagnosis, quality assurance (QA), education, research, or second opinion diagnoses.3 This progress has culminated in approvals by the US Food and Drug Administration (FDA) of whole slide imaging (WSI) systems for surgical pathology slides: specifically, the Philips IntelliSite Digital Pathology Solution in 2017 and the Leica Aperio AT2 DX in 2020.6-8 However, the approvals do not include telecytology due to the lack of whole slide multiplanar scanning at different planes of focus (z-stacking) capabilities.7

Long-term trends in pathology, specifically the steady decline in the number of practicing pathologists relative to the total population served, together with the social distancing imperatives and disruptions brought about by the COVID-19 pandemic, have made telepathology implementation pertinent to continuing and improving pathology practice.8-10

Despite the initial capital equipment costs, telepathology has several advantages, including increased productivity, cost savings, improved access to pathologist care, improved quality of care, and ease of second opinions (Figures 1 and 2; Table 1).2-8 This review covers aspects of telepathology implementation for laboratories in light of the recent COVID-19 pandemic and its potential to improve pathology practice.

Description and Definitions

The primary modes of telepathology (static image telepathology, robotic telepathology, video microscopy, WSI, and multimodality telepathology) have been defined by the American Telemedicine Association (ATA).2 WSI has been particularly suited for telepathology due to the ability to view digital slides in high resolution at various magnifications. These image files can also be viewed and shared with ease with other observers. Also, they take a shorter time to view compared with the use of a robotic microscope.3

Selection, Validation, and Implementation

WSI platforms vary in their characteristics and have several parameters, including but not limited to batch scanning vs continuous or random-access processing, throughput volume capacities, scan speed, cost, manual vs automatic loading of slides, image quality, slide capacity, flexibility for different slide sizes/features, telepathology capabilities once slide scanned, z-stacking, and regulatory approval status.8 Selection of the WSI device is dependent on need and cost considerations. For example, use for frozen section requires faster scanning speed and does not generally require a high throughput scanner.

Validation of telepathology by the testing site demonstrates that the new system performs as expected for its intended clinical use before being put into service and that the digital slides produced are acceptable for clinical diagnostic interpretation.11 The WSI validation guidelines established by the College of American Pathologists (CAP) are part of the published laboratory standard of care.11-13 An appropriate validation enables the benefits of telepathology while mitigating the risks.

There are 3 major CAP recommendations for validation. First, ≥ 60 cases should be included for each use case being validated, with 20 additional cases for relevant ancillary applications not covered by the initial 60 cases. Second, diagnostic concordance (ideally ≥ 95%) should be established between digital and glass slides for the same observer. Third, there should be a 2-week washout period between the viewing of digital and glass slides (Table 2).12,13

Neither glass nor digital slides are viewed during the washout period. In addition, there are 9 CAP good practice statements, including that all pathology laboratories implementing WSI technology should carry out appropriate validations, have adequately trained pathologists, and be able to address changes in the WSI system that could impact clinical results.12,13 This CAP guideline is an effective reference for medical laboratories validating WSI systems.2,11-13 Telepathology involves many technical, privacy/security, and facility-based specifications.2 Therefore, involvement of the relevant departments is warranted.2
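The concordance criterion above lends itself to a simple tally. The following is a minimal sketch with hypothetical case data; the function name, diagnoses, and threshold handling are illustrative and not part of the CAP guideline:

```python
# Minimal sketch: intraobserver diagnostic concordance between
# glass-slide and digital (WSI) reads against the CAP target of
# >= 95% agreement. All case data here are hypothetical.

def concordance_rate(paired_reads):
    """paired_reads: (glass_diagnosis, digital_diagnosis) pairs from
    the same observer, separated by the 2-week washout period."""
    agree = sum(1 for glass, digital in paired_reads if glass == digital)
    return agree / len(paired_reads)

# 60 hypothetical validation cases: 58 concordant, 2 discordant
cases = [("benign", "benign")] * 58 + [("benign", "atypical")] * 2
rate = concordance_rate(cases)
print(f"Concordance: {rate:.1%}")         # Concordance: 96.7%
print("Meets CAP target:", rate >= 0.95)  # Meets CAP target: True
```

In practice, discordant cases are reviewed and adjudicated rather than simply counted, and a laboratory's validation plan defines how each use case is scored.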

Guidelines from the ATA establish that telepathology systems should be validated for clinical use, including non-WSI platforms.2 Published validations of other non-WSI platforms (such as by robotic or multimodality telepathology) have followed the structure proposed in the guidelines by CAP for validating WSI.14,15

Ensuring that all relevant responsibilities (clinical, facility, technical, training, documentation/archiving, quality management, and operations related) for the use of telepathology are met is another aspect of validation and implementation.2 Clinical responsibilities include an agreement between the sending (referring) and receiving (consulting) parties on the information to accompany the digital material.2 Per the ATA clinical guidelines, this includes identification information, provision to the consulting pathologist of all relevant clinical data, provision of access to any needed and/or relevant diagnostic material, and the referrer's responsibility to ensure that the correct images and metadata were sent.2 Involved parties should be trained to manage the materials being transmitted.2

Facility responsibilities include maintaining the standard of care defined by the facility and regulatory agencies.2 The maintenance of accreditation, adherence to licensure requirements, and proper management of privileges to practice telepathology are also important.2 Technical responsibilities include ensuring a proper validation that meets the standard of care and covers use cases.2,11-13

All processes, training, and competencies should be followed and documented per standard facility operating procedures.2 ATA recommends that telepathology should result in a formal report for diagnostic consultations, maintain logs of telepathology interactions or disclaimer statements, and have an appropriate retention policy.2 The CAP recommends digital images used for primary diagnosis should be kept for 10 years if the original glass slides are not available.16 Once implemented, telepathology reports must be incorporated into the pathology and laboratory medicine department’s quality management plan for both the technical performance of the telepathology system and diagnostic performance of the pathologists using the system.2 Operations responsibilities include ensuring that the telepathology system is maintained according to vendor recommendations and regulatory standards. Appropriate provisions for space and associated needs should be developed in conjunction with the information technology team of the facility to ensure appropriate security, privacy, and regulatory compliance.2


Applications and Uses

Telecytology. Rapid real-time telecytology has been documented to be useful in rapid on-site evaluations (ROSE) of the adequacy of fine needle aspirations (FNA).17-21 Nevertheless, Medicare reimbursement is currently limited, while ROSE is cost prohibitive, time consuming, and affects productivity in cytology laboratories.17,22,23 Estimates of the time to provide ROSE for 1 procedure without telecytology range from 48.7 to 56.2 minutes.17,23 The use of telecytology significantly reduces pathologist ROSE time to about 12 minutes without loss of quality, of which an average of only 7.5 minutes is spent by the cytopathologist on the ROSE diagnosis.17-21 ROSE also can be used at distant and remote locations to improve patient care.17-21 Multiple vendors provide real-time telecytology services. Innovations using smartphone adapters, digital cameras with their own IP addresses, and high-speed dedicated connections to viewing platforms on high-sensitivity monitors can facilitate ROSE to improve patient management.24,25 The successful, accurate use of ROSE has been described; however, there are currently no FDA-approved telepathology ROSE platforms.17-19,21-25

To date, the FDA has not approved any telecytology whole slide scanner due to a lack of z-stacking capability in submitted scanners.7,21 Not all whole slide scanners offer z-stacking, and even in those that do, the time necessary to scan the entire slide with adequate z-stacking is too long to be clinically acceptable for many situations involving ROSE.21 WSI has also been used to develop international consensus for cytologic samples.26 Published recommendations for the validation of these other modalities before usage follow the spirit of the CAP guidelines for validation of WSI for diagnostic purposes (ie, multiple cases with high concordance rates) but vary on the exact number of slides and acceptable concordance rate.21,27 For ROSE with a robotic microscope without any on-site cytology personnel, documented standardized training of nonpathology staff members, such as the radiologist or other physician performing the FNA procedure, may be needed to enable the performance of ROSE telecytology and ensure compliance with regulations.2,21 Besides ROSE, there are published validations for telecytology in primary diagnosis and QA, indicating a role for telecytology in diagnosis for laboratories that have properly validated and implemented the laboratory-developed test.28-30

Frozen section. Telepathology has significant potential to improve access to frozen section consultation.5,31-33 Benefits include providing frozen section consultation at remote or off-site locations, increasing access to subspecialty consultation, improving workflow by eliminating the need to travel to the off-site frozen section case, saving staff work time, and providing educational opportunities for pathology trainees.5,31-33 In our experience, WSI with real-time viewing of frozen sections allows for the assessment of transplant tissues, an evaluation that generally occurs at night. Discrepancies between the frozen section telepathology diagnosis using WSI and the final diagnosis may occur; those specific to WSI can result from slide or image quality, internet connectivity, and lack of training in using the telepathology system.32 Other issues that may lead to discrepancies between the frozen section diagnosis and the final diagnosis also occur with the review of glass slides by light microscopy.34 Appropriate validation, training, implementation, and quality control for telepathology can help in reaping the benefits while mitigating the risks.2 In a large study comparing frozen section evaluation by telepathology with light microscopy, sensitivity and specificity were comparable, with a trend toward greater sensitivity by telepathology (sensitivity and specificity, 0.92 and 0.99 for telepathology vs 0.90 and 0.99 for light microscopy alone, respectively).33
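The sensitivity and specificity figures quoted above are the standard confusion-matrix ratios. A brief sketch with hypothetical counts (not the data from the cited study) shows the calculation:

```python
# Sensitivity and specificity from frozen section outcome counts,
# where tp/fn/tn/fp are true/false positives and negatives relative
# to the final diagnosis. The counts below are hypothetical.

def sensitivity(tp, fn):
    # Fraction of truly positive cases called positive on frozen section
    return tp / (tp + fn)

def specificity(tn, fp):
    # Fraction of truly negative cases called negative on frozen section
    return tn / (tn + fp)

tp, fn, tn, fp = 92, 8, 198, 2
print(f"Sensitivity: {sensitivity(tp, fn):.2f}")  # Sensitivity: 0.92
print(f"Specificity: {specificity(tn, fp):.2f}")  # Specificity: 0.99
```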

Other applications. Evidence for efficacy in surgical pathology diagnosis led to FDA approval of the Philips IntelliSite Digital Pathology WSI platform in 2017 and the Leica Aperio AT2 DX in 2020.6-8 The use of WSI in surgical pathology has been successfully validated or used in clinical practice in several pathology laboratory settings, with documented benefits in the literature for primary and secondary diagnoses, QA, research, and education.6-8,35-45 Benefits of telepathology include improved ergonomics, access to real-time pathologic services in remote areas or during on-site pathologist absence, and expert second opinions. Telepathology also may reduce the risk of slide loss during transport, shorten turnaround time, reduce operating costs through workflow efficiencies, improve load balancing, enhance virtual collaboration, and provide digital storage of slides that may be irreplaceable.3-8,35-45 Telepathology also has been shown to be useful for education, improving access to learning materials and increasing the quality of instructional materials at a lower cost.45 The increased ease of collaboration with remote experts and access to slide material for other pathologists improves QA capabilities.3-8,35-45 The availability of virtual slides is expected to promote further research in telepathology and pathology due to the increased availability of virtual material to researchers.1,5,46

Telehematology. Published validations have shown telehematology to be effective for hematopathology specimens, such as the peripheral blood smear, and to have potential in the laboratory after proper validation and implementation as a laboratory-developed test.37,47-49

Telemicrobiology and Computer-Assisted Pathologic Diagnosis. Telemicrobiology also has been successfully used for clinical, educational, and QA purposes.50 The digitalization of slides involved with telepathology enables further innovation in machine learning for computer-assisted pathologic diagnosis (CAPD), which is already being used clinically for cervical Pap smears.20 An artificial intelligence (AI)-based algorithm analyzes the slides to identify cells of interest, which are presented to the cytopathologist for confirmation.20 However, expanding CAPD to a variety of specimen types and diagnostic situations, as well as to safely and effectively complete an accurate automated diagnosis, requires additional development.20,51,52 One of the key factors for machine learning to develop AI is the provision of a corpus of data.51,52 Public, open-source data sources have been limited in size, while access to private proprietary sources is highly restricted and expensive; to address this, there is a current effort to build the world's largest public open-source digital pathology corpus at Temple University Hospital, which may help enable innovations in the future.52


Long-Term Trends/Applications

The COVID-19 pandemic has been unprecedented not only in its widespread morbidity and mortality, but also in its significant socioeconomic, health, lifestyle, societal, and workspace changes.53-57 Specifically, the pandemic has introduced not only a need for social distancing and staff quarantines to prevent the spread of infection, but also a reduction in the workforce due to the stresses of COVID-19 (also known as the Great Resignation).55 Before the pandemic, there was an existing downtrend in the number of pathologists in the US workforce.9,10,58,59 From 2007 to 2017, the number of active pathologists in the US declined by 17.5% despite the increasing national population, resulting in not only an absolute decrease in the number of pathologists, but also an increasing population served per pathologist.59 Since 2017, this downtrend has continued; given the increasing loss of active pathologists from the workforce and the decreasing training of new pathologists, this decrease shows no signs of reversing even as the impact of the COVID-19 pandemic has begun to wane.9,10,58-60

The advantages of telepathology in enabling social distancing and reducing travel to remote sites are known.3-7,17 Given these advantages, some US medical centers successfully validated and implemented telepathology operations early in the COVID-19 pandemic to ease workflow and ensure continued operations.56,57 The use of telepathology also helps in balancing workload and continuing pathology operations despite the workforce reduction, as cases no longer need to be signed out on site with glass slides but instead can be signed out at a remote laboratory. Although the impact of the COVID-19 pandemic on operations is decreasing, the capabilities for social distancing and reducing travel remain important both to improve operations and to ensure resiliency in response to similar potential events.3-7,17,60

Considering the long-term trends, the lessons of the COVID-19 pandemic, and the potential for future pandemics or other disasters, the validation and implementation of telepathology remain a reasonable choice for pathology practices looking to improve. A variety of practices, not only those serving the general population but also US Department of Veterans Affairs medical centers (VAMCs) and the US Department of Defense Military Health System treating a veteran population, can benefit from telepathology, which has previously been reported to be reliable or successfully implemented in these settings.61-63 Although the veteran population differs from the general population in several characteristics, such as severity of disease, coexisting morbidities, and other history, given proper validation and implementation, telepathology's usefulness extends across different pathology practice settings.35-43,61-66

Limitations of Telepathology

In telepathology’s current state, there are limitations despite its immense promise.6,35 These include initial capital costs, the additional training requirement, the additional time necessary to scan slides, technical challenges (ie, laboratory information system integration, color calibration, display artifacts, potential for small particle scanner omissions, and information technology dependence), the potential for slower evaluation per slide compared with optical microscopes, limitations of slide imaging (ie, z-stacking or lack of polarization on digital pathology), and occupational concerns regarding eye strain with increased computer monitor usage (ie, computer vision syndrome).6,35 In addition, there are few telepathology scanners with FDA approval for WSI.6-8

The improving technology of telepathology has made these limitations surmountable, including faster slide scanning and increasing digital storage capacity for large WSI files. Due to this improvement in technology, an increasing number of laboratory settings have adopted telepathology as its advantages have begun to outweigh the limitations.2-5 Additionally, proper validation performed before implementing telepathology can help laboratories identify their unique challenges, troubleshoot, and resolve the limitations before use in clinical care.11-13 Continuing QA during its use and implementation is important to ensure that telepathology performs as expected for clinical purposes despite its limitations.2

Conclusions

Telepathology is a promising technology that may improve pathology practice once properly validated and implemented.1-8 Though there are barriers to this validation and implementation, particularly the capital costs and training, there are several potential benefits, including increased productivity, cost savings, improved workflow, enhanced access to pathologic consultation, and adaptability of the pathology laboratory in an era of a decreased workforce and social distancing due to the COVID-19 pandemic.1-8,55,56 This potential applies across the wide spectrum of telepathology uses, from frozen section and telecytology (including ROSE) to primary and second opinion diagnoses.1-8,17-33 The benefits also extend to QA, education, and research, as diagnoses can be rereviewed by specialty or second opinion consultation with ease, and digital slides can be produced for educational and research purposes.3-8,35-45 Settings that treat the general population and those focused on the care of veterans or members of the armed forces have reported similar reliability or successful implementation.35-44,61-63 All in all, the use of telepathology represents an innovation that may transform the practice of pathology tomorrow.

References

1. Weinstein RS. Prospects for telepathology. Hum Pathol. 1986;17(5):433-434. doi:10.1016/s0046-8177(86)80028-4

2. Pantanowitz L, Dickinson K, Evans AJ, et al. American Telemedicine Association clinical guidelines for telepathology. J Pathol Inform. 2014;5(1):39. Published 2014 Oct 21. doi:10.4103/2153-3539.143329

3. Farahani N, Pantanowitz L. Overview of telepathology. Surg Pathol Clin. 2015;8(2):223-231. doi:10.1016/j.path.2015.02.018

4. Petersen JM, Jhala D. Telepathology: a transforming practice for the efficient, safe, and best patient care at the regional Veteran Affairs medical center. Am J Clin Pathol. 2022;158(suppl 1):S97-S98. doi:10.1093/ajcp/aqac126.205

5. Bashshur RL, Krupinski EA, Weinstein RS, Dunn MR, Bashshur N. The empirical foundations of telepathology: evidence of feasibility and intermediate effects. Telemed J E Health. 2017;23(3):155-191. doi:10.1089/tmj.2016.0278

6. Jahn SW, Plass M, Moinfar F. Digital pathology: advantages, limitations and emerging perspectives. J Clin Med. 2020;9(11):3697. Published 2020 Nov 18. doi:10.3390/jcm9113697

7. Evans AJ, Bauer TW, Bui MM, et al. US Food and Drug Administration approval of whole slide imaging for primary diagnosis: a key milestone is reached and new questions are raised. Arch Pathol Lab Med. 2018;142(11):1383-1387. doi:10.5858/arpa.2017-0496-CP.

8. Patel A, Balis UGJ, Cheng J, et al. Contemporary whole slide imaging devices and their applications within the modern pathology department: a selected hardware review. J Pathol Inform. 2021;12:50. Published 2021 Dec 9. doi:10.4103/jpi.jpi_66_21

9. Association of American Medical Colleges. 2017 State Physician Workforce Data Book. November 2017. Accessed April 14, 2023. https://store.aamc.org/downloadable/download/sample/sample_id/30

10. Robboy SJ, Gross D, Park JY, et al. Reevaluation of the US pathologist workforce size. JAMA Netw Open. 2020;3(7):e2010648. Published 2020 Jul 1. doi:10.1001/jamanetworkopen.2020.10648

11. Pantanowitz L, Sinard JH, Henricks WH, et al. Validating whole slide imaging for diagnostic purposes in pathology: guideline from the College of American Pathologists Pathology and Laboratory Quality Center. Arch Pathol Lab Med. 2013;137(12):1710-1722. doi:10.5858/arpa.2013-0093-CP

12. Evans AJ, Brown RW, Bui MM, et al. Validating whole slide imaging systems for diagnostic purposes in pathology. Arch Pathol Lab Med. 2021;146(4):440-450. doi:10.5858/arpa.2020-0723-CP

13. Evans AJ, Lacchetti C, Reid K, Thomas NE. Validating whole slide imaging for diagnostic purposes in pathology: guideline update. College of American Pathologists. May 2021. Accessed April 13, 2023. https://documents.cap.org/documents/wsi-methodology.pdf

14. Chandraratnam E, Santos LD, Chou S, et al. Parathyroid frozen section interpretation via desktop telepathology systems: a validation study. J Pathol Inform. 2018;9:41. Published 2018 Dec 3. doi:10.4103/jpi.jpi_57_18

15. Thrall MJ, Rivera AL, Takei H, Powell SZ. Validation of a novel robotic telepathology platform for neuropathology intraoperative touch preparations. J Pathol Inform. 2014;5(1):21. Published 2014 Jul 28. doi:10.4103/2153-3539.137642

16. Balis UGJ, Williams CL, Cheng J, et al. Whole-slide imaging: thinking twice before hitting the delete key. AJSP: Reviews & Reports. 2018;23(6):249-250. doi:10.1097/PCR.0000000000000283

17. Kim B, Chhieng DC, Crowe DR, et al. Dynamic telecytopathology of on site rapid cytology diagnoses for pancreatic carcinoma. Cytojournal. 2006;3:27. Published 2006 Dec 11. doi:10.1186/1742-6413-3-27

18. Perez D, Stemmer MN, Khurana KK. Utilization of dynamic telecytopathology for rapid onsite evaluation of touch imprint cytology of needle core biopsy: diagnostic accuracy and pitfalls. Telemed J E Health. 2021;27(5):525-531. doi:10.1089/tmj.2020.0117

19. McCarthy EE, McMahon RQ, Das K, Stewart J 3rd. Internal validation testing for new technologies: bringing telecytopathology into the mainstream. Diagn Cytopathol. 2015;43(1):3-7. doi:10.1002/dc.23167

20. Marletta S, Treanor D, Eccher A, Pantanowitz L. Whole-slide imaging in cytopathology: state of the art and future directions. Diagn Histopathol (Oxf). 2021;27(11):425-430. doi:10.1016/j.mpdhp.2021.08.001

21. Lin O. Telecytology for rapid on-site evaluation: current status. J Am Soc Cytopathol. 2018;7(1):1-6. doi:10.1016/j.jasc.2017.10.002

22. Eloubeidi MA, Tamhane A, Jhala N, et al. Agreement between rapid onsite and final cytologic interpretations of EUS-guided FNA specimens: implications for the endosonographer and patient management. Am J Gastroenterol. 2006;101(12):2841-2847. doi:10.1111/j.1572-0241.2006.00852.x

23. Layfield LJ, Bentz JS, Gopez EV. Immediate on-site interpretation of fine-needle aspiration smears: a cost and compensation analysis. Cancer. 2001;93(5):319-322. doi:10.1002/cncr.9046

24. Fontelo P, Liu F, Yagi Y. Evaluation of a smartphone for telepathology: lessons learned. J Pathol Inform. 2015;6:35. Published 2015 Jun 23. doi:10.4103/2153-3539.158912

25. Lin O. Telecytology for rapid on-site evaluation: current status. J Am Soc Cytopathol. 2018;7(1):1-6. doi:10.1016/j.jasc.2017.10.002

26. Johnson DN, Onenerk M, Krane JF, et al. Cytologic grading of primary malignant salivary gland tumors: A blinded review by an international panel. Cancer Cytopathol. 2020;128(6):392-402. doi:10.1002/cncy.22271

27. Trabzonlu L, Chatt G, McIntire PJ, et al. Telecytology validation: is there a recipe for everybody? J Am Soc Cytopathol. 2022;11(4):218-225. doi:10.1016/j.jasc.2022.03.001

28. Canberk S, Behzatoglu K, Caliskan CK, et al. The role of telecytology in the primary diagnosis of thyroid fine-needle aspiration specimens. Acta Cytol. 2020;64(4):323-331. doi:10.1159/000503914.

29. Archondakis S, Roma M, Kaladelfou E. Implementation of pre-captured videos for remote diagnosis of cervical cytology specimens. Cytopathology. 2021;32(3):338-343. doi:10.1111/cyt.12948

30. Lee ES, Kim IS, Choi JS, et al. Accuracy and reproducibility of telecytology diagnosis of cervical smears. A tool for quality assurance programs. Am J Clin Pathol. 2003;119(3):356-360. doi:10.1309/7ytvag4xnr48t75h

31. Dietz RL, Hartman DJ, Pantanowitz L. Systematic review of the use of telepathology during intraoperative consultation. Am J Clin Pathol. 2020;153(2):198-209. doi:10.1093/ajcp/aqz155

32. Bauer TW, Slaw RJ, McKenney JK, Patil DT. Validation of whole slide imaging for frozen section diagnosis in surgical pathology. J Pathol Inform. 2015;6:49. Published 2015 Aug 31. doi:10.4103/2153-3539.163988


33. Vosoughi A, Smith PT, Zeitouni JA, et al. Frozen section evaluation via dynamic real-time nonrobotic telepathology system in a university cancer center by resident/faculty cooperation team. Hum Pathol. 2018;78:144-150. doi:10.1016/j.humpath.2018.04.012

34. Mahe E, Ara S, Bishara M, et al. Intraoperative pathology consultation: error, cause and impact. Can J Surg. 2013;56(3):E13-E18. doi:10.1503/cjs.011112.

35. Farahani N, Parwani AV, Pantanowitz L. Whole slide imaging in pathology: advantages, limitations, and emerging perspectives. Pathol Lab Med Int. 2015;7:23-33. doi:10.2147/PLMI.S59826

36. Thorstenson S, Molin J, Lundström C. Implementation of large-scale routine diagnostics using whole slide imaging in Sweden: digital pathology experiences 2006-2013. J Pathol Inform. 2014;5(1):14. Published 2014 Mar 28. doi:10.4103/2153-3539.129452

37. Pantanowitz L, Wiley CA, Demetris A, et al. Experience with multimodality telepathology at the University of Pittsburgh Medical Center. J Pathol Inform. 2012;3:45. doi:10.4103/2153-3539.104907

38. Al Habeeb A, Evans A, Ghazarian D. Virtual microscopy using whole-slide imaging as an enabler for teledermatopathology: a paired consultant validation study. J Pathol Inform. 2012;3:2. doi:10.4103/2153-3539.93399

39. Al-Janabi S, Huisman A, Vink A, et al. Whole slide images for primary diagnostics in dermatopathology: a feasibility study. J Clin Pathol. 2012;65(2):152-158. doi:10.1136/jclinpath-2011-200277

40. Nielsen PS, Lindebjerg J, Rasmussen J, Starklint H, Waldstrøm M, Nielsen B. Virtual microscopy: an evaluation of its validity and diagnostic performance in routine histologic diagnosis of skin tumors. Hum Pathol. 2010;41(12):1770-1776. doi:10.1016/j.humpath.2010.05.015

41. Leinweber B, Massone C, Kodama K, et al. Telederma-topathology: a controlled study about diagnostic validity and technical requirements for digital transmission. Am J Dermatopathol. 2006;28(5):413-416. doi:10.1097/01.dad.0000211523.95552.86

42. Koch LH, Lampros JN, Delong LK, Chen SC, Woosley JT, Hood AF. Randomized comparison of virtual microscopy and traditional glass microscopy in diagnostic accuracy among dermatology and pathology residents. Hum Pathol. 2009;40(5):662-667. doi:10.1016/j.humpath.2008.10.009

43. Farris AB, Cohen C, Rogers TE, Smith GH. Whole slide imaging for analytical anatomic pathology and telepathology: practical applications today, promises, and perils. Arch Pathol Lab Med. 2017;141(4):542-550. doi:10.5858/arpa.2016-0265-SA

44. Chong T, Palma-Diaz MF, Fisher C, et al. The California Telepathology Service: UCLA’s experience in deploying a regional digital pathology subspecialty consultation network. J Pathol Inform. 2019;10:31. Published 2019 Sep 27. doi:10.4103/jpi.jpi_22_19

45. Meyer J, Paré G. Telepathology impacts and implementation challenges: a scoping review. Arch Pathol Lab Med. 2015;139(12):1550-1557. doi:10.5858/arpa.2014-0606-RA

46. Weinstein RS, Descour MR, Liang C, et al. Telepathology overview: from concept to implementation. Hum Pathol. 2001;32(12):1283-1299. doi:10.1053/hupa.2001.29643

47. Riley RS, Ben-Ezra JM, Massey D, Cousar J. The virtual blood film. Clin Lab Med. 2002;22(1):317-345. doi:10.1016/s0272-2712(03)00077-5

48. Garcia CA, Hanna M, Contis LC, Pantanowitz L, Hyman R. Sharing Cellavision blood smear images with clinicians via the electronic medical record. Blood. 2017;130(suppl 1):5586. doi:10.1182/blood.V130.Suppl_1.5586.5586

49. Goswami R, Pi D, Pal J, Cheng K, Hudoba De Badyn M. Performance evaluation of a dynamic telepathology system (Panoptiq) in the morphologic assessment of peripheral blood film abnormalities. Int J Lab Hematol. 2015;37(3):365-371. doi:10.1111/ijlh.12294

50. Rhoads DD, Mathison BA, Bishop HS, da Silva AJ, Pantanowitz L. Review of telemicrobiology. Arch Pathol Lab Med. 2016;140(4):362-370. doi:10.5858/arpa.2015-0116-RA

51. Nam S, Chong Y, Jung CK, et al. Introduction to digital pathology and computer-aided pathology. J Pathol Transl Med. 2020;54(2):125-134. doi:10.4132/jptm.2019.12.31

52. Houser D, Shadhin G, Anstotz R, et al. The Temple University Hospital Digital Pathology Corpus. IEEE Signal Process Med Biol Symp. 2018:1-7. doi:10.1109/SPMB.2018.8615619

53. Petersen J, Dalal S, Jhala D. Criticality of in-house preparation of viral transport medium in times of shortage during COVID-19 pandemic. Lab Med. 2021;52(2):e39-e45. doi:10.1093/labmed/lmaa099

54. Ranney ML, Griffeth V, Jha AK. Critical supply shortages—the need for ventilators and personal protective equipment during the Covid-19 pandemic. N Engl J Med. 2020;382(18):e41. doi:10.1056/NEJMp2006141

55. Ksinan Jiskrova G. Impact of COVID-19 pandemic on the workforce: from psychological distress to the Great Resignation. J Epidemiol Community Health. 2022;76(6):525-526. doi:10.1136/jech-2022-218826

56. Henriksen J, Kolognizak T, Houghton T, et al. Rapid validation of telepathology by an academic neuropathology practice during the COVID-19 pandemic. Arch Pathol Lab Med. 2020;144(11):1311-1320. doi:10.5858/arpa.2020-0372-SA

57. Ardon O, Reuter VE, Hameed M, et al. Digital pathology operations at an NYC tertiary cancer center during the first 4 months of COVID-19 pandemic response. Acad Pathol. 2021;8:23742895211010276. Published 2021 Apr 28. doi:10.1177/23742895211010276

58. Jajosky RP, Jajosky AN, Kleven DT, Singh G. Fewer seniors from United States allopathic medical schools are filling pathology residency positions in the Main Residency Match, 2008-2017. Hum Pathol. 2018;73:26-32. doi:10.1016/j.humpath.2017.11.014

59. Metter DM, Colgan TJ, Leung ST, Timmons CF, Park JY. Trends in the US and Canadian pathologist workforces from 2007 to 2017. JAMA Netw Open. 2019;2(5):e194337. Published 2019 May 3. doi:10.1001/jamanetworkopen.2019.4337

60. Murray CJL. COVID-19 will continue but the end of the pandemic is near. Lancet. 2022;399(10323):417-419. doi:10.1016/S0140-6736(22)00100-3

61. Ghosh A, Brown GT, Fontelo P. Telepathology at the Armed Forces Institute of Pathology: a retrospective review of consultations from 1996 to 1997. Arch Pathol Lab Med. 2018;142(2):248-252. doi:10.5858/arpa.2017-0055-OA

62. Dunn BE, Choi H, Almagro UA, Recla DL, Davis CW. Telepathology networking in VISN-12 of the Veterans Health Administration. Telemed J E Health. 2000;6(3):349-354. doi:10.1089/153056200750040200

63. Dunn BE, Almagro UA, Choi H, et al. Dynamic-robotic telepathology: Department of Veterans Affairs feasibility study. Hum Pathol. 1997;28(1):8-12. doi:10.1016/s0046-8177(97)90271-9

64. Agha Z, Lofgren RP, VanRuiswyk JV, Layde PM. Are patients at Veterans Affairs medical centers sicker? A comparative analysis of health status and medical resource use. Arch Intern Med. 2000;160(21):3252-3257. doi:10.1001/archinte.160.21.3252

65. Eibner C, Krull H, Brown KM, et al. Current and projected characteristics and unique health care needs of the patient population served by the Department of Veterans Affairs. Rand Health Q. 2016;5(4):13. Published 2016 May 9.

66. Morgan RO, Teal CR, Reddy SG, Ford ME, Ashton CM. Measurement in Veterans Affairs Health Services Research: veterans as a special population. Health Serv Res. 2005;40(5, pt 2):1573-1583. doi:10.1111/j.1475-6773.2005.00448

Author and Disclosure Information

Jeffrey M. Petersen, MDa,b; Nirag Jhala, MDc; Darshana N. Jhala, MDa,b

Correspondence: Darshana Jhala (darshana.jhala@pennmedicine.upenn.edu)

aCorporal Michael J Crescenz Veteran Affairs Medical Center, Philadelphia, Pennsylvania

bUniversity of Pennsylvania, Philadelphia

cTemple University Hospital, Philadelphia

Author contributions

Authors contributed equally to the manuscript.

Author disclosures

The authors report no actual or potential conflicts of interest or outside sources of funding with regard to this article.

Disclaimer

The opinions expressed herein are those of the authors and do not necessarily reflect those of Federal Practitioner, Frontline Medical Communications Inc., the US Government, or any of its agencies.

Issue: Federal Practitioner - 40(6)a

Pages: 186-193


Selection, Validation, and Implementation

WSI platforms vary in their characteristics and have several parameters, including but not limited to batch scanning vs continuous or random-access processing, throughput volume capacity, scan speed, cost, manual vs automatic loading of slides, image quality, slide capacity, flexibility for different slide sizes/features, telepathology capabilities once a slide is scanned, z-stacking, and regulatory approval status.8 Selection of the WSI device depends on need and cost considerations. For example, use for frozen section requires a faster scanning speed but does not generally require a high-throughput scanner.

Validation of telepathology by the testing site demonstrates that the new system performs as expected for its intended clinical use before being put into service and that the digital slides produced are acceptable for clinical diagnostic interpretation.11 The College of American Pathologists (CAP) has established WSI validation guidelines that are part of the published laboratory standard of care.11-13 An appropriate validation enables the benefits of telepathology while mitigating the risks.

There are 3 major CAP recommendations for validation. First, ≥ 60 cases should be included for each use case being validated, with 20 additional cases for relevant ancillary applications not included in the 60 cases. Second, diagnostic concordance (ideally ≥ 95%) should be established between digital and glass slides for the same observer. Third, there should be a 2-week washout period between the viewing of digital and glass slides (Table 2).12,13
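The numeric thresholds above lend themselves to a simple tally. As a minimal, purely illustrative sketch (the function name and data are hypothetical and are not part of the CAP guideline), a validation set of paired glass and digital diagnoses could be checked against the ≥ 60-case and ≥ 95% concordance criteria as follows:

```python
# Illustrative check of CAP-style WSI validation criteria (hypothetical data).
# Each pair holds one observer's glass-slide and digital-slide diagnoses for
# the same case, read >= 2 weeks apart (the washout period).

def validate_wsi(case_pairs, min_cases=60, min_concordance=0.95):
    """Return (passes, concordance) for a list of (glass_dx, digital_dx) pairs."""
    if len(case_pairs) < min_cases:
        return False, None  # too few cases to validate this use case
    concordant = sum(1 for glass, digital in case_pairs if glass == digital)
    concordance = concordant / len(case_pairs)
    return concordance >= min_concordance, concordance

# Hypothetical example: 60 cases, 58 concordant (about 96.7%) -> passes
pairs = [("benign", "benign")] * 58 + [("benign", "atypical")] * 2
ok, rate = validate_wsi(pairs)
```

An actual validation would, of course, also document case selection, ancillary applications, and the washout interval rather than a bare concordance rate.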

Neither glass nor digital slides are viewed during the washout period. In addition, there are 9 CAP good practice statements, including that all pathology laboratories implementing WSI technology should carry out appropriate validations, have adequately trained pathologists, and be able to address changes in the WSI system that could impact clinical results.12,13 This CAP guideline is an effective reference for medical laboratories validating WSI systems.2,11-13 Telepathology involves many technical, privacy/security, and facility-based specifications.2 Therefore, involvement of the relevant departments is warranted.2

Guidelines from the ATA establish that telepathology systems should be validated for clinical use, including non-WSI platforms.2 Published validations of other non-WSI platforms (such as by robotic or multimodality telepathology) have followed the structure proposed in the guidelines by CAP for validating WSI.14,15

Ensuring that all relevant responsibilities (clinical, facility, technical, training, documentation/archiving, quality management, and operations related) for the use of telepathology are met is another aspect of validation and implementation.2 Clinical responsibilities include an agreement between the sending (referring) and receiving (consulting) parties on the information to accompany the digital material.2 Per ATA clinical guidelines, this includes identification information, provision to the consulting pathologist of all relevant clinical data, provision of access to any needed and/or relevant diagnostic material, and the referrer's responsibility to ensure that the correct image/metadata was sent.2 Involved parties should be trained to manage the materials being transmitted.2

Facility responsibilities include maintaining the standard of care defined by the facility and regulatory agencies.2 The maintenance of accreditation, adherence to licensure requirements, and proper management of privileges to practice telepathology are also important.2 Technical responsibilities include ensuring a proper validation that meets the standard of care and covers use cases.2,11-13

All processes, training, and competencies should be followed and documented per standard facility operating procedures.2 ATA recommends that telepathology should result in a formal report for diagnostic consultations, maintain logs of telepathology interactions or disclaimer statements, and have an appropriate retention policy.2 The CAP recommends digital images used for primary diagnosis should be kept for 10 years if the original glass slides are not available.16 Once implemented, telepathology reports must be incorporated into the pathology and laboratory medicine department’s quality management plan for both the technical performance of the telepathology system and diagnostic performance of the pathologists using the system.2 Operations responsibilities include ensuring that the telepathology system is maintained according to vendor recommendations and regulatory standards. Appropriate provisions for space and associated needs should be developed in conjunction with the information technology team of the facility to ensure appropriate security, privacy, and regulatory compliance.2

Applications and Uses

Telecytology. Rapid real-time telecytology has been documented to be useful in rapid on-site evaluations (ROSE) of the adequacy of fine needle aspirations (FNA).17-21 Nevertheless, ROSE is time consuming, affects productivity in cytology laboratories, and is cost prohibitive given that current Medicare reimbursement is limited.17,22,23 Estimates of the time to provide ROSE for 1 procedure without telecytology range from 48.7 to 56.2 minutes.17,23 The use of telecytology significantly reduces pathologist ROSE time to about 12 minutes without losing quality, of which only an average of 7.5 minutes is spent by the cytopathologist on the ROSE diagnosis.17-21 ROSE also can be used at distant and remote locations to improve patient care.17-21 Multiple vendors provide real-time telecytology services. Innovations using smartphone adapters, digital cameras with their own IP addresses, and high-speed dedicated connections with viewing platforms on high-sensitivity monitors can facilitate ROSE to improve patient management.24,25 The successful, accurate use of ROSE has been described; however, there are currently no FDA-approved telepathology ROSE platforms.17-19,21-25
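As a rough illustration of the workload arithmetic implied by the figures above (the annual procedure volume below is hypothetical, chosen only for the example), the pathologist time freed by telecytology ROSE can be estimated as:

```python
# Rough estimate of pathologist time saved by telecytology ROSE, using the
# per-procedure figures cited in the text.
conventional_min = 48.7    # lower published estimate per procedure without telecytology
telecytology_min = 12.0    # approximate per-procedure total with telecytology
procedures_per_year = 500  # hypothetical laboratory volume, for illustration only

saved_hours = (conventional_min - telecytology_min) * procedures_per_year / 60
# roughly 306 pathologist-hours saved per year at this hypothetical volume
```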

To date, the FDA has not approved any telecytology whole slide scanner due to a lack of z-stacking capability in submitted scanners.7,21 Not all whole slide scanners offer z-stacking, and even in those that do, the time necessary to scan the entire slide with adequate z-stacking is too long to be clinically acceptable for many situations involving ROSE.21 WSI has also been used to develop international consensus for cytologic samples.26 Published recommendations for the validation of these other modalities before use follow the spirit of the CAP guidelines for validating WSI for diagnostic purposes (ie, multiple cases with high concordance rates) but vary on the exact number of slides and the acceptable concordance rate.21,27 For ROSE with a robotic microscope without any on-site cytology personnel, documented standardized training of nonpathology staff members, such as the radiologist or other physician performing the FNA procedure, may be needed to enable the performance of ROSE telecytology and ensure compliance with regulations.2,21 Besides ROSE, there are published validations for telecytology in primary diagnosis and QA, indicating a diagnostic role for telecytology in laboratories that have properly validated and implemented the laboratory-developed test.28-30

Frozen section. Telepathology has significant potential to improve access to frozen section consultation.5,31-33 Benefits include providing frozen section consultation at remote or off-site locations, increasing access to subspecialty consultation, improving workflow by eliminating the need to travel off-site for the frozen section case, saving staff work time, and providing educational opportunities for pathology trainees.5,31-33 In our experience, WSI with real-time viewing of frozen sections allows for the assessment of transplant tissues, an evaluation that generally occurs at night. Discrepancies between frozen section telepathology using WSI and the final diagnosis may occur; those specific to WSI can result from slide or image quality, internet connectivity, and lack of training in using the telepathology system.32 Other issues leading to discrepancies between the frozen section diagnosis and the final diagnosis also occur with the review of glass slides by light microscopy.34 Appropriate validation, training, implementation, and quality control for telepathology can help in reaping the benefits while mitigating the risks.2 In a large study comparing frozen section evaluation by telepathology with light microscopy, sensitivity and specificity were comparable, with a trend toward greater sensitivity by telepathology (sensitivity 0.92 and specificity 0.99 for telepathology vs 0.90 and 0.99 for light microscopy alone).33
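For readers less familiar with these metrics, the sensitivity and specificity quoted above come from the standard 2×2 comparison of the frozen section call against the final diagnosis. A small sketch, with hypothetical counts chosen only to reproduce rates similar to the telepathology figures (these are not the cited study's actual data):

```python
# Sensitivity and specificity from frozen section vs final-diagnosis counts.
# tp/fn/tn/fp counts below are hypothetical illustrations, not study data.

def sens_spec(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)  # true positives among all actual positives
    specificity = tn / (tn + fp)  # true negatives among all actual negatives
    return sensitivity, specificity

sens, spec = sens_spec(tp=92, fn=8, tn=99, fp=1)
# sens = 0.92, spec = 0.99
```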

Other applications. Evidence for efficacy in surgical pathology diagnosis led to FDA approval of 2 WSI platforms: the Philips IntelliSite Digital Pathology Solution in 2017 and the Leica Aperio AT2 DX in 2020.6-8 The use of WSI in surgical pathology has been successfully validated or used in clinical practice in several pathology laboratory settings, with documented benefits in the literature for primary and secondary diagnoses, QA, research, and education.6-8,35-45 Benefits of telepathology include improved ergonomics, access to real-time pathologic services in remote areas or during on-site pathologist absence, and expert second opinions. Telepathology also may reduce the risk of slide loss during transport, shorten turnaround time, reduce operating costs through workflow efficiencies, improve load balancing and virtual collaboration, and provide digital storage of slides that may be irreplaceable.3-8,35-45 Telepathology also has been shown to be useful for education, improving access to learning materials and increasing the quality of instructional materials at a lower cost.45 The increased ease of collaboration with remote experts and access to slide material for other pathologists improve QA capabilities.3-8,35-45 The availability of virtual slides is expected to promote further research in telepathology and pathology due to the increased availability of virtual material to researchers.1,5,46

Telehematology. Published validations have shown telepathology to be effective for hematopathology specimens, such as the peripheral blood smear; after proper validation and implementation as a laboratory-developed test, telehematology has demonstrated its potential in the laboratory.37,47-49

Telemicrobiology and Computer-Assisted Pathologic Diagnosis. Telemicrobiology also has been successfully used for clinical, educational, and QA purposes.50 The digitalization of slides involved in telepathology enables further innovation in machine learning for computer-assisted pathologic diagnosis (CAPD), which is already being used clinically for cervical Pap smears.20 An artificial intelligence (AI)–based algorithm analyzes the slides to identify cells of interest, which are presented to the cytopathologist for confirmation.20 However, expanding CAPD to a variety of specimen types and diagnostic situations, as well as to the safe and effective completion of accurate automated diagnoses, requires additional development.20,51,52 One of the key factors in machine learning for AI is the provision of a corpus of data.51,52 Public, open-source data sources have been limited in size, while private proprietary sources have highly restricted and expensive access; to address this, there is a current effort to build the world's largest public open-source digital pathology corpus at Temple University Hospital, which may help enable future innovations.52

Long-Term Trends/Applications

The COVID-19 pandemic has been unprecedented not only in its widespread morbidity and mortality, but also in its significant socioeconomic, health, lifestyle, societal, and workspace changes.53-57 Specifically, the pandemic introduced not only a need for social distancing and staff quarantines to prevent the spread of infection, but also a reduction in the workforce due to the stresses of COVID-19 (also known as the Great Resignation).55 Before the pandemic, there was already a downtrend in the number of pathologists in the US workforce.9,10,58,59 From 2007 to 2017, the number of active pathologists in the US declined by 17.5% despite the increasing national population, resulting in not only an absolute decrease in the number of pathologists, but also an increasing population served per pathologist.59 Since 2017, this downtrend has continued; given the increasing loss of active pathologists from the workforce and the decreasing training of new pathologists, this decline shows no signs of reversing even as the impact of the COVID-19 pandemic has begun to wane.9,10,58-60

The advantages of telepathology in enabling social distancing and reducing travel to remote sites are known.3-7,17 Given these advantages, some US medical centers successfully validated and implemented telepathology operations early in the COVID-19 pandemic to ease workflow and ensure continued operations.56,57 The use of telepathology also helps in balancing workload and continuing pathology operations despite workforce reductions, as cases no longer need to be signed out on site with glass slides but can instead be signed out at a remote laboratory. Although the impact of the COVID-19 pandemic on operations is decreasing, the capabilities for social distancing and reduced travel remain important both to improve operations and to ensure resiliency in response to similar potential events.3-7,17,60

Considering the long-term trends, the lessons of the COVID-19 pandemic, and the potential for future pandemics or other disasters, validating and implementing telepathology remains a reasonable choice for pathology practices looking to improve. Telepathology has been reported to be reliable or successfully implemented not only in practices serving the general population, but also at US Department of Veterans Affairs medical centers (VAMCs) and in the US Department of Defense Military Health System, which treat a veteran population.61-63 Although the veteran population differs from the general population in several characteristics, such as severity of disease, coexisting morbidities, and other history, with proper validation and implementation telepathology's usefulness extends across different pathology practice settings.35-43,61-66

Limitations of Telepathology

In telepathology’s current state, there are limitations despite its immense promise.6,35 These include initial capital costs, the additional training requirement, the additional time necessary to scan slides, technical challenges (ie, laboratory information system integration, color calibration, display artifacts, potential for small particle scanner omissions, and information technology dependence), the potential for slower evaluation per slide compared with optical microscopes, limitations of slide imaging (ie, z-stacking or lack of polarization on digital pathology), and occupational concerns regarding eye strain with increased computer monitor usage (ie, computer vision syndrome).6,35 In addition, there are few telepathology scanners with FDA approval for WSI.6-8

The improving technology of telepathology has made these limitations surmountable, including faster slide scanning and increasing digital storage capacity for large WSI files. Due to this improvement in technology, an increasing number of laboratory settings have adopted telepathology as its advantages have begun to outweigh its limitations.2-5 Additionally, a proper validation performed before implementing telepathology can help laboratories identify their unique challenges and troubleshoot and resolve limitations before use in clinical care.11-13 Continuing QA during use is important to ensure that telepathology performs as expected for clinical purposes despite its limitations.2

Conclusions

Telepathology is a promising technology that may improve pathology practice once properly validated and implemented.1-8 Though there are barriers to this validation and implementation, particularly the capital costs and training, there are several potential benefits, including increased productivity, cost savings, improved workflow, enhanced access to pathologic consultation, and adaptability of the pathology laboratory in an era of a decreased workforce and social distancing due to the COVID-19 pandemic.1-8,55,56 This potential applies across the wide spectrum of telepathology uses, from frozen section and telecytology (including ROSE) to primary and second opinion diagnoses.1-8,17-33 The benefits also extend to QA, education, and research, as diagnoses can be rereviewed with ease by specialty or second opinion consultation, and digital slides can be produced for educational and research purposes.3-8,35-45 Settings that treat the general population and those focused on the care of veterans or members of the armed forces have reported similar reliability or successful implementation.35-44,61-63 All in all, telepathology represents an innovation that may transform the practice of pathology tomorrow.

Advances in technology, including ubiquitous access to the internet and the capacity to transfer high-resolution representative images, have facilitated the adoption of telepathology by laboratories worldwide.1-5 Telepathology includes the use of telecommunication links that enable transmission of digital pathology images for primary diagnosis, quality assurance (QA), education, research, or second opinion diagnoses.3 This improvement has culminated in approvals by the US Food and Drug Administration (FDA) of whole slide imaging (WSI) systems for surgical pathology slides: specifically, the Philips IntelliSite Digital Pathology Solution in 2017 and the Leica Aperio AT2 DX in 2020.6-8 However, the approvals do not include telecytology due to lack of whole slide multiplanar scanning at different planes of focus or z-stacking capabilities.7

Long-term trends in pathology, specifically the slow reduction in the number of practicing pathologists available in the workforce compared with the total served population, along with the social distancing imperatives and disruptions brought about by the COVID-19 pandemic have made telepathology implementation pertinent to continue and improve pathology practice.8-10

Despite the initial capital equipment costs, telepathology has several advantages, including increasing productivity, saving costs, improving access to pathologist care, improving quality of care, and ease of second opinions (Figures 1 and 2; Table 1).2-5,6-8   This review will cover aspects of telepathology implementation for laboratories in light of the recent COVID-19 pandemic and its potential to improve pathology practice.

Description and Definitions

The primary modes of telepathology (static image telepathology, robotic telepathology, video microscopy, WSI, and multimodality telepathology) have been defined by the American Telemedicine Association (ATA).2 WSI has been particularly suited for telepathology due to the ability to view digital slides in high resolution at various magnifications. These image files can also be viewed and shared with ease with other observers. Also, they take a shorter time to view compared with the use of a robotic microscope.3

Selection, Validation, and Implementation

WSI platforms vary in their characteristics and have several parameters, including but not limited to batch scanning vs continuous or random-access processing, throughput volume capacities, scan speed, cost, manual vs automatic loading of slides, image quality, slide capacity, flexibility for different slide sizes/features, telepathology capabilities once slide scanned, z-stacking, and regulatory approval status.8 Selection of the WSI device is dependent on need and cost considerations. For example, use for frozen section requires faster scanning speed and does not generally require a high throughput scanner.

 

 

Validation of telepathology by the testing site demonstrates that the new system performs as expected for its intended clinical use before being put into service and that the digital slides produced are acceptable for clinical diagnostic interpretation.11 The College of American Pathologists (CAP) established WSI validation guidelines are part of the published laboratory standard of care.11-13 An appropriate validation enables the benefits of telepathology while mitigating the risks.

There are 3 major CAP recommendations for validation. First, ≥ 60 cases should be included for each use case being validated with 20 additional cases for relevant ancillary applications not included in the 60 cases. Second, diagnostic concordance (ideally ≥ 95%) should be established between digital and glass slides for the same observer. Third, there should be a 2-week washout period between the viewing of digital and glass slides (Table 2).12,13

Neither glass nor digital slides are viewed during the washout period. In addition, there are 9 CAP good practice statements, including that all pathology laboratories implementing WSI technology should carry out appropriate validations, have adequately trained pathologists, and be able to address changes in the WSI system that could impact clinical results.12,13 This CAP guideline is an effective reference for medical laboratories validating WSI systems.2,11-13 Telepathology involves many technical, privacy/security, and facility-based specifications.2 Therefore, involvement of the relevant departments is warranted.2

Guidelines from the ATA establish that telepathology systems should be validated for clinical use, including non-WSI platforms.2 Published validations of other non-WSI platforms (such as by robotic or multimodality telepathology) have followed the structure proposed in the guidelines by CAP for validating WSI.14,15

Ensuring that all relevant responsibilities (clinical, facility, technical, training, documentation/archiving, quality management, and operations related) for the use of telepathology are met is another aspect of validation and implementation.2 Clinical responsibilities include an agreement between the sending (referring) and receiving (consulting) parties on the information to accompany the digital material.2 From ATA clinical guidelines, this includes identification information, provision to the consulting pathologist of all relevant clinical data, provision to retrieve for access any needed and/or relevant diagnostic material, and responsibility by referrer that the correct image/metadata was sent.2 Involved parties should be trained to manage the materials being transmitted.2

Facility responsibilities include maintaining the standard of care defined by the facility and regulatory agencies.2 The maintenance of accreditation, adherence to licensure requirements, and proper management of privileges to practice telepathology are also important.2 Technical responsibilities include ensuring a proper validation that meets the standard of care and covers use cases.2,11-13

All processes, training, and competencies should be followed and documented per standard facility operating procedures.2 ATA recommends that telepathology should result in a formal report for diagnostic consultations, maintain logs of telepathology interactions or disclaimer statements, and have an appropriate retention policy.2 The CAP recommends digital images used for primary diagnosis should be kept for 10 years if the original glass slides are not available.16 Once implemented, telepathology reports must be incorporated into the pathology and laboratory medicine department’s quality management plan for both the technical performance of the telepathology system and diagnostic performance of the pathologists using the system.2 Operations responsibilities include ensuring that the telepathology system is maintained according to vendor recommendations and regulatory standards. Appropriate provisions for space and associated needs should be developed in conjunction with the information technology team of the facility to ensure appropriate security, privacy, and regulatory compliance.2

 

 

Applications and Uses

Telecytology. Rapid real-time telecytology has been documented to be useful in rapid on-site evaluations (ROSE) of the adequacy of fine needle aspirations (FNA).17-21 Nevertheless, current Medicare reimbursement is limited given that ROSE is cost prohibitive, time consuming, and affects productivity in cytology laboratories.17,22,23 Estimates of the time to provide ROSE for 1 procedure without telecytology range from 48.7 to 56.2 minutes.17,23 The use of telecytology significantly reduces pathologist ROSE time without losing quality to about 12 minutes, of which only an average of 7.5 minutes was spent by the cytopathologist for the ROSE diagnosis.17-21 ROSE also can be used for distant and remote locations to improve patient care.17-21 Multiple vendors provide real-time telecytology service. Innovations using smartphone adapters, digital cameras that could work as their own IP addresses, and connection with high-speed dedicated connections with viewing platforms on high-sensitivity monitors can facilitate ROSE to improve patient management.24,25 The successful accurate use of ROSE has been described; however, there are currently no FDA-approved telepathology ROSE platforms.17-19,21-25

To date, the FDA has not approved any telecytology whole slide scanner due to a lack of z-stacking capability in the submitted scanners.7,21 Not all whole slide scanners offer z-stacking, and even in those that do, scanning the entire slide with adequate z-stacking takes too long to be clinically acceptable for many situations involving ROSE.21 WSI has also been used to develop international consensus for cytologic samples.26 Published recommendations for validating these other modalities before use follow the spirit of the CAP guidelines for validation of WSI for diagnostic purposes (ie, multiple cases with high concordance rates) but vary on the exact number of slides and the acceptable concordance rate.21,27 For ROSE with a robotic microscope without any on-site cytology personnel, documented standardized training of nonpathology staff members, such as the radiologist or other physician performing the FNA procedure, may be needed to enable the performance of ROSE telecytology and ensure compliance with regulations.2,21 Besides ROSE, there are published validations for telecytology in primary diagnosis and QA, indicating a role for telecytology in diagnosis for laboratories that have properly validated and implemented the laboratory-developed test.28-30

Frozen section. Telepathology has significant potential to improve access to frozen section consultation.5,31-33 Benefits include providing frozen section consultation at remote or off-site locations, increasing access to subspecialty consultation, improving workflow by eliminating the need to travel off-site for frozen section cases, saving staff work time, and providing educational opportunities for pathology trainees.5,31-33 In our experience, WSI with real-time viewing of frozen sections allows for the assessment of transplant tissues, an evaluation that generally occurs at night. Discrepancies between the frozen section telepathology diagnosis using WSI and the final diagnosis may occur; those specific to WSI can result from slide or image quality, internet connectivity, and lack of training in using the telepathology system.32 Other causes of discrepancy between the frozen section and final diagnoses also occur with the review of glass slides by light microscopy.34 Appropriate validation, training, implementation, and quality control for telepathology can help in reaping the benefits while mitigating the risks.2 In a large study comparing frozen section evaluation by telepathology with light microscopy, sensitivity and specificity were comparable, with a trend toward greater sensitivity by telepathology (sensitivity and specificity of 0.92 and 0.99 for telepathology vs 0.90 and 0.99 for light microscopy alone, respectively).33
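For reference, the sensitivity and specificity figures cited follow the standard definitions, where TP, FN, TN, and FP denote true positives, false negatives, true negatives, and false positives relative to the final diagnosis:

```latex
\text{Sensitivity} = \frac{TP}{TP + FN}, \qquad
\text{Specificity} = \frac{TN}{TN + FP}
```

Thus, a sensitivity of 0.92 vs 0.90 means telepathology missed approximately 8% of truly positive frozen sections vs 10% for light microscopy, with both modalities producing false-positive calls in about 1% of truly negative cases (specificity 0.99).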

Other applications. Evidence of efficacy in surgical pathology diagnosis led to FDA approval of the Philips IntelliSite Digital Pathology Solution (2017) and Leica Aperio AT2 DX (2020) WSI platforms.6-8 The use of WSI in surgical pathology has been successfully validated or used in clinical practice in several pathology laboratory settings, with documented benefits in the literature for primary and secondary diagnoses, QA, research, and education.6-8,35-45 Benefits of telepathology include improved ergonomics, access to real-time pathologic services in remote areas or during on-site pathologist absence, and expert second opinions. Telepathology also may reduce the risk of slide loss during transport, shorten turnaround time, reduce operational costs through workflow efficiencies, improve load balancing and virtual collaboration, and provide digital storage of slides that may be irreplaceable.3-8,35-45 Telepathology has also been shown to be useful for education, improving access to learning materials and increasing the quality of instructional materials at a lower cost.45 The increased ease of collaboration with remote experts and access to slide material for other pathologists improve QA capabilities.3-8,35-45 The availability of virtual slides is expected to promote further research in telepathology and pathology due to the increased availability of virtual material to researchers.1,5,46

Telehematology. Published validations have shown telehematology to be effective for hematopathology specimens, such as the peripheral blood smear, and it has demonstrated potential for laboratories after proper validation and implementation as a laboratory-developed test.37,47-49

Telemicrobiology and Computer-Assisted Pathologic Diagnosis. Telemicrobiology also has been successfully used for clinical, educational, and QA purposes.50 The digitalization of slides involved in telepathology enables further innovation in machine learning for computer-assisted pathologic diagnosis (CAPD), which is already being used clinically for cervical Pap smears.20 An artificial intelligence (AI)–based algorithm analyzes the slides to identify cells of interest, which are presented to the cytopathologist for confirmation.20 However, expanding CAPD to a variety of specimen types and diagnostic situations, and to rendering accurate automated diagnoses safely and effectively, requires additional development.20,51,52 A key requirement for machine learning is the provision of a large corpus of data.51,52 Public, open-source data sources have been limited in size, while private proprietary sources have highly restricted and expensive access; to address this, there is a current effort at Temple University Hospital to build the world’s largest public open-source digital pathology corpus, which may help enable future innovations.52

Long-Term Trends/Applications

The COVID-19 pandemic has been unprecedented not only for its widespread morbidity and mortality, but also for the significant socioeconomic, health, lifestyle, societal, and workspace changes it brought.53-57 Specifically, the pandemic introduced not only a need for social distancing and staff quarantines to prevent the spread of infection, but also a reduction in the workforce due to the stresses of COVID-19 (also known as the Great Resignation).55 Even before the pandemic, the number of pathologists in the US workforce was trending downward.9,10,58,59 From 2007 to 2017, the number of active pathologists in the US declined by 17.5% despite the increasing national population, resulting in not only an absolute decrease in the number of pathologists, but also an increasing ratio of population served per pathologist.59 Since 2017, this downtrend has continued; given the increasing loss of active pathologists from the workforce and the decreasing training of new pathologists, the decline shows no signs of reversing even as the impact of the COVID-19 pandemic has begun to wane.9,10,58-60

The advantages of telepathology in enabling social distancing and reducing travel to remote sites are known.3-7,17 Given these advantages, some US medical centers successfully validated and implemented telepathology operations early in the COVID-19 pandemic to ease workflow and ensure continued operations.56,57 The use of telepathology also helps balance workload and continue pathology operations despite the workforce reduction, as cases no longer need to be signed out on site with glass slides but instead can be signed out at a remote laboratory. Although the impact of the COVID-19 pandemic on operations is decreasing, the capabilities for social distancing and reduced travel remain important both to improve operations and to ensure resiliency in response to similar potential events.3-7,17,60

Considering the long-term trends, the lessons of the COVID-19 pandemic, and the potential for future pandemics or other disasters, validating and implementing telepathology remains a reasonable choice for pathology practices looking to improve. A variety of practices, not only those serving the general population but also US Department of Veterans Affairs medical centers (VAMCs) and US Department of Defense Military Health System facilities treating a veteran population, can benefit from telepathology, which has been reported to be reliable and successfully implemented in these settings.61-63 Although the veteran population differs from the general population in several characteristics, such as severity of disease, coexisting morbidities, and other history, given proper validation and implementation, telepathology’s usefulness extends across different pathology practice settings.35-43,61-66

Limitations of Telepathology

In its current state, telepathology has limitations despite its immense promise.6,35 These include initial capital costs, additional training requirements, the additional time necessary to scan slides, technical challenges (eg, laboratory information system integration, color calibration, display artifacts, potential for small particle scanner omissions, and information technology dependence), potentially slower evaluation per slide compared with optical microscopes, limitations of slide imaging (eg, lack of z-stacking or polarization in digital pathology), and occupational concerns regarding eye strain with increased computer monitor usage (ie, computer vision syndrome).6,35 In addition, few telepathology scanners have FDA approval for WSI.6-8

The improving technology of telepathology has made these limitations surmountable, including faster slide scanning and increasing digital storage capacity for large WSI files. Due to this improvement, an increasing number of laboratory settings have adopted telepathology as its advantages have begun to outweigh its limitations.2-5 Additionally, proper validation performed before implementing telepathology can help laboratories identify their unique challenges, troubleshoot, and resolve limitations before use in clinical care.11-13 Continuing QA during use is important to ensure that telepathology performs as expected for clinical purposes despite its limitations.2

Conclusions

Telepathology is a promising technology that may improve pathology practice once properly validated and implemented.1-8 Though there are barriers to validation and implementation, particularly capital costs and training, there are several potential benefits, including increased productivity, cost savings, improved workflow, enhanced access to pathologic consultation, and adaptability of the pathology laboratory in an era of a decreased workforce and social distancing due to the COVID-19 pandemic.1-8,55,56 This potential applies across the wide spectrum of telepathology uses, from frozen section and telecytology (including ROSE) to primary and second opinion diagnoses.1-8,17-33 The benefits also extend to QA, education, and research: not only can diagnoses be rereviewed with ease for specialty or second opinion consultation, but digital slides can also be produced for educational and research purposes.3-8,35-45 Settings that treat the general population and those focused on the care of veterans or members of the armed forces have reported similar reliability and successful implementation.35-44,61-63 All in all, telepathology represents an innovation that may transform the practice of pathology tomorrow.

References

1. Weinstein RS. Prospects for telepathology. Hum Pathol. 1986;17(5):433-434. doi:10.1016/s0046-8177(86)80028-4

2. Pantanowitz L, Dickinson K, Evans AJ, et al. American Telemedicine Association clinical guidelines for telepathology. J Pathol Inform. 2014;5(1):39. Published 2014 Oct 21. doi:10.4103/2153-3539.143329

3. Farahani N, Pantanowitz L. Overview of telepathology. Surg Pathol Clin. 2015;8(2):223-231. doi:10.1016/j.path.2015.02.018

4. Petersen JM, Jhala D. Telepathology: a transforming practice for the efficient, safe, and best patient care at the regional Veteran Affairs medical center. Am J Clin Pathol. 2022;158(suppl 1):S97-S98. doi:10.1093/ajcp/aqac126.205

5. Bashshur RL, Krupinski EA, Weinstein RS, Dunn MR, Bashshur N. The empirical foundations of telepathology: evidence of feasibility and intermediate effects. Telemed J E Health. 2017;23(3):155-191. doi:10.1089/tmj.2016.0278

6. Jahn SW, Plass M, Moinfar F. Digital pathology: advantages, limitations and emerging perspectives. J Clin Med. 2020;9(11):3697. Published 2020 Nov 18. doi:10.3390/jcm9113697

7. Evans AJ, Bauer TW, Bui MM, et al. US Food and Drug Administration approval of whole slide imaging for primary diagnosis: a key milestone is reached and new questions are raised. Arch Pathol Lab Med. 2018;142(11):1383-1387. doi:10.5858/arpa.2017-0496-CP.

8. Patel A, Balis UGJ, Cheng J, et al. Contemporary whole slide imaging devices and their applications within the modern pathology department: a selected hardware review. J Pathol Inform. 2021;12:50. Published 2021 Dec 9. doi:10.4103/jpi.jpi_66_21

9. Association of American Medical Colleges. 2017 State Physician Workforce Data Book. November 2017. Accessed April 14, 2023. https://store.aamc.org/downloadable/download/sample/sample_id/30

10. Robboy SJ, Gross D, Park JY, et al. Reevaluation of the US pathologist workforce size. JAMA Netw Open. 2020;3(7):e2010648. Published 2020 Jul 1. doi:10.1001/jamanetworkopen.2020.10648

11. Pantanowitz L, Sinard JH, Henricks WH, et al. Validating whole slide imaging for diagnostic purposes in pathology: guideline from the College of American Pathologists Pathology and Laboratory Quality Center. Arch Pathol Lab Med. 2013;137(12):1710-1722. doi:10.5858/arpa.2013-0093-CP

12. Evans AJ, Brown RW, Bui MM, et al. Validating whole slide imaging systems for diagnostic purposes in pathology. Arch Pathol Lab Med. 2021;146(4):440-450. doi:10.5858/arpa.2020-0723-CP

13. Evans AJ, Lacchetti C, Reid K, Thomas NE. Validating whole slide imaging for diagnostic purposes in pathology: guideline update. College of American Pathologists. May 2021. Accessed April 13, 2023. https://documents.cap.org/documents/wsi-methodology.pdf

14. Chandraratnam E, Santos LD, Chou S, et al. Parathyroid frozen section interpretation via desktop telepathology systems: a validation study. J Pathol Inform. 2018;9:41. Published 2018 Dec 3. doi:10.4103/jpi.jpi_57_18

15. Thrall MJ, Rivera AL, Takei H, Powell SZ. Validation of a novel robotic telepathology platform for neuropathology intraoperative touch preparations. J Pathol Inform. 2014;5(1):21. Published 2014 Jul 28. doi:10.4103/2153-3539.137642

16. Balis UGJ, Williams CL, Cheng J, et al. Whole-slide imaging: thinking twice before hitting the delete key. AJSP: Reviews & Reports. 2018;23(6):249-250. doi:10.1097/PCR.0000000000000283

17. Kim B, Chhieng DC, Crowe DR, et al. Dynamic telecytopathology of on site rapid cytology diagnoses for pancreatic carcinoma. Cytojournal. 2006;3:27. Published 2006 Dec 11. doi:10.1186/1742-6413-3-27

18. Perez D, Stemmer MN, Khurana KK. Utilization of dynamic telecytopathology for rapid onsite evaluation of touch imprint cytology of needle core biopsy: diagnostic accuracy and pitfalls. Telemed J E Health. 2021;27(5):525-531. doi:10.1089/tmj.2020.0117

19. McCarthy EE, McMahon RQ, Das K, Stewart J 3rd. Internal validation testing for new technologies: bringing telecytopathology into the mainstream. Diagn Cytopathol. 2015;43(1):3-7. doi:10.1002/dc.23167

20. Marletta S, Treanor D, Eccher A, Pantanowitz L. Whole-slide imaging in cytopathology: state of the art and future directions. Diagn Histopathol (Oxf). 2021;27(11):425-430. doi:10.1016/j.mpdhp.2021.08.001

21. Lin O. Telecytology for rapid on-site evaluation: current status. J Am Soc Cytopathol. 2018;7(1):1-6. doi:10.1016/j.jasc.2017.10.002

22. Eloubeidi MA, Tamhane A, Jhala N, et al. Agreement between rapid onsite and final cytologic interpretations of EUS-guided FNA specimens: implications for the endosonographer and patient management. Am J Gastroenterol. 2006;101(12):2841-2847. doi:10.1111/j.1572-0241.2006.00852.x

23. Layfield LJ, Bentz JS, Gopez EV. Immediate on-site interpretation of fine-needle aspiration smears: a cost and compensation analysis. Cancer. 2001;93(5):319-322. doi:10.1002/cncr.9046

24. Fontelo P, Liu F, Yagi Y. Evaluation of a smartphone for telepathology: lessons learned. J Pathol Inform. 2015;6:35. Published 2015 Jun 23. doi:10.4103/2153-3539.158912

25. Lin O. Telecytology for rapid on-site evaluation: current status. J Am Soc Cytopathol. 2018;7(1):1-6. doi:10.1016/j.jasc.2017.10.002

26. Johnson DN, Onenerk M, Krane JF, et al. Cytologic grading of primary malignant salivary gland tumors: A blinded review by an international panel. Cancer Cytopathol. 2020;128(6):392-402. doi:10.1002/cncy.22271

27. Trabzonlu L, Chatt G, McIntire PJ, et al. Telecytology validation: is there a recipe for everybody? J Am Soc Cytopathol. 2022;11(4):218-225. doi:10.1016/j.jasc.2022.03.001

28. Canberk S, Behzatoglu K, Caliskan CK, et al. The role of telecytology in the primary diagnosis of thyroid fine-needle aspiration specimens. Acta Cytol. 2020;64(4):323-331. doi:10.1159/000503914.

29. Archondakis S, Roma M, Kaladelfou E. Implementation of pre-captured videos for remote diagnosis of cervical cytology specimens. Cytopathology. 2021;32(3):338-343. doi:10.1111/cyt.12948

30. Lee ES, Kim IS, Choi JS, et al. Accuracy and reproducibility of telecytology diagnosis of cervical smears. A tool for quality assurance programs. Am J Clin Pathol. 2003;119(3):356-360. doi:10.1309/7ytvag4xnr48t75h

31. Dietz RL, Hartman DJ, Pantanowitz L. Systematic review of the use of telepathology during intraoperative consultation. Am J Clin Pathol. 2020;153(2):198-209. doi:10.1093/ajcp/aqz155

32. Bauer TW, Slaw RJ, McKenney JK, Patil DT. Validation of whole slide imaging for frozen section diagnosis in surgical pathology. J Pathol Inform. 2015;6:49. Published 2015 Aug 31. doi:10.4103/2153-3539.163988

33. Vosoughi A, Smith PT, Zeitouni JA, et al. Frozen section evaluation via dynamic real-time nonrobotic telepathology system in a university cancer center by resident/faculty cooperation team. Hum Pathol. 2018;78:144-150. doi:10.1016/j.humpath.2018.04.012

34. Mahe E, Ara S, Bishara M, et al. Intraoperative pathology consultation: error, cause and impact. Can J Surg. 2013;56(3):E13-E18. doi:10.1503/cjs.011112.

35. Farahani N, Parwani AV, Pantanowitz L. Whole slide imaging in pathology: advantages, limitations, and emerging perspectives. Pathol Lab Med Int. 2015;7:23-33. doi:10.2147/PLMI.S59826

36. Thorstenson S, Molin J, Lundström C. Implementation of large-scale routine diagnostics using whole slide imaging in Sweden: digital pathology experiences 2006-2013. J Pathol Inform. 2014;5(1):14. Published 2014 Mar 28. doi:10.4103/2153-3539.129452

37. Pantanowitz L, Wiley CA, Demetris A, et al. Experience with multimodality telepathology at the University of Pittsburgh Medical Center. J Pathol Inform. 2012;3:45. doi:10.4103/2153-3539.104907

38. Al Habeeb A, Evans A, Ghazarian D. Virtual microscopy using whole-slide imaging as an enabler for teledermatopathology: a paired consultant validation study. J Pathol Inform. 2012;3:2. doi:10.4103/2153-3539.93399

39. Al-Janabi S, Huisman A, Vink A, et al. Whole slide images for primary diagnostics in dermatopathology: a feasibility study. J Clin Pathol. 2012;65(2):152-158. doi:10.1136/jclinpath-2011-200277

40. Nielsen PS, Lindebjerg J, Rasmussen J, Starklint H, Waldstrøm M, Nielsen B. Virtual microscopy: an evaluation of its validity and diagnostic performance in routine histologic diagnosis of skin tumors. Hum Pathol. 2010;41(12):1770-1776. doi:10.1016/j.humpath.2010.05.015

41. Leinweber B, Massone C, Kodama K, et al. Telederma-topathology: a controlled study about diagnostic validity and technical requirements for digital transmission. Am J Dermatopathol. 2006;28(5):413-416. doi:10.1097/01.dad.0000211523.95552.86

42. Koch LH, Lampros JN, Delong LK, Chen SC, Woosley JT, Hood AF. Randomized comparison of virtual microscopy and traditional glass microscopy in diagnostic accuracy among dermatology and pathology residents. Hum Pathol. 2009;40(5):662-667. doi:10.1016/j.humpath.2008.10.009

43. Farris AB, Cohen C, Rogers TE, Smith GH. Whole slide imaging for analytical anatomic pathology and telepathology: practical applications today, promises, and perils. Arch Pathol Lab Med. 2017;141(4):542-550. doi:10.5858/arpa.2016-0265-SA

44. Chong T, Palma-Diaz MF, Fisher C, et al. The California Telepathology Service: UCLA’s experience in deploying a regional digital pathology subspecialty consultation network. J Pathol Inform. 2019;10:31. Published 2019 Sep 27. doi:10.4103/jpi.jpi_22_19

45. Meyer J, Paré G. Telepathology impacts and implementation challenges: a scoping review. Arch Pathol Lab Med. 2015;139(12):1550-1557. doi:10.5858/arpa.2014-0606-RA

46. Weinstein RS, Descour MR, Liang C, et al. Telepathology overview: from concept to implementation. Hum Pathol. 2001;32(12):1283-1299. doi:10.1053/hupa.2001.29643

47. Riley RS, Ben-Ezra JM, Massey D, Cousar J. The virtual blood film. Clin Lab Med. 2002;22(1):317-345. doi:10.1016/s0272-2712(03)00077-5

48. Garcia CA, Hanna M, Contis LC, Pantanowitz L, Hyman R. Sharing Cellavision blood smear images with clinicians via the electronic medical record. Blood. 2017;130(suppl 1):5586. doi:10.1182/blood.V130.Suppl_1.5586.5586

49. Goswami R, Pi D, Pal J, Cheng K, Hudoba De Badyn M. Performance evaluation of a dynamic telepathology system (Panoptiq) in the morphologic assessment of peripheral blood film abnormalities. Int J Lab Hematol. 2015;37(3):365-371. doi:10.1111/ijlh.12294

50. Rhoads DD, Mathison BA, Bishop HS, da Silva AJ, Pantanowitz L. Review of telemicrobiology. Arch Pathol Lab Med. 2016;140(4):362-370. doi:10.5858/arpa.2015-0116-RA

51. Nam S, Chong Y, Jung CK, et al. Introduction to digital pathology and computer-aided pathology. J Pathol Transl Med. 2020;54(2):125-134. doi:10.4132/jptm.2019.12.31

52. Houser D, Shadhin G, Anstotz R, et al. The Temple University Hospital Digital Pathology Corpus. IEEE Signal Process Med Biol Symp. 2018:1-7. doi:10.1109/SPMB.2018.8615619

53. Petersen J, Dalal S, Jhala D. Criticality of in-house preparation of viral transport medium in times of shortage during COVID-19 pandemic. Lab Med. 2021;52(2):e39-e45. doi:10.1093/labmed/lmaa099

54. Ranney ML, Griffeth V, Jha AK. Critical supply shortages—the need for ventilators and personal protective equipment during the Covid-19 pandemic. N Engl J Med. 2020;382(18):e41. doi:10.1056/NEJMp2006141

55. Ksinan Jiskrova G. Impact of COVID-19 pandemic on the workforce: from psychological distress to the Great Resignation. J Epidemiol Community Health. 2022;76(6):525-526. doi:10.1136/jech-2022-218826

56. Henriksen J, Kolognizak T, Houghton T, et al. Rapid validation of telepathology by an academic neuropathology practice during the COVID-19 pandemic. Arch Pathol Lab Med. 2020;144(11):1311-1320. doi:10.5858/arpa.2020-0372-SA

57. Ardon O, Reuter VE, Hameed M, et al. Digital pathology operations at an NYC tertiary cancer center during the first 4 months of COVID-19 pandemic response. Acad Pathol. 2021;8:23742895211010276. Published 2021 Apr 28. doi:10.1177/23742895211010276

58. Jajosky RP, Jajosky AN, Kleven DT, Singh G. Fewer seniors from United States allopathic medical schools are filling pathology residency positions in the Main Residency Match, 2008-2017. Hum Pathol. 2018;73:26-32. doi:10.1016/j.humpath.2017.11.014

59. Metter DM, Colgan TJ, Leung ST, Timmons CF, Park JY. Trends in the US and Canadian pathologist workforces from 2007 to 2017. JAMA Netw Open. 2019;2(5):e194337. Published 2019 May 3. doi:10.1001/jamanetworkopen.2019.4337

60. Murray CJL. COVID-19 will continue but the end of the pandemic is near. Lancet. 2022;399(10323):417-419. doi:10.1016/S0140-6736(22)00100-3

61. Ghosh A, Brown GT, Fontelo P. Telepathology at the Armed Forces Institute of Pathology: a retrospective review of consultations from 1996 to 1997. Arch Pathol Lab Med. 2018;142(2):248-252. doi:10.5858/arpa.2017-0055-OA

62. Dunn BE, Choi H, Almagro UA, Recla DL, Davis CW. Telepathology networking in VISN-12 of the Veterans Health Administration. Telemed J E Health. 2000;6(3):349-354. doi:10.1089/153056200750040200

63. Dunn BE, Almagro UA, Choi H, et al. Dynamic-robotic telepathology: Department of Veterans Affairs feasibility study. Hum Pathol. 1997;28(1):8-12. doi:10.1016/s0046-8177(97)90271-9

64. Agha Z, Lofgren RP, VanRuiswyk JV, Layde PM. Are patients at Veterans Affairs medical centers sicker? A comparative analysis of health status and medical resource use. Arch Intern Med. 2000;160(21):3252-3257. doi:10.1001/archinte.160.21.3252

65. Eibner C, Krull H, Brown KM, et al. Current and projected characteristics and unique health care needs of the patient population served by the Department of Veterans Affairs. Rand Health Q. 2016;5(4):13. Published 2016 May 9.

66. Morgan RO, Teal CR, Reddy SG, Ford ME, Ashton CM. Measurement in Veterans Affairs Health Services Research: veterans as a special population. Health Serv Res. 2005;40(5, pt 2):1573-1583. doi:10.1111/j.1475-6773.2005.00448

References

1. Weinstein RS. Prospects for telepathology. Hum Pathol. 1986;17(5):433-434. doi:10.1016/s0046-8177(86)80028-4

2. Pantanowitz L, Dickinson K, Evans AJ, et al. American Telemedicine Association clinical guidelines for telepathology. J Pathol Inform. 2014;5(1):39. Published 2014 Oct 21. doi:10.4103/2153-3539.143329

3. Farahani N, Pantanowitz L. Overview of telepathology. Surg Pathol Clin. 2015;8(2):223-231. doi:10.1016/j.path. 2015.02.018 4. Petersen JM, Jhala D. Telepathology: a transforming practice for the efficient, safe, and best patient care at the regional Veteran Affairs medical center. Am J Clin Pathol. 2022;158(suppl 1):S97-S98. doi:10.1093/ajcp/aqac126.205

5. Bashshur RL, Krupinski EA, Weinstein RS, Dunn MR, Bashshur N. The empirical foundations of telepathology: evidence of feasibility and intermediate effects. Telemed J E Health. 2017;23(3):155-191. doi:10.1089/tmj.2016.0278

6. Jahn SW, Plass M, Moinfar F. Digital pathology: advantages, limitations and emerging perspectives. J Clin Med. 2020;9(11):3697. Published 2020 Nov 18. doi:10.3390/jcm9113697

7. Evans AJ, Bauer TW, Bui MM, et al. US Food and Drug Administration approval of whole slide imaging for primary diagnosis: a key milestone is reached and new questions are raised. Arch Pathol Lab Med. 2018;142(11):1383-1387. doi:10.5858/arpa.2017-0496-CP.

8. Patel A, Balis UGJ, Cheng J, et al. Contemporary whole slide imaging devices and their applications within the modern pathology department: a selected hardware review. J Pathol Inform. 2021;12:50. Published 2021 Dec 9. doi:10.4103/jpi.jpi_66_21

9. Association of American Medical Colleges. 2017 State Physician Workforce Data Book. November 2017. Accessed April 14, 2023. https://store.aamc.org/downloadable/download/sample/sample_id/30

10. Robboy SJ, Gross D, Park JY, et al. Reevaluation of the US pathologist workforce size. JAMA Netw Open. 2020;3(7):e2010648. Published 2020 Jul 1. doi:10.1001/jamanetworkopen.2020.10648

11. Pantanowitz L, Sinard JH, Henricks WH, et al. Validating whole slide imaging for diagnostic purposes in pathology: guideline from the College of American Pathologists Pathology and Laboratory Quality Center. Arch Pathol Lab Med. 2013;137(12):1710-1722. doi:10.5858/arpa.2013-0093-CP

12. Evans AJ, Brown RW, Bui MM, et al. Validating whole slide imaging systems for diagnostic purposes in pathology. Arch Pathol Lab Med. 2021;146(4):440-450. doi:10.5858/arpa.2020-0723-CP

13. Evans AJ, Lacchetti C, Reid K, Thomas NE. Validating whole slide imaging for diagnostic purposes in pathology: guideline update. College of American Pathologists. May 2021. Accessed April 13, 2023. https://documents.cap.org/documents/wsi-methodology.pdf

14. Chandraratnam E, Santos LD, Chou S, et al. Parathyroid frozen section interpretation via desktop telepathology systems: a validation study. J Pathol Inform. 2018;9:41. Published 2018 Dec 3. doi:10.4103/jpi.jpi_57_18

15. Thrall MJ, Rivera AL, Takei H, Powell SZ. Validation of a novel robotic telepathology platform for neuropathology intraoperative touch preparations. J Pathol Inform. 2014;5(1):21. Published 2014 Jul 28. doi:10.4103/2153-3539.137642

16. Balis UGJ, Williams CL, Cheng J, et al. Whole-Slide Imaging: Thinking Twice Before Hitting the Delete Key. AJSP: Reviews & Reports. 2018;23(6):p 249-250. doi:10.1097/PCR.0000000000000283

17. Kim B, Chhieng DC, Crowe DR, et al. Dynamic telecytopathology of on site rapid cytology diagnoses for pancreatic carcinoma. Cytojournal. 2006;3:27. Published 2006 Dec 11. doi:10.1186/1742-6413-3-27

18. Perez D, Stemmer MN, Khurana KK. Utilization of dynamic telecytopathology for rapid onsite evaluation of touch imprint cytology of needle core biopsy: diagnostic accuracy and pitfalls. Telemed J E Health. 2021;27(5):525-531. doi:10.1089/tmj.2020.0117

19. McCarthy EE, McMahon RQ, Das K, Stewart J 3rd. Internal validation testing for new technologies: bringing telecytopathology into the mainstream. Diagn Cytopathol. 2015;43(1):3-7. doi:10.1002/dc.23167

20. Marletta S, Treanor D, Eccher A, Pantanowitz L. Whole-slide imaging in cytopathology: state of the art and future directions. Diagn Histopathol (Oxf). 2021;27(11):425-430. doi:10.1016/j.mpdhp.2021.08.001

21. Lin O. Telecytology for rapid on-site evaluation: current status. J Am Soc Cytopathol. 2018;7(1):1-6. doi:10.1016/j.jasc.2017.10.002

22. Eloubeidi MA, Tamhane A, Jhala N, et al. Agreement between rapid onsite and final cytologic interpretations of EUS-guided FNA specimens: implications for the endosonographer and patient management. Am J Gastroenterol. 2006;101(12):2841-2847. doi:10.1111/j.1572-0241.2006.00852.x

23. Layfield LJ, Bentz JS, Gopez EV. Immediate on-site interpretation of fine-needle aspiration smears: a cost and compensation analysis. Cancer. 2001;93(5):319-322. doi:10.1002/cncr.9046

24. Fontelo P, Liu F, Yagi Y. Evaluation of a smartphone for telepathology: lessons learned. J Pathol Inform. 2015;6:35. Published 2015 Jun 23. doi:10.4103/2153-3539.158912

25. Lin O. Telecytology for rapid on-site evaluation: current status. J Am Soc Cytopathol. 2018;7(1):1-6. doi:10.1016/j.jasc.2017.10.002

26. Johnson DN, Onenerk M, Krane JF, et al. Cytologic grading of primary malignant salivary gland tumors: A blinded review by an international panel. Cancer Cytopathol. 2020;128(6):392-402. doi:10.1002/cncy.22271

27. Trabzonlu L, Chatt G, McIntire PJ, et al. Telecytology validation: is there a recipe for everybody? J Am Soc Cytopathol. 2022;11(4):218-225. doi:10.1016/j.jasc.2022.03.001

28. Canberk S, Behzatoglu K, Caliskan CK, et al. The role of telecytology in the primary diagnosis of thyroid fine-needle aspiration specimens. Acta Cytol. 2020;64(4):323-331. doi:10.1159/000503914.

29. Archondakis S, Roma M, Kaladelfou E. Implementation of pre-captured videos for remote diagnosis of cervical cytology specimens. Cytopathology. 2021;32(3):338-343. doi:10.1111/cyt.12948

30. Lee ES, Kim IS, Choi JS, et al. Accuracy and reproducibility of telecytology diagnosis of cervical smears. A tool for quality assurance programs. Am J Clin Pathol. 2003;119(3):356-360. doi:10.1309/7ytvag4xnr48t75h

31. Dietz RL, Hartman DJ, Pantanowitz L. Systematic review of the use of telepathology during intraoperative consultation. Am J Clin Pathol. 2020;153(2):198-209. doi:10.1093/ajcp/aqz155

32. Bauer TW, Slaw RJ, McKenney JK, Patil DT. Validation of whole slide imaging for frozen section diagnosis in surgical pathology. J Pathol Inform. 2015;6:49. Published 2015 Aug 31. doi:10.4103/2153-3539.163988

<--pagebreak-->

33. Vosoughi A, Smith PT, Zeitouni JA, et al. Frozen section evaluation via dynamic real-time nonrobotic telepathology system in a university cancer center by resident/faculty cooperation team. Hum Pathol. 2018;78:144-150. doi:10.1016/j.humpath.2018.04.012

34. Mahe E, Ara S, Bishara M, et al. Intraoperative pathology consultation: error, cause and impact. Can J Surg. 2013;56(3):E13-E18. doi:10.1503/cjs.011112

35. Farahani N, Parwani AV, Pantanowitz L. Whole slide imaging in pathology: advantages, limitations, and emerging perspectives. Pathol Lab Med Int. 2015;7:23-33. doi:10.2147/PLMI.S59826

36. Thorstenson S, Molin J, Lundström C. Implementation of large-scale routine diagnostics using whole slide imaging in Sweden: digital pathology experiences 2006-2013. J Pathol Inform. 2014;5(1):14. Published 2014 Mar 28. doi:10.4103/2153-3539.129452

37. Pantanowitz L, Wiley CA, Demetris A, et al. Experience with multimodality telepathology at the University of Pittsburgh Medical Center. J Pathol Inform. 2012;3:45. doi:10.4103/2153-3539.104907

38. Al Habeeb A, Evans A, Ghazarian D. Virtual microscopy using whole-slide imaging as an enabler for teledermatopathology: a paired consultant validation study. J Pathol Inform. 2012;3:2. doi:10.4103/2153-3539.93399

39. Al-Janabi S, Huisman A, Vink A, et al. Whole slide images for primary diagnostics in dermatopathology: a feasibility study. J Clin Pathol. 2012;65(2):152-158. doi:10.1136/jclinpath-2011-200277

40. Nielsen PS, Lindebjerg J, Rasmussen J, Starklint H, Waldstrøm M, Nielsen B. Virtual microscopy: an evaluation of its validity and diagnostic performance in routine histologic diagnosis of skin tumors. Hum Pathol. 2010;41(12):1770-1776. doi:10.1016/j.humpath.2010.05.015

41. Leinweber B, Massone C, Kodama K, et al. Teledermatopathology: a controlled study about diagnostic validity and technical requirements for digital transmission. Am J Dermatopathol. 2006;28(5):413-416. doi:10.1097/01.dad.0000211523.95552.86

42. Koch LH, Lampros JN, Delong LK, Chen SC, Woosley JT, Hood AF. Randomized comparison of virtual microscopy and traditional glass microscopy in diagnostic accuracy among dermatology and pathology residents. Hum Pathol. 2009;40(5):662-667. doi:10.1016/j.humpath.2008.10.009

43. Farris AB, Cohen C, Rogers TE, Smith GH. Whole slide imaging for analytical anatomic pathology and telepathology: practical applications today, promises, and perils. Arch Pathol Lab Med. 2017;141(4):542-550. doi:10.5858/arpa.2016-0265-SA

44. Chong T, Palma-Diaz MF, Fisher C, et al. The California Telepathology Service: UCLA’s experience in deploying a regional digital pathology subspecialty consultation network. J Pathol Inform. 2019;10:31. Published 2019 Sep 27. doi:10.4103/jpi.jpi_22_19

45. Meyer J, Paré G. Telepathology impacts and implementation challenges: a scoping review. Arch Pathol Lab Med. 2015;139(12):1550-1557. doi:10.5858/arpa.2014-0606-RA

46. Weinstein RS, Descour MR, Liang C, et al. Telepathology overview: from concept to implementation. Hum Pathol. 2001;32(12):1283-1299. doi:10.1053/hupa.2001.29643

47. Riley RS, Ben-Ezra JM, Massey D, Cousar J. The virtual blood film. Clin Lab Med. 2002;22(1):317-345. doi:10.1016/s0272-2712(03)00077-5

48. Garcia CA, Hanna M, Contis LC, Pantanowitz L, Hyman R. Sharing Cellavision blood smear images with clinicians via the electronic medical record. Blood. 2017;130(suppl 1):5586. doi:10.1182/blood.V130.Suppl_1.5586.5586

49. Goswami R, Pi D, Pal J, Cheng K, Hudoba De Badyn M. Performance evaluation of a dynamic telepathology system (Panoptiq) in the morphologic assessment of peripheral blood film abnormalities. Int J Lab Hematol. 2015;37(3):365-371. doi:10.1111/ijlh.12294

50. Rhoads DD, Mathison BA, Bishop HS, da Silva AJ, Pantanowitz L. Review of telemicrobiology. Arch Pathol Lab Med. 2016;140(4):362-370. doi:10.5858/arpa.2015-0116-RA

51. Nam S, Chong Y, Jung CK, et al. Introduction to digital pathology and computer-aided pathology. J Pathol Transl Med. 2020;54(2):125-134. doi:10.4132/jptm.2019.12.31

52. Houser D, Shadhin G, Anstotz R, et al. The Temple University Hospital Digital Pathology Corpus. IEEE Signal Process Med Biol Symp. 2018:1-7. doi:10.1109/SPMB.2018.8615619

53. Petersen J, Dalal S, Jhala D. Criticality of in-house preparation of viral transport medium in times of shortage during COVID-19 pandemic. Lab Med. 2021;52(2):e39-e45. doi:10.1093/labmed/lmaa099

54. Ranney ML, Griffeth V, Jha AK. Critical supply shortages—the need for ventilators and personal protective equipment during the Covid-19 pandemic. N Engl J Med. 2020;382(18):e41. doi:10.1056/NEJMp2006141

55. Ksinan Jiskrova G. Impact of COVID-19 pandemic on the workforce: from psychological distress to the Great Resignation. J Epidemiol Community Health. 2022;76(6):525-526. doi:10.1136/jech-2022-218826

56. Henriksen J, Kolognizak T, Houghton T, et al. Rapid validation of telepathology by an academic neuropathology practice during the COVID-19 pandemic. Arch Pathol Lab Med. 2020;144(11):1311-1320. doi:10.5858/arpa.2020-0372-SA

57. Ardon O, Reuter VE, Hameed M, et al. Digital pathology operations at an NYC tertiary cancer center during the first 4 months of COVID-19 pandemic response. Acad Pathol. 2021;8:23742895211010276. Published 2021 Apr 28. doi:10.1177/23742895211010276

58. Jajosky RP, Jajosky AN, Kleven DT, Singh G. Fewer seniors from United States allopathic medical schools are filling pathology residency positions in the Main Residency Match, 2008-2017. Hum Pathol. 2018;73:26-32. doi:10.1016/j.humpath.2017.11.014

59. Metter DM, Colgan TJ, Leung ST, Timmons CF, Park JY. Trends in the US and Canadian pathologist workforces from 2007 to 2017. JAMA Netw Open. 2019;2(5):e194337. Published 2019 May 3. doi:10.1001/jamanetworkopen.2019.4337

60. Murray CJL. COVID-19 will continue but the end of the pandemic is near. Lancet. 2022;399(10323):417-419. doi:10.1016/S0140-6736(22)00100-3

61. Ghosh A, Brown GT, Fontelo P. Telepathology at the Armed Forces Institute of Pathology: a retrospective review of consultations from 1996 to 1997. Arch Pathol Lab Med. 2018;142(2):248-252. doi:10.5858/arpa.2017-0055-OA

62. Dunn BE, Choi H, Almagro UA, Recla DL, Davis CW. Telepathology networking in VISN-12 of the Veterans Health Administration. Telemed J E Health. 2000;6(3):349-354. doi:10.1089/153056200750040200

63. Dunn BE, Almagro UA, Choi H, et al. Dynamic-robotic telepathology: Department of Veterans Affairs feasibility study. Hum Pathol. 1997;28(1):8-12. doi:10.1016/s0046-8177(97)90271-9

64. Agha Z, Lofgren RP, VanRuiswyk JV, Layde PM. Are patients at Veterans Affairs medical centers sicker? A comparative analysis of health status and medical resource use. Arch Intern Med. 2000;160(21):3252-3257. doi:10.1001/archinte.160.21.3252


65. Eibner C, Krull H, Brown KM, et al. Current and projected characteristics and unique health care needs of the patient population served by the Department of Veterans Affairs. Rand Health Q. 2016;5(4):13. Published 2016 May 9.

66. Morgan RO, Teal CR, Reddy SG, Ford ME, Ashton CM. Measurement in Veterans Affairs Health Services Research: veterans as a special population. Health Serv Res. 2005;40(5, pt 2):1573-1583. doi:10.1111/j.1475-6773.2005.00448

Issue
Federal Practitioner - 40(6)a
Page Number
186-193

EULAR PsA recommendations update emphasizes safety, nonmusculoskeletal manifestations


 

AT EULAR 2023

– Safety considerations, particularly regarding the use of Janus kinase (JAK) inhibitors, are of utmost importance in the 2023 update to recommendations for managing psoriatic arthritis (PsA) by the European Alliance of Associations for Rheumatology (EULAR). Additionally, the selection of therapy should now take into account the complete clinical presentation, explicitly considering nonmusculoskeletal manifestations.

Presenting the updated recommendations, Laure Gossec, MD, PhD, professor of rheumatology at Pitié-Salpêtrière Hospital and Sorbonne University, Paris, emphasized an increasingly manifestation-oriented approach that integrates a growing range of available drugs in a stepwise manner to optimize the balance between safety and efficacy and achieve the highest quality of care. The updates were developed over the past 8 months, guided by a comprehensive review of drug efficacy based on 38 publications covering 18 drugs, as well as a safety review encompassing 24 publications.
 

Safety considerations with JAK inhibitors

Expanding on the existing six overarching principles from the 2019 recommendations, the PsA EULAR recommendations now introduce a seventh principle: “The choice of treatment should consider safety considerations regarding individual modes of action to optimize the benefit-risk profile.”

This addition was prompted by recent safety data on JAK inhibitors, which revealed serious potential side effects, including heart attacks, blood clots, cancer, and severe infections, and prompted the European Medicines Agency to restrict their use. As the new principle indicates, safety considerations have been incorporated into several recommendations.

For instance, in the context of peripheral arthritis, JAK inhibitors may now be considered if there is an inadequate response to at least one conventional synthetic disease-modifying antirheumatic drug (csDMARD) such as methotrexate, sulfasalazine, or leflunomide, and at least one biologic DMARD (bDMARD).

Alternatively, JAK inhibitors may be utilized when bDMARDs are not suitable for other reasons. However, EULAR now emphasizes caution whenever JAK inhibitors are mentioned. Specifically, “careful consideration is necessary for patients aged 65 or above, current or past long-time smokers, individuals with a history of atherosclerotic cardiovascular disease or other cardiovascular risk factors, those with other malignancy risk factors, or individuals with a known risk for venous thromboembolism.”
 

Consider nonmusculoskeletal manifestations in treatment decisions

In another significant update, EULAR now recommends that the choice of therapy also consider nonmusculoskeletal manifestations associated with PsA. “There is a notable shift in perspective here,” Dr. Gossec told this news organization. Clinically relevant skin involvement should prompt the use of IL-17A, IL-17A/F, IL-23, or IL-12/23 inhibitors, while uveitis should be treated with tumor necrosis factor (TNF) inhibitors.

In the case of inflammatory bowel disease, EULAR advises the use of anti-TNF agents, IL-12/23 or IL-23 inhibitors, or a JAK inhibitor. The recommended course of action within each treatment category is not ranked in order of preference, but EULAR emphasizes the importance of following EMA recommendations and considering safety.
 

Systemic glucocorticoids removed

Certain medications have been removed from the recommendations, reflecting the heightened focus on treatment safety. The use of systemic glucocorticoids as adjunctive therapy is no longer recommended. “We always had reservations about their use, and now we have eliminated them. We are aware that they are still utilized, with 30% of patients in Germany, for instance, receiving low doses of glucocorticoids. However, the long-term efficacy/safety balance of glucocorticoids is unfavorable in any disease, particularly in patients with psoriatic arthritis and multiple comorbidities,” Dr. Gossec explained.

 

 

NSAIDs and local glucocorticoids are now limited to specific patient populations, namely those affected by oligoarthritis without poor prognostic factors, entheseal disease, or predominant axial disease. Their use should be short-term, generally no longer than 4 weeks. Polyarthritis or oligoarthritis with poor prognostic factors should instead be treated directly with csDMARDs.
 

No specific biologic treatment order recommended for peripheral arthritis

Regarding patients with peripheral arthritis, recent efficacy data have led EULAR to refrain from recommending any specific order of preference for the use of bDMARDs, which encompass TNF inhibitors and drugs targeting the IL-17 and IL-12/23 pathways. “We lack the data to propose an order of preference in patients with peripheral arthritis. Different classes of molecules exhibit efficacy in joint inflammation, generally resulting in a 50% response rate and similar overall effects,” said Dr. Gossec, referencing head-to-head trials between biologics that yielded very comparable results, such as the EXCEED trial or SPIRIT-H2H trial.

The updated recommendations now consider two IL-23p19 inhibitors, guselkumab (Tremfya) and risankizumab (Skyrizi), the JAK inhibitor upadacitinib (Rinvoq), and the very recently EMA-approved bimekizumab (Bimzelx), an IL-17A/F double inhibitor.

The recommendation for patients with mono- or oligoarthritis and poor prognostic factors now aligns with the previous recommendations for polyarthritis: A csDMARD should be initiated promptly, with a preference for methotrexate if significant skin involvement is present. New data suggest that methotrexate may be beneficial for enthesitis, achieving resolution in approximately 30% of patients. When considering treatment options, JAK inhibitors may also be taken into account, with safety considerations in mind.

In cases of clinically relevant axial disease and an inadequate response to NSAIDs, therapy with an IL-17A inhibitor, a TNF inhibitor, an IL-17A/F inhibitor, or a JAK inhibitor may be considered. This approach now aligns with the most recent axial spondyloarthritis recommendation from EULAR and the Assessment of SpondyloArthritis international Society (ASAS).
 

Which disease manifestation to treat first?

During the discussion, chairwoman Uta Kiltz, MD, PhD, a rheumatologist at Rheumatism Center Ruhrgebiet, Herne, Germany, and clinical lecturer at Ruhr University Bochum, inquired about identifying the primary manifestation to guide the course of action.

“Psoriatic arthritis is highly heterogeneous, and determining the predominant manifestation is sometimes challenging,” Dr. Gossec said. “However, we believe that a certain order of preference is necessary when making treatment decisions. Starting with peripheral arthritis, which can lead to structural damage, allows for treatment selection based on that aspect. If peripheral arthritis is not present, attention should be directed towards axial disease, ensuring the presence of actual inflammation rather than solely axial pain, as mechanical origin axial pain can occur due to the patient’s age.”

David Liew, MBBS, PhD, consultant rheumatologist and clinical pharmacologist at Austin Health in Melbourne, commented on the update to this news organization: “We are fortunate to have a wide range of targeted therapy options for psoriatic arthritis, and these guidelines reflect this abundance of choices. They emphasize the importance of selecting therapies based on specific disease manifestations and tailoring care to each patient’s unique type of psoriatic arthritis. It’s worth noting that some changes in these guidelines were influenced by regulatory changes following ORAL Surveillance. In an era of numerous options, we can afford to be selective at times.”

Regarding safety concerns and JAK inhibitors, Dr. Liew added: “It is not surprising to see these guidelines impose certain restrictions on the use of JAK inhibitors, especially in psoriatic arthritis, where other therapies offer distinct advantages. Until high-quality evidence convincingly points away from a class effect, we can expect to see similar provisions in many more guidelines.”

Many of the recommendations’ authors report financial relationships with one or more pharmaceutical companies. These include AbbVie, Amgen, Biogen, Bristol-Myers Squibb, Boehringer Ingelheim, Celgene, Celltrion, Chugai, Galapagos, Gilead, GlaxoSmithKline, Janssen, Leo, Lilly, Medac, Merck, Merck Sharp & Dohme, Novartis, Pfizer, R-Pharma, Regeneron, Roche, Sandoz, Sanofi, Takeda, UCB, and Viatris.

EULAR funded the development of the recommendations.

A version of this article originally appeared on Medscape.com.

Meeting/Event
Publications
Topics
Sections
Meeting/Event
Meeting/Event

 

AT EULAR 2023

– Safety considerations, particularly regarding the use of Janus kinase (JAK) inhibitors, are of utmost importance in the 2023 update to recommendations for managing psoriatic arthritis (PsA) by the European Alliance of Associations for Rheumatology (EULAR). Additionally, the selection of therapy should now take into account the complete clinical presentation, explicitly considering nonmusculoskeletal manifestations.

Dr. Laure Gossec
Presenting the updated recommendations, Laure Gossec, MD, PhD, professor of rheumatology at Pitié-Salpétriere Hospital and Sorbonne University, Paris, emphasized an increasingly manifestation-oriented approach, integrating a growing range of available drugs in a stepwise manner to optimize the balance between safety and efficacy and achieve the highest quality of care. These updates were developed over the past 8 months, guided by a comprehensive review of drug efficacy based on 38 publications covering 18 drugs, as well as a safety review encompassing 24 publications.
 

Safety considerations with JAK inhibitors

Expanding on the existing six overarching principles from the 2019 recommendations, the PsA EULAR recommendations now introduce a seventh principle: “The choice of treatment should consider safety considerations regarding individual modes of action to optimize the benefit-risk profile.”

This addition was prompted by recent safety data on JAK inhibitors, which revealed serious potential side effects, such as heart attacks, blood clots, cancer, and severe infections, that recently prompted the European Medicines Agency to restrict their use. As indicated by the new principle, safety considerations have been incorporated into several recommendations.

For instance, in the context of peripheral arthritis, JAK inhibitors may now be considered if there is an inadequate response to at least one conventional synthetic disease-modifying antirheumatic drug (csDMARD) such as methotrexate, sulfasalazine, or leflunomide, and at least one biologic DMARD (bDMARD).

Alternatively, JAK inhibitors may be utilized when bDMARDs are not suitable for other reasons. However, EULAR now emphasizes caution whenever JAK inhibitors are mentioned. Specifically, “careful consideration is necessary for patients aged 65 or above, current or past long-time smokers, individuals with a history of atherosclerotic cardiovascular disease or other cardiovascular risk factors, those with other malignancy risk factors, or individuals with a known risk for venous thromboembolism.”
 

Consider nonmusculoskeletal manifestations in treatment decisions

In another significant update, EULAR now recommends that the choice of therapy should also consider nonmusculoskeletal manifestations associated with PsA. “There is a notable shift in perspective here,” Dr. Gossec told this news organization. Clinically relevant skin involvement should prompt the use of IL-17A or IL-17A/F or IL-23 or IL-12/23 inhibitors, while uveitis should be treated with tumor necrosis factor (TNF) inhibitors.

In the case of inflammatory bowel disease, EULAR advises the use of anti-TNF agents, IL-12/23 or IL-23 inhibitors, or a JAK inhibitor. The recommended course of action within each treatment category is not ranked in order of preference, but EULAR emphasizes the importance of following EMA recommendations and considering safety.
 

Systemic glucocorticoids removed

Certain medications have been removed from the recommendations, reflecting the heightened focus on treatment safety. The use of systemic glucocorticoids as adjunctive therapy is no longer recommended. “We always had reservations about their use, and now we have eliminated them. We are aware that they are still utilized, with 30% of patients in Germany, for instance, receiving low doses of glucocorticoids. However, the long-term efficacy/safety balance of glucocorticoids is unfavorable in any disease, particularly in patients with psoriatic arthritis and multiple comorbidities,” Dr. Gossec explained.

 

 

NSAIDs and local glucocorticoids are now limited to specific patient populations, namely those affected by oligoarthritis without poor prognostic factors, entheseal disease, or predominant axial disease. Their use should be short-term, generally no longer than 4 weeks. Polyarthritis or oligoarthritis with poor prognostic factors should instead be treated directly with csDMARDs.
 

No specific biologic treatment order recommended for peripheral arthritis

Regarding patients with peripheral arthritis, recent efficacy data have led EULAR to refrain from recommending any specific order of preference for the use of bDMARDs, which encompass TNF inhibitors and drugs targeting the IL-17 and IL-12/23 pathways. “We lack the data to propose an order of preference in patients with peripheral arthritis. Different classes of molecules exhibit efficacy in joint inflammation, generally resulting in a 50% response rate and similar overall effects,” said Dr. Gossec, referencing head-to-head trials between biologics that yielded very comparable results, such as the EXCEED trial or SPIRIT-H2H trial.

The updated recommendations now consider two IL-23p19 inhibitors, guselkumab (Tremfya) and risankizumab (Skyrizi), the JAK inhibitor upadacitinib (Rinvoq), and the very recently EMA-approved bimekizumab (Bimzelx), an IL-17A/F double inhibitor.

The recommendation for patients with mono- or oligoarthritis and poor prognostic factors now aligns with the previous recommendations for polyarthritis: A csDMARD should be initiated promptly, with a preference for methotrexate if significant skin involvement is present. New data suggest that methotrexate may be beneficial for enthesitis, achieving resolution in approximately 30% of patients. When considering treatment options, JAK inhibitors may also be taken into account, with safety considerations in mind.

In cases of clinically relevant axial disease and an inadequate response to NSAIDs, therapy with an IL-17A inhibitor, a TNF inhibitor, an IL-17A/F inhibitor, or a JAK inhibitor may be considered. This approach now aligns with the most recent axial spondyloarthritis recommendation from EULAR and the Assessment of SpondyloArthritis international Society (ASAS).
 

Which disease manifestation to treat first?

During the discussion, chairwoman Uta Kiltz, MD, PhD, a rheumatologist at Rheumatism Center Ruhrgebiet, Herne, Germany, and clinical lecturer at Ruhr University Bochum, inquired about identifying the primary manifestation to guide the course of action.

“Psoriatic arthritis is highly heterogeneous, and determining the predominant manifestation is sometimes challenging,” Dr. Gossec said. “However, we believe that a certain order of preference is necessary when making treatment decisions. Starting with peripheral arthritis, which can lead to structural damage, allows for treatment selection based on that aspect. If peripheral arthritis is not present, attention should be directed towards axial disease, ensuring the presence of actual inflammation rather than solely axial pain, as mechanical origin axial pain can occur due to the patient’s age.”

David Liew, MBBS, PhD, consultant rheumatologist and clinical pharmacologist at Austin Health in Melbourne, commented on the update to this news organization: “We are fortunate to have a wide range of targeted therapy options for psoriatic arthritis, and these guidelines reflect this abundance of choices. They emphasize the importance of selecting therapies based on specific disease manifestations and tailoring care to each patient’s unique type of psoriatic arthritis. It’s worth noting that some changes in these guidelines were influenced by regulatory changes following ORAL Surveillance. In an era of numerous options, we can afford to be selective at times.”

Regarding safety concerns and JAK inhibitors, Dr. Liew added: “It is not surprising to see these guidelines impose certain restrictions on the use of JAK inhibitors, especially in psoriatic arthritis, where other therapies offer distinct advantages. Until high-quality evidence convincingly points away from a class effect, we can expect to see similar provisions in many more guidelines.”

Many of the recommendations’ authors report financial relationships with one or more pharmaceutical companies. These include AbbVie, Amgen, Biogen, Bristol-Myers Squibb, Boehringer Ingelheim, Celgene, Celltrion, Chugai, Galapagos, Gilead, GlaxoSmithKline, Janssen, Leo, Lilly, Medac, Merck, Merck Sharp & Dohme, Novartis, Pfizer, R-Pharma, Regeneron, Roche, Sandoz, Sanofi, Takeda, UCB, and Viatris.

EULAR funded the development of the recommendations.

A version of this article originally appeared on Medscape.com.

 

AT EULAR 2023

– Safety considerations, particularly regarding the use of Janus kinase (JAK) inhibitors, are of utmost importance in the 2023 update to recommendations for managing psoriatic arthritis (PsA) by the European Alliance of Associations for Rheumatology (EULAR). Additionally, the selection of therapy should now take into account the complete clinical presentation, explicitly considering nonmusculoskeletal manifestations.

Dr. Laure Gossec
Presenting the updated recommendations, Laure Gossec, MD, PhD, professor of rheumatology at Pitié-Salpétriere Hospital and Sorbonne University, Paris, emphasized an increasingly manifestation-oriented approach, integrating a growing range of available drugs in a stepwise manner to optimize the balance between safety and efficacy and achieve the highest quality of care. These updates were developed over the past 8 months, guided by a comprehensive review of drug efficacy based on 38 publications covering 18 drugs, as well as a safety review encompassing 24 publications.
 

Safety considerations with JAK inhibitors

Expanding on the existing six overarching principles from the 2019 recommendations, the PsA EULAR recommendations now introduce a seventh principle: “The choice of treatment should consider safety considerations regarding individual modes of action to optimize the benefit-risk profile.”

This addition was prompted by recent safety data on JAK inhibitors, which revealed serious potential side effects, such as heart attacks, blood clots, cancer, and severe infections, that recently prompted the European Medicines Agency to restrict their use. As indicated by the new principle, safety considerations have been incorporated into several recommendations.

For instance, in the context of peripheral arthritis, JAK inhibitors may now be considered if there is an inadequate response to at least one conventional synthetic disease-modifying antirheumatic drug (csDMARD) such as methotrexate, sulfasalazine, or leflunomide, and at least one biologic DMARD (bDMARD).

Alternatively, JAK inhibitors may be utilized when bDMARDs are not suitable for other reasons. However, EULAR now emphasizes caution whenever JAK inhibitors are mentioned. Specifically, “careful consideration is necessary for patients aged 65 or above, current or past long-time smokers, individuals with a history of atherosclerotic cardiovascular disease or other cardiovascular risk factors, those with other malignancy risk factors, or individuals with a known risk for venous thromboembolism.”
 

Consider nonmusculoskeletal manifestations in treatment decisions

In another significant update, EULAR now recommends that the choice of therapy should also consider nonmusculoskeletal manifestations associated with PsA. “There is a notable shift in perspective here,” Dr. Gossec told this news organization. Clinically relevant skin involvement should prompt the use of IL-17A or IL-17A/F or IL-23 or IL-12/23 inhibitors, while uveitis should be treated with tumor necrosis factor (TNF) inhibitors.

In the case of inflammatory bowel disease, EULAR advises the use of anti-TNF agents, IL-12/23 or IL-23 inhibitors, or a JAK inhibitor. The recommended course of action within each treatment category is not ranked in order of preference, but EULAR emphasizes the importance of following EMA recommendations and considering safety.
 

Systemic glucocorticoids removed

Certain medications have been removed from the recommendations, reflecting the heightened focus on treatment safety. The use of systemic glucocorticoids as adjunctive therapy is no longer recommended. “We always had reservations about their use, and now we have eliminated them. We are aware that they are still utilized, with 30% of patients in Germany, for instance, receiving low doses of glucocorticoids. However, the long-term efficacy/safety balance of glucocorticoids is unfavorable in any disease, particularly in patients with psoriatic arthritis and multiple comorbidities,” Dr. Gossec explained.

 

 

NSAIDs and local glucocorticoids are now limited to specific patient populations, namely those affected by oligoarthritis without poor prognostic factors, entheseal disease, or predominant axial disease. Their use should be short-term, generally no longer than 4 weeks. Polyarthritis or oligoarthritis with poor prognostic factors should instead be treated directly with csDMARDs.
 

No specific biologic treatment order recommended for peripheral arthritis

Regarding patients with peripheral arthritis, recent efficacy data have led EULAR to refrain from recommending any specific order of preference for the use of bDMARDs, which encompass TNF inhibitors and drugs targeting the IL-17 and IL-12/23 pathways. “We lack the data to propose an order of preference in patients with peripheral arthritis. Different classes of molecules exhibit efficacy in joint inflammation, generally resulting in a 50% response rate and similar overall effects,” said Dr. Gossec, referencing head-to-head trials between biologics that yielded very comparable results, such as the EXCEED trial or SPIRIT-H2H trial.

The updated recommendations now consider two IL-23p19 inhibitors, guselkumab (Tremfya) and risankizumab (Skyrizi), the JAK inhibitor upadacitinib (Rinvoq), and the very recently EMA-approved bimekizumab (Bimzelx), an IL-17A/F double inhibitor.

The recommendation for patients with mono- or oligoarthritis and poor prognostic factors now aligns with the previous recommendations for polyarthritis: A csDMARD should be initiated promptly, with a preference for methotrexate if significant skin involvement is present. New data suggest that methotrexate may be beneficial for enthesitis, achieving resolution in approximately 30% of patients. When considering treatment options, JAK inhibitors may also be taken into account, with safety considerations in mind.

In cases of clinically relevant axial disease and an inadequate response to NSAIDs, therapy with an IL-17A inhibitor, a TNF inhibitor, an IL-17A/F inhibitor, or a JAK inhibitor may be considered. This approach now aligns with the most recent axial spondyloarthritis recommendation from EULAR and the Assessment of SpondyloArthritis international Society (ASAS).
 

Which disease manifestation to treat first?

During the discussion, chairwoman Uta Kiltz, MD, PhD, a rheumatologist at Rheumatism Center Ruhrgebiet, Herne, Germany, and clinical lecturer at Ruhr University Bochum, inquired about identifying the primary manifestation to guide the course of action.

“Psoriatic arthritis is highly heterogeneous, and determining the predominant manifestation is sometimes challenging,” Dr. Gossec said. “However, we believe that a certain order of preference is necessary when making treatment decisions. Starting with peripheral arthritis, which can lead to structural damage, allows for treatment selection based on that aspect. If peripheral arthritis is not present, attention should be directed towards axial disease, ensuring the presence of actual inflammation rather than solely axial pain, as mechanical origin axial pain can occur due to the patient’s age.”

David Liew, MBBS, PhD, consultant rheumatologist and clinical pharmacologist at Austin Health in Melbourne, commented on the update to this news organization: “We are fortunate to have a wide range of targeted therapy options for psoriatic arthritis, and these guidelines reflect this abundance of choices. They emphasize the importance of selecting therapies based on specific disease manifestations and tailoring care to each patient’s unique type of psoriatic arthritis. It’s worth noting that some changes in these guidelines were influenced by regulatory changes following ORAL Surveillance. In an era of numerous options, we can afford to be selective at times.”

Regarding safety concerns and JAK inhibitors, Dr. Liew added: “It is not surprising to see these guidelines impose certain restrictions on the use of JAK inhibitors, especially in psoriatic arthritis, where other therapies offer distinct advantages. Until high-quality evidence convincingly points away from a class effect, we can expect to see similar provisions in many more guidelines.”

Many of the recommendations’ authors report financial relationships with one or more pharmaceutical companies. These include AbbVie, Amgen, Biogen, Bristol-Myers Squibb, Boehringer Ingelheim, Celgene, Celltrion, Chugai, Galapagos, Gilead, GlaxoSmithKline, Janssen, Leo, Lilly, Medac, Merck, Merck Sharp & Dohme, Novartis, Pfizer, R-Pharma, Regeneron, Roche, Sandoz, Sanofi, Takeda, UCB, and Viatris.

EULAR funded the development of the recommendations.

A version of this article originally appeared on Medscape.com.


Big boost in sodium excretion with HF diuretic protocol 


In patients with acute heart failure, a urine sodium-guided diuretic protocol, currently recommended in guidelines from the Heart Failure Association of the European Society of Cardiology (HFA-ESC), led to significant increases in natriuresis and diuresis over 2 days in the prospective ENACT-HF clinical trial.

The guideline protocol was based on a 2019 HFA position paper with expert consensus, but it had not been tested prospectively, Jeroen Dauw, MD, of AZ Sint-Lucas Ghent (Belgium), explained in a presentation at HFA-ESC 2023.

“We had 282 millimoles of sodium excretion after one day, which is an increase of 64%, compared with standard of care,” Dr. Dauw told meeting attendees. “We wanted to power for 15%, so we’re way above it, with a P value of lower than 0.001.”

The effect was consistent across predefined subgroups, he said. “In addition, there’s an even higher benefit in patients with a lower eGFR [estimated glomerular filtration rate] and a higher home dose of loop diuretics, which might signal more diuretic resistance and more benefit of the protocol.”

After 2 days, the investigators saw 52% higher natriuresis and 33% higher diuresis, compared with usual care.

In an interview, Dr. Dauw said, “The protocol is feasible, safe, and very effective. Cardiologists might consider how to implement a similar protocol in their center to improve the care of their acute heart failure patients.”
 

Twice the oral home dose

The investigators conducted a multicenter, open-label, nonrandomized pragmatic trial at 29 centers in 18 countries globally. “We aimed to recruit 500 to detect a 15% difference in natriuresis,” Dr. Dauw said in his presentation, “but because we were a really low-budget trial, we had to stop after 3 years of recruitment.”

In all, 401 patients participated: 254 in the standard-of-care (SOC) arm and 147 in the protocol arm. The imbalance reflects the sequential, two-phase design, in which patients in the SOC arm were recruited first.

Patients’ mean age was 70 years, 38% were women, and all had at least one sign of volume overload. They had been on a maintenance daily diuretic dose of 40 mg of furosemide for a month or more, and their NT-proBNP was above 1,000.

In phase 1 of the study, all centers treated 10 consecutive patients according to the local standard of care, at the discretion of the physician. In phase 2, the centers again recruited and treated at least 10 consecutive patients, this time according to the standardized diuretic protocol.

In the protocol phase, patients were treated with twice the oral home dose as an IV bolus. “This meant if, for example, you have 40 mg of furosemide at home, then you receive 80 mg as a first bolus,” Dr. Dauw told attendees. A spot urine sample was taken after 2 hours, and the response was evaluated after 6 hours. A urine sodium above 50 millimoles per liter was considered a good response.

On the second day, patients were reevaluated in the morning using urine output as a measure of diuretic response. If it was above 3 L, the same bolus was repeated twice daily, with 6-12 hours between administrations.

As noted, after one day, natriuresis was 174 millimoles in the SOC arm versus 282 millimoles in the protocol group – an increase of 64%. The effect was consistent across subgroups, and those with a lower eGFR and a higher home dose of loop diuretics benefited more.

Furthermore, Dr. Dauw said, there was no interaction on the endpoints with SGLT2 inhibitor use at baseline.

After two days, natriuresis was 52% higher in the protocol group and diuresis was 33% higher.

However, there was no significant difference in weight loss and no difference in the congestion score.

“We did expect to see a difference in weight loss between the study groups, as higher natriuresis and diuresis would normally be associated with higher weight loss in the protocol group,” Dr. Dauw told this news organization. “However, looking back at the study design, weight was collected from the electronic health records and not rigorously collected by study nurses. Previous studies have shown discrepancies between fluid loss and weight loss, so this is an ‘explainable’ finding.”

Participants also had a relatively high congestion score at baseline, with edema above the knee and also some pleural effusion, he told meeting attendees. Therefore, it might take more time to see a change in congestion score in those patients.

The protocol also led to a shorter length of stay – one day less in the hospital – and was very safe on renal endpoints, Dr. Dauw concluded.

A session chair asked why only patients already on diuretics were included in the study, noting that in his clinic, about half of the admissions are de novo.

Dr. Dauw said that patients already taking diuretics chronically would benefit most from the protocol. “If patients are diuretic-naive, they probably will respond well to whatever you do; if you just give a higher dose, they will respond well,” he said. “We expected that the largest benefit would be in patients already taking diuretics because they have a higher chance of not responding well.”

“There also was a big difference in the starting dose,” he added. “In the SOC arm, the baseline dose was about 60 mg, whereas we gave 120 mg, and we could already see a high difference in the effect. So, in those patients, I think the gain is bigger if you follow the protocol.”
 

 

 

More data coming

Looking ahead, “we only showed efficacy in the first 2 days of treatment and a shorter length of stay, probably reflecting a faster decongestion, but we don’t know for sure,” Dr. Dauw told this news organization.

“It would be important to have a study where the protocol is followed until full decongestion is reached,” he said. “That way, we can directly prove that decongestion is better and/or faster with the protocol.”

“A good decongestive strategy is one that is fast, safe and effective in decreasing signs and symptoms that patients suffer from,” he added. “We believe our protocol can achieve that, but our study is only one piece of the puzzle.”

More data on natriuresis-guided decongestion is coming this year, he said, with the PUSH-AHF study from Groningen, the European DECONGEST study, and the U.S. ESCALATE study.

The study had no funding. Dr. Dauw declared no relevant financial relationships.

A version of this article originally appeared on Medscape.com.


Article Source

FROM HFA-ESC 2023


Burnout threatens primary care workforce and doctors’ mental health


Melanie Gray Miller, a 30-year-old physician, wiped away tears as she described the isolation she felt after losing a beloved patient.
 

“It was at the end of a night shift, when it seems like bad things always happen,” said Dr. Miller, who is training to become a pediatrician.

The infant had been sick for months in the Medical University of South Carolina’s pediatric intensive care unit and the possibility that he might not improve was obvious, Dr. Miller recalled during an April meeting with physicians and hospital administrators. But the suddenness of his death still caught her off guard.

“I have family and friends that I talk to about things,” she said. “But no one truly understands.”

Doctors don’t typically take time to grieve at work. But during that recent meeting, Dr. Miller and her colleagues opened up about the insomnia, emotional exhaustion, trauma, and burnout they experienced from their time in the pediatric ICU.

“This is not a normal place,” Grant Goodrich, the hospital system’s director of ethics, said to the group, acknowledging an occupational hazard the industry often downplays. “Most people don’t see kids die.”

The recurring conversation, scheduled for early-career doctors coming off month-long pediatric ICU rotations, is one way the hospital helps staffers cope with stress, according to Alyssa Rheingold, a licensed clinical psychologist who leads its resiliency program.

“Often the focus is to teach somebody how to do yoga and take a bath,” she said. “That’s not at all what well-being is about.”

Dr. Miller says working in the hospital’s pediatric intensive care unit can be tough. “In medicine, we’re just expected to be resilient 24/7,” she says. The trauma and stress from patients dying can be particularly hard to process.

Burnout in the health care industry is a widespread problem that long predates the COVID-19 pandemic, though the chaos introduced by the coronavirus’s spread made things worse, physicians and psychologists said. Health systems across the country are trying to boost morale and keep clinicians from quitting or retiring early, but the stakes are higher than workforce shortages.

Rates of physician suicide, partly fueled by burnout, have been a concern for decades. And while burnout occurs across medical specialties, some studies have shown that primary care doctors, such as pediatricians and family physicians, may run a higher risk.

“Why go into primary care when you can make twice the money doing something with half the stress?” said Daniel Crummett, a retired primary care doctor who lives in North Carolina. “I don’t know why anyone would go into primary care.”

Doctors say they are fed up with demands imposed by hospital administrators and health insurance companies, and they’re concerned about the notoriously grueling shifts assigned to medical residents during the early years of their careers. A long-standing stigma keeps physicians from prioritizing their own mental health, while their jobs require them to routinely grapple with death, grief, and trauma. The culture of medicine encourages them to simply bear it.

“Resiliency is a cringe word for me,” Dr. Miller said. “In medicine, we’re just expected to be resilient 24/7. I don’t love that culture.”

And though the pipeline of physicians entering the profession is strong, the ranks of doctors in the United States aren’t growing fast enough to meet future demand, according to the American Medical Association. That’s why burnout exacerbates workforce shortages and, if it continues, may limit the ability of some patients to access even basic care. A 2021 report published by the Association of American Medical Colleges projects the United States will be short as many as 48,000 primary care physicians by 2034, a higher number than any other single medical specialty.

A survey published last year by The Physicians Foundation, a nonprofit focused on improving health care, found that more than half of the 1,501 responding doctors didn’t have positive feelings about the current or future state of the medical profession. More than 20% said they wanted to retire within a year.

Similarly, in a 2022 AMA survey of 11,000 doctors and other medical professionals, more than half reported feeling burned out and indicated they were experiencing a great deal of stress.

Those numbers appear to be even higher in primary care. Even before the pandemic, 70% of primary care providers and 89% of primary care residents reported feelings of burnout.

“Everyone in health care feels overworked,” said Gregg Coodley, a primary care physician in Portland, Ore., and author of the book “Patients in Peril: The Demise of Primary Care in America.”

“I’m not saying there aren’t issues for other specialists, too, but in primary care, it’s the worst problem,” he said.

The high level of student debt most medical school graduates carry, combined with salaries more than four times the U.S. average, deters many physicians from quitting medicine midcareer. Even primary care doctors, whose salaries are among the lowest of all medical specialties, are paid significantly more than the average American worker. That’s why, instead of leaving the profession in their 30s or 40s, doctors often stay in their jobs but retire early.

“We go into medicine to help people, to take care of people, to do good in the world,” said Dr. Crummett, who retired from the Duke University hospital system in 2020 when he turned 65.

Dr. Crummett said he would have enjoyed working until he was 70, if not for the bureaucratic burdens of practicing medicine, including needing to get prior authorization from insurance companies before providing care, navigating cumbersome electronic health record platforms, and logging hours of administrative work outside the exam room.

“I enjoyed seeing patients. I really enjoyed my coworkers,” he said. “The administration was certainly a major factor in burnout.”

Jean Antonucci, a primary care doctor in rural Maine who retired from full-time work at 66, said she, too, would have kept working if not for the hassle of dealing with hospital administrators and insurance companies.

Once, Dr. Antonucci said, she had to call an insurance company – by landline and cellphone simultaneously, with one phone on each ear – to get prior authorization to conduct a CT scan, while her patient in need of an appendectomy waited in pain. The hospital wouldn’t conduct the scan without insurance approval.

“It was just infuriating,” said Dr. Antonucci, who now practices medicine only 1 day a week. “I could have kept working. I just got tired.”

Providers’ collective exhaustion is a crisis kept hidden by design, said Whitney Marvin, a pediatrician who works in the pediatric ICU at the Medical University of South Carolina. She said hospital culture implicitly teaches doctors to tamp down their emotions and to “keep moving.”

“I’m not supposed to be weak, and I’m not supposed to cry, and I’m not supposed to have all these emotions, because then maybe I’m not good enough at my job,” said Dr. Marvin, describing the way doctors have historically thought about their mental health.

 

 

This mentality prevents many doctors from seeking the help they need, which can lead to burnout – and much worse. An estimated 300 physicians die by suicide every year, according to the American Foundation for Suicide Prevention. The problem is particularly pronounced among female physicians, who die by suicide at a significantly higher rate than women in other professions.

A March report from this news organization found, of more than 9,000 doctors surveyed, 9% of male physicians and 11% of female physicians said they have had suicidal thoughts. But the problem isn’t new, the report noted. Elevated rates of suicide among physicians have been documented for 150 years.

“Ironically, it’s happening to a group of people who should have the easiest access to mental health care,” said Gary Price, a Connecticut surgeon and president of The Physicians Foundation.

But the reluctance to seek help isn’t unfounded, said Corey Feist, president of the Dr. Lorna Breen Heroes’ Foundation.

“There’s something known in residency as the ‘silent curriculum,’ ” Mr. Feist said in describing an often-unspoken understanding among doctors that seeking mental health treatment could jeopardize their livelihood.

Mr. Feist’s sister-in-law, emergency room physician Lorna Breen, died by suicide during the early months of the pandemic. Dr. Breen sought inpatient treatment for mental health once, Mr. Feist said, but feared that her medical license could be revoked for doing so.

The foundation works to change laws across the country to prohibit medical boards and hospitals from asking doctors invasive mental health questions on employment or license applications.

“These people need to be taken care of by us, because really, no one’s looking out for them,” Mr. Feist said.

In Charleston, psychologists are made available to physicians during group meetings like the one Dr. Miller attended, as part of the resiliency program.

But fixing the burnout problem also requires a cultural change, especially among older physicians.

“They had it worse and we know that. But it’s still not good,” Dr. Miller said. “Until that changes, we’re just going to continue burning out physicians within the first 3 years of their career.”

KFF Health News is a national newsroom that produces in-depth journalism about health issues and is one of the core operating programs at KFF — the independent source for health policy research, polling, and journalism.


Doctors say they are fed up with demands imposed by hospital administrators and health insurance companies, and they’re concerned about the notoriously grueling shifts assigned to medical residents during the early years of their careers. A long-standing stigma keeps physicians from prioritizing their own mental health, while their jobs require them to routinely grapple with death, grief, and trauma. The culture of medicine encourages them to simply bear it.

“Resiliency is a cringe word for me,” Dr. Miller said. “In medicine, we’re just expected to be resilient 24/7. I don’t love that culture.”

And though the pipeline of physicians entering the profession is strong, the ranks of doctors in the United States aren’t growing fast enough to meet future demand, according to the American Medical Association. That’s why burnout exacerbates workforce shortages and, if it continues, may limit the ability of some patients to access even basic care. A 2021 report published by the Association of American Medical Colleges projects the United States will be short as many as 48,000 primary care physicians by 2034, a higher number than any other single medical specialty.

A survey published last year by The Physicians Foundation, a nonprofit focused on improving health care, found that more than half of the 1,501 responding doctors didn’t have positive feelings about the current or future state of the medical profession. More than 20% said they wanted to retire within a year.

Similarly, in a 2022 AMA survey of 11,000 doctors and other medical professionals, more than half reported feeling burned out and indicated they were experiencing a great deal of stress.

Those numbers appear to be even higher in primary care. Even before the pandemic, 70% of primary care providers and 89% of primary care residents reported feelings of burnout.

“Everyone in health care feels overworked,” said Gregg Coodley, a primary care physician in Portland, Ore., and author of the book “Patients in Peril: The Demise of Primary Care in America.”

“I’m not saying there aren’t issues for other specialists, too, but in primary care, it’s the worst problem,” he said.

The high level of student debt most medical school graduates carry, combined with salaries more than four times as high as the average, deters many physicians from quitting medicine midcareer. Even primary care doctors, whose salaries are among the lowest of all medical specialties, are paid significantly more than the average American worker. That’s why, instead of leaving the profession in their 30s or 40s, doctors often stay in their jobs but retire early.

“We go into medicine to help people, to take care of people, to do good in the world,” said Dr. Crummett, who retired from the Duke University hospital system in 2020 when he turned 65.

Dr. Crummett said he would have enjoyed working until he was 70, if not for the bureaucratic burdens of practicing medicine, including needing to get prior authorization from insurance companies before providing care, navigating cumbersome electronic health record platforms, and logging hours of administrative work outside the exam room.

“I enjoyed seeing patients. I really enjoyed my coworkers,” he said. “The administration was certainly a major factor in burnout.”

Jean Antonucci, a primary care doctor in rural Maine who retired from full-time work at 66, said she, too, would have kept working if not for the hassle of dealing with hospital administrators and insurance companies.

Once, Dr. Antonucci said, she had to call an insurance company – by landline and cellphone simultaneously, with one phone on each ear – to get prior authorization to conduct a CT scan, while her patient in need of an appendectomy waited in pain. The hospital wouldn’t conduct the scan without insurance approval.

“It was just infuriating,” said Dr. Antonucci, who now practices medicine only 1 day a week. “I could have kept working. I just got tired.”

Providers’ collective exhaustion is a crisis kept hidden by design, said Whitney Marvin, a pediatrician who works in the pediatric ICU at the Medical University of South Carolina. She said hospital culture implicitly teaches doctors to tamp down their emotions and to “keep moving.”

“I’m not supposed to be weak, and I’m not supposed to cry, and I’m not supposed to have all these emotions, because then maybe I’m not good enough at my job,” said Dr. Marvin, describing the way doctors have historically thought about their mental health.

This mentality prevents many doctors from seeking the help they need, which can lead to burnout – and much worse. An estimated 300 physicians die by suicide every year, according to the American Foundation for Suicide Prevention. The problem is particularly pronounced among female physicians, who die by suicide at a significantly higher rate than women in other professions.

A March report from this news organization found that, of more than 9,000 doctors surveyed, 9% of male physicians and 11% of female physicians said they have had suicidal thoughts. But the problem isn’t new, the report noted. Elevated rates of suicide among physicians have been documented for 150 years.

“Ironically, it’s happening to a group of people who should have the easiest access to mental health care,” said Gary Price, a Connecticut surgeon and president of The Physicians Foundation.

But the reluctance to seek help isn’t unfounded, said Corey Feist, president of the Dr. Lorna Breen Heroes’ Foundation.

“There’s something known in residency as the ‘silent curriculum,’ ” Mr. Feist said in describing an often-unspoken understanding among doctors that seeking mental health treatment could jeopardize their livelihood.

Mr. Feist’s sister-in-law, emergency room physician Lorna Breen, died by suicide during the early months of the pandemic. Dr. Breen sought inpatient treatment for mental health once, Mr. Feist said, but feared that her medical license could be revoked for doing so.

The foundation works to change laws across the country to prohibit medical boards and hospitals from asking doctors invasive mental health questions on employment or license applications.

“These people need to be taken care of by us, because really, no one’s looking out for them,” Mr. Feist said.

In Charleston, psychologists are made available to physicians during group meetings like the one Dr. Miller attended, as part of the resiliency program.

But fixing the burnout problem also requires a cultural change, especially among older physicians.

“They had it worse and we know that. But it’s still not good,” Dr. Miller said. “Until that changes, we’re just going to continue burning out physicians within the first 3 years of their career.”

KFF Health News is a national newsroom that produces in-depth journalism about health issues and is one of the core operating programs at KFF — the independent source for health policy research, polling, and journalism.


Low-carb breakfast key to lower glucose variability in T2D?


A low-carbohydrate breakfast was better than a low-fat control breakfast at decreasing glycemic variability throughout the day in type 2 diabetes, new research shows.

These findings from a 3-month randomized study in 121 patients in Canada and Australia were published online recently in the American Journal of Clinical Nutrition.

The researchers aimed to determine whether a low-carbohydrate, high-fat breakfast (focused around eggs), compared with a standard, low-fat control breakfast (designed to have no/minimal eggs), would improve blood glucose control in individuals with type 2 diabetes.

“We’ve determined that if the first meal of the day is low-carb and higher in protein and fat we can limit hyperglycemic swings,” lead author Barbara Oliveira, PhD, School of Health and Exercise Sciences, University of British Columbia, Kelowna, said in a press release from the university.

“Having fewer carbs for breakfast not only aligns better with how people with [type 2 diabetes] handle glucose throughout the day,” she noted, “but it also has incredible potential for people with [type 2 diabetes] who struggle with their glucose levels in the morning.”

“By making a small adjustment to the carb content of a single meal rather than the entire diet,” Dr. Oliveira added, “we have the potential to increase adherence significantly while still obtaining significant benefits.”

The researchers conclude that “this trial provides evidence that advice to consume a low-carbohydrate breakfast could be a simple, feasible, and effective approach to manage postprandial hyperglycemia and lower glycemic variability in people living with type 2 diabetes.”
 

Could a breakfast tweak improve glucose control?

People with type 2 diabetes have higher levels of insulin resistance and greater glucose intolerance in the morning, the researchers write.

And consuming a low-fat, high-carbohydrate meal, in line with most dietary guidelines, appears to produce the largest hyperglycemic spike and lead to higher glycemic variability.

They speculated that eating a low-carb breakfast, compared with a low-fat breakfast, might be an easy way to mitigate this.

They recruited participants from online ads in three provinces in Canada and four states in Australia, and they conducted the study from a site in British Columbia and one in Wollongong, Australia.

The participants were aged 20-79 years and diagnosed with type 2 diabetes. They also had a current hemoglobin A1c < 8.5% and no allergies to eggs, and they were able to follow remote, online guidance.

After screening, the participants had a phone or video conference call with a member of the research team who explained the study.

The researchers randomly assigned 75 participants in Canada and 46 participants in Australia 1:1 to the low-carbohydrate intervention or the control intervention.

The participants had a mean age of 64 years, and 53% were women. They had a mean weight of 93 kg (204 lb), body mass index of 32 kg/m², and A1c of 7.0%.

Registered dietitians in Canada and Australia each designed 8-10 recipes/menus for low-carb breakfasts and an equal number of recipes/menus for control (low-fat) breakfasts that were specific for those countries.

Each recipe contains about 450 kcal; the recipes are available in Supplemental Appendices 1A and 1B with the article.

Each low-carbohydrate breakfast contains about 25 g protein, 8 g carbohydrates, and 37 g fat. For example, one breakfast is a three-egg omelet with spinach.

Each control (low-fat) recipe contains about 20 g protein, 56 g carbohydrates, and 15 g fat. For example, one breakfast is a small blueberry muffin and a small plain Greek yogurt.
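
As a rough sanity check, the reported macronutrient breakdowns can be reconciled with the ~450 kcal figure using the standard Atwater factors (about 4 kcal/g for protein and carbohydrate, 9 kcal/g for fat). The sketch below is illustrative only, using the approximate values quoted above; it is not taken from the study.

```python
# Reconcile the reported breakfast macros with the ~450 kcal target,
# using standard Atwater energy factors (4 kcal/g protein, 4 kcal/g
# carbohydrate, 9 kcal/g fat). Gram values are the approximate figures
# reported in the article, not exact study data.

ATWATER = {"protein": 4, "carbs": 4, "fat": 9}  # kcal per gram

def kcal(protein_g: float, carbs_g: float, fat_g: float) -> float:
    """Estimate total energy from macronutrient grams."""
    return (protein_g * ATWATER["protein"]
            + carbs_g * ATWATER["carbs"]
            + fat_g * ATWATER["fat"])

low_carb = kcal(25, 8, 37)   # low-carbohydrate breakfast
low_fat = kcal(20, 56, 15)   # control (low-fat) breakfast

print(low_carb)  # 465 kcal -- close to the stated ~450 kcal
print(low_fat)   # 439 kcal -- likewise close to ~450 kcal
```

Both estimates land near the stated ~450 kcal, consistent with the two breakfasts being roughly isocaloric while differing sharply in carbohydrate and fat.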

The participants were advised to select one of these breakfasts every day and follow it exactly (they were also required to upload a photograph of their breakfast every morning). They were not given any guidance or calorie restriction for the other meals of the day.

The participants also filled in 3-day food records and answered a questionnaire about exercise, hunger, and satiety, at the beginning, middle, and end of the intervention.

They provided self-reported height, weight, and waist circumference, and they were given requisitions for blood tests for A1c to be done at a local laboratory, at the beginning and end of the intervention.

The participants also wore a continuous glucose monitor (CGM) during the first and last 14 days of the intervention.

Intervention improved CGM measures

There was no significant difference between the two groups in the primary outcome, change in A1c, at the end of 12 weeks. The mean A1c decreased by 0.3% in the intervention group vs. 0.1% in the control group (P = .06).

Similarly, in secondary outcomes, weight and BMI each decreased about 1% and waist circumference decreased by about 2.5 cm in each group at 12 weeks (no significant difference). There were also no significant differences in hunger, satiety, or physical activity between the two groups.

However, the 24-hour CGM data showed that mean and maximum glucose, glycemic variability, and time above range were all significantly lower in participants in the low-carbohydrate breakfast intervention group vs. those in the control group (all P < .05).

Time in range was significantly higher among participants in the intervention group (P < .05).

In addition, the 2-hour postprandial CGM data showed that mean glucose and maximum glucose after breakfast were lower in participants in the low-carbohydrate breakfast group than in the control group.
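
For readers unfamiliar with these CGM endpoints, the minimal sketch below shows how mean glucose, glycemic variability (expressed here as the coefficient of variation), time in range, and time above range are typically computed from a series of readings. This is not the study's code: the readings are invented, and the 3.9–10.0 mmol/L window is the commonly used consensus target range, assumed here for illustration.

```python
import statistics

# Illustrative computation of common CGM summary metrics (not the study's
# code). The 3.9-10.0 mmol/L target range is the commonly used consensus
# window; the readings below are invented example values.

def cgm_metrics(readings_mmol, low=3.9, high=10.0):
    """Summarize a list of glucose readings (mmol/L) into common CGM metrics."""
    mean = statistics.mean(readings_mmol)
    # Glycemic variability as coefficient of variation (%): SD relative to mean.
    cv_pct = statistics.stdev(readings_mmol) / mean * 100
    # Fraction of readings inside / above the target range.
    time_in_range = sum(low <= r <= high for r in readings_mmol) / len(readings_mmol)
    time_above_range = sum(r > high for r in readings_mmol) / len(readings_mmol)
    return {
        "mean": mean,
        "cv_pct": cv_pct,
        "time_in_range": time_in_range,
        "time_above_range": time_above_range,
    }

readings = [6.1, 7.4, 9.8, 11.2, 8.0, 5.9, 10.5, 7.2]  # invented values
m = cgm_metrics(readings)
print(m)
```

In the trial, the intervention group's advantage showed up in exactly these kinds of metrics: lower mean and maximum glucose, lower variability, less time above range, and more time in range.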

This work was supported by investigator-initiated operating grants to senior author Jonathan P. Little, PhD, School of Health and Exercise Sciences, University of British Columbia, from the Egg Nutrition Center, United States, and Egg Farmers of Canada. The authors declare that they have no relevant financial relationships.

A version of this article originally appeared on Medscape.com.

Publications
Topics
Sections

 

A low-carbohydrate breakfast was better than a control (low-fat) breakfast to decrease glycemic variability throughout the day in type 2 diabetes, in new research.

These findings from a 3-month randomized study in 121 patients in Canada and Australia were published online recently in the American Journal of Clinical Nutrition.

The researchers aimed to determine whether a low-carbohydrate, high-fat breakfast (focused around eggs), compared with a standard, low-fat control breakfast (designed to have no/minimal eggs), would improve blood glucose control in individuals with type 2 diabetes.

“We’ve determined that if the first meal of the day is low-carb and higher in protein and fat we can limit hyperglycemic swings,” lead author Barbara Oliveira, PhD, School of Health and Exercise Sciences, University of British Columbia, Kelowna, said in a press release from the university.

“Having fewer carbs for breakfast not only aligns better with how people with [type 2 diabetes] handle glucose throughout the day,” she noted, “but it also has incredible potential for people with [type 2 diabetes] who struggle with their glucose levels in the morning.”

“By making a small adjustment to the carb content of a single meal rather than the entire diet,” Dr. Oliveira added, “we have the potential to increase adherence significantly while still obtaining significant benefits.”

The researchers conclude that “this trial provides evidence that advice to consume a low-carbohydrate breakfast could be a simple, feasible, and effective approach to manage postprandial hyperglycemia and lower glycemic variability in people living with type 2 diabetes.”
 

Could breakfast tweak improve glucose control?

People with type 2 diabetes have higher levels of insulin resistance and greater glucose intolerance in the morning, the researchers write.

And consuming a low-fat, high-carbohydrate meal in line with most dietary guidelines appears to incur the highest hyperglycemia spike and leads to higher glycemic variability.

They speculated that eating a low-carb breakfast, compared with a low-fat breakfast, might be an easy way to mitigate this.

They recruited participants from online ads in three provinces in Canada and four states in Australia, and they conducted the study from a site in British Columbia and one in Wollongong, Australia.

The participants were aged 20-79 years and diagnosed with type 2 diabetes. They also had a current hemoglobin A1c < 8.5% and no allergies to eggs, and they were able to follow remote, online guidance.

After screening, the participants had a phone or video conference call with a member of the research team who explained the study.

The researchers randomly assigned 75 participants in Canada and 46 participants in Australia 1:1 to the low-carbohydrate intervention or the control intervention.

The participants had a mean age of 64 and 53% were women. They had a mean weight of 93 kg (204 lb), body mass index of 32 kg/m2, and A1c of 7.0%.

Registered dietitians in Canada and Australia each designed 8-10 recipes/menus for low-carb breakfasts and an equal number of recipes/menus for control (low-fat) breakfasts that were specific for those countries.

Each recipe contains about 450 kcal, and they are available in Supplemental Appendix 1A and 1B, with the article.

Each low-carbohydrate breakfast contains about 25 g protein, 8 g carbohydrates, and 37 g fat. For example, one breakfast is a three-egg omelet with spinach.

Each control (low-fat) recipe contains about 20 g protein, 56 g carbohydrates, and 15 g fat. For example, one breakfast is a small blueberry muffin and a small plain Greek yogurt.

The participants were advised to select one of these breakfasts every day and follow it exactly (they were also required to upload a photograph of their breakfast every morning). They were not given any guidance or calorie restriction for the other meals of the day.

The participants also filled in 3-day food records and answered a questionnaire about exercise, hunger, and satiety, at the beginning, middle, and end of the intervention.

They provided self-reported height, weight, and waist circumference, and they were given requisitions for blood tests for A1c to be done at a local laboratory, at the beginning and end of the intervention.

The participants also wore a continuous glucose monitor (CGM) during the first and last 14 days of the intervention.
 

 

 

Intervention improved CGM measures

There was no significant difference in the primary outcome, change in A1c, at the end of 12 weeks, in the two groups. The mean A1c decreased by 0.3% in the intervention group vs 0.1% in the control group (P = .06).

Similarly, in secondary outcomes, weight and BMI each decreased about 1% and waist circumference decreased by about 2.5 cm in each group at 12 weeks (no significant difference). There were also no significant differences in hunger, satiety, or physical activity between the two groups.

However, the 24-hour CGM data showed that mean and maximum glucose, glycemic variability, and time above range were all significantly lower in participants in the low-carbohydrate breakfast intervention group vs. those in the control group (all P < .05).

Time in range was significantly higher among participants in the intervention group (P < .05).

In addition, the 2-hour postprandial CGM data showed that mean glucose and maximum glucose after breakfast were lower in participants in the low-carbohydrate breakfast group than in the control group.

This work was supported by investigator-initiated operating grants to senior author Jonathan P. Little, PhD, School of Health and Exercise Sciences, University of British Columbia, from the Egg Nutrition Center, United States, and Egg Farmers of Canada. The authors declare that they have no relevant financial relationships.

A version of this article originally appeared on Medscape.com.

 

A low-carbohydrate breakfast was better than a control (low-fat) breakfast to decrease glycemic variability throughout the day in type 2 diabetes, in new research.

These findings from a 3-month randomized study in 121 patients in Canada and Australia were published online recently in the American Journal of Clinical Nutrition.

The researchers aimed to determine whether a low-carbohydrate, high-fat breakfast (focused around eggs), compared with a standard, low-fat control breakfast (designed to have no/minimal eggs), would improve blood glucose control in individuals with type 2 diabetes.

“We’ve determined that if the first meal of the day is low-carb and higher in protein and fat we can limit hyperglycemic swings,” lead author Barbara Oliveira, PhD, School of Health and Exercise Sciences, University of British Columbia, Kelowna, said in a press release from the university.

“Having fewer carbs for breakfast not only aligns better with how people with [type 2 diabetes] handle glucose throughout the day,” she noted, “but it also has incredible potential for people with [type 2 diabetes] who struggle with their glucose levels in the morning.”

“By making a small adjustment to the carb content of a single meal rather than the entire diet,” Dr. Oliveira added, “we have the potential to increase adherence significantly while still obtaining significant benefits.”

The researchers conclude that “this trial provides evidence that advice to consume a low-carbohydrate breakfast could be a simple, feasible, and effective approach to manage postprandial hyperglycemia and lower glycemic variability in people living with type 2 diabetes.”
 

Could breakfast tweak improve glucose control?

People with type 2 diabetes have higher levels of insulin resistance and greater glucose intolerance in the morning, the researchers write.

And consuming a low-fat, high-carbohydrate meal in line with most dietary guidelines appears to incur the highest hyperglycemia spike and leads to higher glycemic variability.

They speculated that eating a low-carb breakfast, compared with a low-fat breakfast, might be an easy way to mitigate this.

The researchers recruited participants through online ads in three Canadian provinces and four Australian states, and they conducted the study at one site in British Columbia and one in Wollongong, Australia.

The participants were aged 20-79 years and diagnosed with type 2 diabetes. They also had a current hemoglobin A1c < 8.5% and no allergies to eggs, and they were able to follow remote, online guidance.

After screening, the participants had a phone or video conference call with a member of the research team who explained the study.

The researchers randomly assigned 75 participants in Canada and 46 participants in Australia 1:1 to the low-carbohydrate intervention or the control intervention.

The participants had a mean age of 64 and 53% were women. They had a mean weight of 93 kg (204 lb), body mass index of 32 kg/m2, and A1c of 7.0%.

Registered dietitians in Canada and Australia each designed 8-10 recipes/menus for low-carb breakfasts and an equal number of recipes/menus for control (low-fat) breakfasts that were specific for those countries.

Each recipe contains about 450 kcal; the recipes are available in Supplemental Appendices 1A and 1B of the article.

Each low-carbohydrate breakfast contains about 25 g protein, 8 g carbohydrates, and 37 g fat. For example, one breakfast is a three-egg omelet with spinach.

Each control (low-fat) recipe contains about 20 g protein, 56 g carbohydrates, and 15 g fat. For example, one breakfast is a small blueberry muffin and a small plain Greek yogurt.
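As a quick consistency check, the stated macronutrient amounts can be converted to energy with the standard Atwater factors (4 kcal/g for protein and carbohydrate, 9 kcal/g for fat); both breakfasts come out close to the ~450 kcal design target. This is a back-of-envelope sketch using the figures quoted above, not the dietitians' actual recipe calculations.

```python
# Estimate breakfast energy from macronutrient grams using Atwater
# factors. Macronutrient figures are the per-breakfast averages
# reported in the article; the ~450 kcal target is the study's
# stated design goal.

ATWATER = {"protein": 4, "carb": 4, "fat": 9}  # kcal per gram

def kcal(protein_g: float, carb_g: float, fat_g: float) -> float:
    """Approximate energy (kcal) from macronutrient grams."""
    return (protein_g * ATWATER["protein"]
            + carb_g * ATWATER["carb"]
            + fat_g * ATWATER["fat"])

low_carb = kcal(protein_g=25, carb_g=8, fat_g=37)   # -> 465 kcal
control = kcal(protein_g=20, carb_g=56, fat_g=15)   # -> 439 kcal
```

Both estimates land within a few percent of 450 kcal, consistent with the breakfasts being matched for energy while differing sharply in carbohydrate and fat.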

The participants were advised to select one of these breakfasts every day and follow it exactly (they were also required to upload a photograph of their breakfast every morning). They were not given any guidance or calorie restriction for the other meals of the day.

The participants also filled in 3-day food records and answered a questionnaire about exercise, hunger, and satiety, at the beginning, middle, and end of the intervention.

They provided self-reported height, weight, and waist circumference, and they were given requisitions for blood tests for A1c to be done at a local laboratory, at the beginning and end of the intervention.

The participants also wore a continuous glucose monitor (CGM) during the first and last 14 days of the intervention.
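The CGM outcomes reported below (mean and maximum glucose, time in range, glycemic variability) are standard summary statistics computed over the sensor readings. The sketch below shows one conventional way to compute them, using the international consensus default target range of 3.9-10.0 mmol/L and coefficient of variation as the variability measure; the sample readings are invented for illustration, and the study's exact metric definitions may differ.

```python
from statistics import mean, pstdev

def cgm_summary(readings_mmol, lo=3.9, hi=10.0):
    """Summarize CGM readings: mean, max, % time in range, and
    coefficient of variation (a common glycemic-variability metric)."""
    in_range = [g for g in readings_mmol if lo <= g <= hi]
    return {
        "mean": mean(readings_mmol),
        "max": max(readings_mmol),
        "time_in_range_pct": 100 * len(in_range) / len(readings_mmol),
        "cv_pct": 100 * pstdev(readings_mmol) / mean(readings_mmol),
    }

# Illustrative readings (mmol/L), not study data.
readings = [5.8, 6.4, 7.9, 10.6, 9.2, 7.1, 6.3, 11.4]
summary = cgm_summary(readings)
```

With these sample values, 6 of 8 readings fall in range (75% time in range), and the two post-meal spikes above 10 mmol/L drive up both the maximum and the coefficient of variation.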
 

 

 

Intervention improved CGM measures

There was no significant difference between the two groups in the primary outcome, change in A1c, at the end of 12 weeks. The mean A1c decreased by 0.3% in the intervention group vs 0.1% in the control group (P = .06).

Similarly, in secondary outcomes, weight and BMI each decreased about 1% and waist circumference decreased by about 2.5 cm in each group at 12 weeks (no significant difference). There were also no significant differences in hunger, satiety, or physical activity between the two groups.

However, the 24-hour CGM data showed that mean and maximum glucose, glycemic variability, and time above range were all significantly lower in participants in the low-carbohydrate breakfast intervention group vs. those in the control group (all P < .05).

Time in range was significantly higher among participants in the intervention group (P < .05).

In addition, the 2-hour postprandial CGM data showed that mean glucose and maximum glucose after breakfast were lower in participants in the low-carbohydrate breakfast group than in the control group.

This work was supported by investigator-initiated operating grants to senior author Jonathan P. Little, PhD, School of Health and Exercise Sciences, University of British Columbia, from the Egg Nutrition Center, United States, and Egg Farmers of Canada. The authors declare that they have no relevant financial relationships.

A version of this article originally appeared on Medscape.com.


FROM THE AMERICAN JOURNAL OF CLINICAL NUTRITION


Cardiopulmonary exercise testing for unexplained dyspnea


Unexplained dyspnea is a common complaint among patients seen in pulmonary clinics, and it can be difficult to define, quantify, and attribute to a specific etiology. The ATS official statement defined dyspnea as “a subjective experience of breathing discomfort that consists of qualitatively distinct sensations that vary in intensity” (Am J Respir Crit Care Med. 2012;185:435). A myriad of diseases can cause dyspnea, including cardiac, pulmonary, neuromuscular, psychological, and hematologic disorders; obesity, deconditioning, and the normal aging process may also contribute to dyspnea. Adding further diagnostic confusion, multiple causes may exist in a given patient.

Finding the cause or causes of dyspnea can be difficult and may require extensive testing, time, and cost. Initially, a history and physical exam are performed with more focused testing undertaken depending on most likely causes. For most patients, initial evaluation includes a CBC, TSH, pulmonary function tests, chest radiograph, and, often, a transthoracic echocardiogram. If these tests are unrevealing, or if clinical suspicion is high, more costly, invasive, and time-consuming tests are obtained. These may include bronchoprovocation testing, cardiac stress tests, chest CT scan, and, if warranted, right- and/or left-sided heart catheterization. Ideally, these tests are utilized appropriately based on the patient’s clinical presentation and the results of initial evaluation. In addition to high cost, invasive testing risks injury.

Cardiopulmonary exercise testing (CPET) has been called the “gold standard” test for evaluation of unexplained dyspnea (Palange P, et al. Eur Respir J. 2007;29:185).

Symptom-limited CPET measures multiple physiological variables during stress, potentially identifying the cause of dyspnea that is not evident by measurements made at rest. CPET may also differentiate the limiting factor in patients with multiple diseases that each could be contributing to dyspnea. CPET provides an objective measurement of cardiorespiratory fitness and may provide prognostic information. CPET typically consists of a symptom-limited maximal incremental exercise test using either a treadmill or cycle ergometer. The primary measurements include oxygen uptake (Vo2), carbon dioxide output (Vco2), minute ventilation (VE), ECG, blood pressure, oxygen saturation (Spo2) and, depending on the indication, arterial blood gases at rest and peak exercise. An invasive CPET includes the above measurements and the addition of a pulmonary artery catheter and radial artery catheter allowing the assessment of ventricular filling pressures, pulmonary arterial pressures, cardiac output, and measures of oxygen transport. Invasive CPET is less commonly performed in clinical practice due to cost, high resource utilization, and greater risk of complications.
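Several of the variables interpreted on a CPET report are derived arithmetically from the primary measurements listed above. The sketch below computes three common ones: the respiratory exchange ratio (RER = Vco2/Vo2), O2 pulse (Vo2/heart rate), and the ventilatory equivalent for CO2 (VE/Vco2). The peak values used are invented for illustration, not data from any patient in the cited studies.

```python
def rer(vco2_ml_min: float, vo2_ml_min: float) -> float:
    """Respiratory exchange ratio; a peak RER > ~1.1 suggests maximal effort."""
    return vco2_ml_min / vo2_ml_min

def o2_pulse(vo2_ml_min: float, heart_rate_bpm: float) -> float:
    """O2 pulse (mL O2/beat), a surrogate for stroke volume x O2 extraction."""
    return vo2_ml_min / heart_rate_bpm

def ve_vco2(ve_l_min: float, vco2_ml_min: float) -> float:
    """Ventilatory equivalent for CO2 (VE in L/min, Vco2 converted to L/min)."""
    return ve_l_min / (vco2_ml_min / 1000)

# Illustrative peak-exercise values.
peak = {"vo2": 1800.0, "vco2": 2070.0, "ve": 72.0, "hr": 150.0}
peak_rer = rer(peak["vco2"], peak["vo2"])        # 1.15, consistent with maximal effort
peak_o2_pulse = o2_pulse(peak["vo2"], peak["hr"])  # 12.0 mL/beat
peak_ve_vco2 = ve_vco2(peak["ve"], peak["vco2"])
```

Patterns in these derived variables, such as the low peak Vo2 and decreased O2 pulse mentioned below, are what allow CPET to point toward a cardiac, pulmonary, or deconditioning-related limitation.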

What is the evidence that CPET is the gold standard for evaluating dyspnea? Limited evidence supports this claim. Martinez and colleagues (Chest. 1994;105[1]:168) evaluated 50 patients presenting with unexplained dyspnea with normal CBC, thyroid studies, chest radiograph, and spirometry with noninvasive CPET. CPET was used to make an initial diagnosis, and this was compared with a definitive diagnosis based on additional testing guided by CPET findings and response to targeted therapy. Most patients (68%) eventually received a diagnosis of normal, deconditioned, hyperreactive airway disease, or a psychogenic cause of dyspnea. The important findings from this study include: (1) CPET was able to identify cardiac or pulmonary disease, if present; (2) a normal CPET excluded significant cardiac or pulmonary disease in most patients, suggesting that a normal CPET is useful in limiting subsequent testing; and (3) in some patients, CPET was not able to accurately differentiate cardiac disease from deconditioning, as both exhibited an abnormal CPET pattern including low peak Vo2, low Vo2 at anaerobic threshold, decreased O2 pulse, and often low peak heart rate. In more than 75% of patients, the CPET, and focused testing based on CPET findings, confidently identified the cause of dyspnea not explained by routine testing.

There is evidence that invasive CPET may provide diagnostic information when the cause of dyspnea is not identified using noninvasive testing. Huang and colleagues (Eur J Prev Cardiol. 2017;24[11]:1190) investigated the use of invasive CPET in 530 patients who had undergone extensive evaluation for dyspnea (including noninvasive CPET in 30% of patients) without a clear diagnosis. The cause of dyspnea was determined in all patients and included: exercise-induced pulmonary arterial hypertension (17%), heart failure with preserved ejection fraction (18%), dysautonomia or preload failure (21%), oxidative myopathy (25%), primary hyperventilation (8%), and various other conditions (11%). Most patients had been undergoing workup for unexplained dyspnea for a median of 511 days before evaluation in the dyspnea clinic. Huang et al’s study demonstrates some of the limitations of noninvasive CPET, including distinguishing cardiac limitation from dysautonomia or preload failure, deconditioning, oxidative myopathies, and mild pulmonary vascular disease. This study didn’t answer how many patients having noninvasive CPET would need an invasive study to get their diagnosis.

A limitation of both the Martinez et al and Huang et al studies is that they were conducted at subspecialty dyspnea clinics located in large referral centers and may not be representative of patients seen in general pulmonary clinics for the evaluation of dyspnea. This may result in over-representation of less common diseases, such as oxidative myopathies and dysautonomia or preload failure. Even with this limitation, these two studies showed that CPETs have the potential to expedite diagnoses and treatment in patients with unexplained dyspnea.

More investigation is needed to understand the clinical utility, and potential cost savings, of CPET for patients referred to general pulmonary clinics with unexplained dyspnea. We retrospectively reviewed 89 patients who underwent CPET for unexplained dyspnea from 2017 to 2019 at Intermountain Medical Center (Cook CP. Eur Respir J. 2022;60:Suppl 66, 1939). Nearly 50% of the patients undergoing CPET received a diagnosis of obesity, deconditioning, or a normal study. In patients under the age of 60 years, 64% were diagnosed with obesity, deconditioning, or a normal study. Conversely, 70% of patients over the age of 60 years had an abnormal cardiac or pulmonary limitation.

We also evaluated whether CPET affected diagnostic testing patterns in the 6 months following testing. We determined that potentially inappropriate testing was performed in only 13% of patients after obtaining a CPET diagnosis. These data suggest that CPET results affect ordering provider behavior. Also, in younger patients, in whom initial evaluation is unrevealing of cardiopulmonary disease, a CPET could be performed early in the evaluation process. This may result in decreased health care cost and time to diagnosis. At our institution, CPET is less expensive than a transthoracic echocardiogram.

 

 

So, is CPET worthy of its status as the gold standard for determining the etiology of unexplained dyspnea? The answer for noninvasive CPET is a definite “maybe.” There is evidence that some CPET patterns support a specific diagnosis. However, referring providers may be disappointed by CPET reports that do not provide a definitive cause for a patient’s dyspnea. An abnormal cardiac limitation may be caused by systolic or diastolic dysfunction, myocardial ischemia, preload failure or dysautonomia, deconditioning, and oxidative myopathy. Even in these situations, a specific CPET pattern may limit the differential diagnosis and facilitate a more focused and cost-effective evaluation. A normal CPET provides reassurance that significant disease is not causing the patient’s dyspnea and may prevent further unnecessary and costly evaluation.


CAR-T hikes overall survival in relapsed/refractory LBCL


 

At a median follow-up of 47.2 months, axicabtagene ciloleucel (axi-cel, Yescarta) significantly improved overall survival compared with standard second-line treatments in patients with early relapsed or refractory large B-cell lymphoma, according to a phase 3 investigation reported at the annual meeting of the American Society of Clinical Oncology (ASCO).

The anti-CD19 chimeric antigen receptor T-cell (CAR-T) therapy was approved for second-line treatment in 2022 based on better event-free survival, but standard second-line treatment – chemoimmunotherapy followed by high-dose chemotherapy and autologous stem-cell transplant in responders – still remains the prevailing approach, explained Jason Westin, MD, director of lymphoma research at MD Anderson Cancer Center, Houston. Dr. Westin, lead investigator, presented the trial, dubbed ZUMA-7, at the ASCO meeting.

The new findings might change that. ZUMA-7 “conclusively demonstrates that trying chemotherapy in the second line and saving cell therapy for the third line is an inferior approach ... ZUMA-7 confirms axi-cel is a second-line standard of care for patients with refractory or early relapsed large B cell lymphoma based on superior overall survival,” said Dr. Westin.

Study discussant Asher A. Chanan-Khan, MD, a CAR-T specialist at the Mayo Clinic in Jacksonville, Fla., agreed.

“This data must alter the current standard of care making CAR-T or axi-cel, based on the data we heard, a preferred second-line treatment ... Moving CAR-T earlier in the treatment paradigm is likely a better choice for our patients,” he said.

The study was published in the New England Journal of Medicine to coincide with the presentations.

Dr. Westin noted that axi-cel is now under investigation in ZUMA-23 for first-line treatment of high-risk large B-cell lymphoma (LBCL).


 

Study details

ZUMA-7 randomized 180 LBCL patients to a one-time axi-cel infusion and 179 to standard care. Patients were refractory to first-line chemoimmunotherapy or had relapsed within 12 months; just 36% of patients in the standard care group did well enough on treatment to go on to stem-cell transplant.

Median progression-free survival (PFS) was 14.7 months with axi-cel versus 3.7 months with standard care.

Significantly, the better PFS appears to have translated into better overall survival (OS).

At a median of almost 4 years, 82 patients in the axi-cel group had died, compared with 95 patients with standard care who had died. Estimated 4-year OS was 54.6% with axi-cel versus 46% with standard care (HR 0.73, P = .03).
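For readers less familiar with survival statistics, the hazard ratio can be read as a relative reduction in the instantaneous risk of death. The arithmetic, as an illustrative sketch (not a trial-specific calculation):

```python
# A hazard ratio below 1 favors the experimental arm; 1 minus the
# hazard ratio gives the relative reduction in the instantaneous
# risk of the event (here, death).
hazard_ratio = 0.73
relative_risk_reduction = 1.0 - hazard_ratio
print(f"{relative_risk_reduction:.0%}")  # → 27%
```

In other words, an HR of 0.73 corresponds to roughly a 27% lower risk of death at any given time for the axi-cel arm relative to standard care.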

The OS benefit held in high-risk subgroups, including patients over 64 years old, those refractory to first-line treatment, and patients with high-grade disease.

Adverse events were in keeping with labeling. Cytokine release syndrome was more common in the axi-cel arm, including grade 3 or worse CRS in 6% of axi-cel patients versus none on standard care. Grade 3 or worse infections were also more common at 16.5% versus 11.9% with standard care. Over 11% of axi-cel patients developed hypogammaglobulinemia versus 0.6% in the standard care group.

Overall, there were no new serious or fatal adverse events since the initial PFS results were reported in 2022, when eight fatal adverse events were reported with axi-cel versus two with standard care.

The work was funded by axi-cel maker Kite Pharma, a subsidiary of Gilead. Investigators included Kite/Gilead employees and others who reported financial relationships with the companies, including Dr. Westin, a Kite/Gilead researcher and adviser. Dr. Chanan-Khan disclosed ties with Cellectar, Starton Therapeutics, Ascentage Pharma, and others.


Display Headline
CAR-T hikes overall survival in relapsed/refractory LBCL

Article Source

FROM ASCO 2023

Fluids or vasopressors: Is sepsis management that simple?

Article Type
Changed

In recent months, we have seen the results of the much awaited Crystalloid Liberal or Vasopressors Early Resuscitation in Sepsis (CLOVERS) trial showing that a restrictive fluid and early vasopressor strategy initiated on arrival of patients with sepsis and hypotension in the ED did not result in decreased mortality compared with a liberal fluid approach (PETAL Network. www.nejm.org/doi/10.1056/NEJMoa2202707). The March 2023 issue of CHEST Physician provided a synopsis of the trial highlighting several limitations (Splete H. CHEST Physician. 2023;18[3]:1). Last year in 2022, the Conservative versus Liberal Approach to Fluid Therapy in Septic Shock (CLASSIC) trial also showed no difference in mortality with restrictive fluid compared with standard fluid in patients with septic shock in the ICU already receiving vasopressor therapy (Meyhoff TS, et al. N Engl J Med. 2022;386[26]:2459). Did CLOVERS and CLASSIC resolve the ongoing debate about the timing and quantity of fluid resuscitation in sepsis? Did their results suggest a “you can do what you want” approach? Is the management of sepsis and septic shock limited to fluids vs vasopressors? Hopefully, the ongoing studies ARISE FLUIDS (NCT04569942), EVIS (NCT05179499), FRESHLY (NCT05453565), 1BED (NCT05273034), and REDUCE (NCT04931485) will further address these questions.

In the meantime, I continue to admit and care for patients with sepsis in the ICU. One example was a 72-year-old woman with a history of stroke, coronary artery disease, diabetes, and chronic kidney disease presenting with 3 days of progressive cough and dyspnea. In the ED, temperature was 38.2° C, heart rate 120 beats per min, respiratory rate 28/min, blood pressure 82/48 mm Hg, and weight 92 kg. She had audible crackles in the left lower lung. Her laboratory and imaging results supported a diagnosis of sepsis due to severe community-acquired pneumonia, including the following values: white blood cell count 18.2 × 10³/mm³; lactate 3.8 mmol/L; and creatinine 4.3 mg/dL.

While in the ED, the patient received 1 liter of crystalloid fluids and appropriate broad-spectrum antibiotics. The repeat lactate value was 2.8 mmol/L. The patient’s blood pressure then decreased to 85/42 mm Hg. Norepinephrine was started peripherally and titrated to 6 mcg/min to achieve a blood pressure of 104/56 mm Hg. No further fluid was given, and the patient was admitted to the medical ICU. On admission, a repeat lactate had increased to 3.4 mmol/L with a blood pressure of 80/45 mm Hg. Instead of further escalating the vasopressor, she received 2 L of fluid, continued at 150 mL/h. Shortly after, norepinephrine was titrated off, and fluid resuscitation was then deescalated. We transferred the patient to the general ward within 12 hours of ICU admission.

Could we have avoided ICU admission and critical care resource utilization if the patient had received more optimal fluid resuscitation in the ED?

While our fear of fluids (or hydrophobia) may be unwarranted, the management of this patient was a common example of fluid restriction in sepsis (Jaehne AK, et al. Crit Care Med. 2016;44[12]:2263). By clinical criteria, she was in septic shock (requiring a vasopressor) and appropriately required ICU admission. But I would posit that the patient had severe sepsis based on pre-Sepsis-3 criteria. Optimal initial fluid resuscitation would have prevented her from requiring a vasopressor and progressing to septic shock with ICU admission. Unfortunately, the patient’s care reflected the objective of CLOVERS and its results. Beyond showing no decrease in mortality, ventilator use, renal replacement therapy, or hospital length of stay, restricting fluids resulted in an 8.1% increase (95% confidence interval, 3.3 to 12.8) in ICU utilization. Furthermore, the data and safety monitoring committee halted the trial for futility at two-thirds of enrollment. One must wonder whether negative outcomes would have occurred had CLOVERS completed its intended enrollment of 2,320 patients.

Should an astute clinician interpret the results of the CLOVERS and CLASSIC trials as “Fluids, it doesn’t matter, so I can do what I want?” Absolutely not! The literature is abundant with studies showing that increasing dose and/or number of vasopressors is associated with higher mortality in septic shock. One example is a recent multicenter prospective cohort study examining the association of vasopressor dosing during the first 24 hours and 30-day mortality in septic shock over 33 hospitals (Roberts RJ, et al. Crit Care Med. 2020;48[10]:1445).

Six hundred sixteen patients were enrolled, with 31% 30-day mortality. In the 24 hours after shock diagnosis, patients received a median of 3.4 (1.9-5.3) L of fluids and 8.5 mcg/min of norepinephrine equivalent. During the first 6 hours, increasing vasopressor dosing was associated with increased odds of mortality. Every 10 mcg/min increase in norepinephrine over the 24-hour period was associated with a 33% increase in the odds of mortality. Patients who received no fluids but 35 mcg/min of norepinephrine within 6 hours had the highest mortality, at 50%. As fluid volume increased, the association between vasopressor dosing and mortality weakened, such that at least 2 L of fluid during the first 6 hours was required for the association to become nonsignificant. Based on these results and a number of past studies, we should be cautious in believing that a resuscitation strategy favoring vasopressors would result in a better outcome.
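The reported dose–mortality association can be made concrete with a back-of-the-envelope calculation. This is an illustrative sketch only; it assumes the 33% increase in odds per 10 mcg/min compounds multiplicatively across dose increments, as a logistic-regression coefficient would, which the cohort paper's modeling may or may not support at every dose.

```python
def mortality_odds_ratio(norepi_increase_mcg_min: float,
                         or_per_10_mcg: float = 1.33) -> float:
    """Odds ratio for 30-day mortality for a given increase in
    norepinephrine-equivalent dose, assuming the per-10-mcg/min
    odds ratio compounds multiplicatively."""
    return or_per_10_mcg ** (norepi_increase_mcg_min / 10.0)

# Under this assumption, a 20 mcg/min increase corresponds to
# 1.33^2, i.e. roughly 1.77 times the odds of death.
print(round(mortality_odds_ratio(20), 2))  # → 1.77
```

The point of the sketch is simply that such associations compound quickly: escalating vasopressors in lieu of adequate fluid is not a neutral choice.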

Shock resuscitation is complex, and there is no one-size-fits-all approach. In the present climate, the success of resuscitation has been simplified to assessing fluid responsiveness. Trainees learn to identify the inferior vena cava and lung B-lines by ultrasound. With more advanced technology, stroke volume variation is considered. And, let us not forget the passive leg raise. Rarely can our fellows and residents recite the components of oxygen delivery as targets of shock resuscitation: preload, afterload, contractility, hemoglobin, and oxygen saturation. Another patient example comes to mind in which fluid responsiveness alone was inadequate.
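Those components combine in the standard formulas DO2 = CO × CaO2 and CaO2 = (1.34 × Hb × SaO2) + (0.003 × PaO2). A minimal sketch of that arithmetic, using illustrative values (not drawn from either case above):

```python
def arterial_o2_content(hb_g_dl: float, sao2: float, pao2_mm_hg: float) -> float:
    """CaO2 in mL O2/dL: hemoglobin-bound oxygen plus dissolved oxygen."""
    return 1.34 * hb_g_dl * sao2 + 0.003 * pao2_mm_hg

def oxygen_delivery(cardiac_output_l_min: float, cao2_ml_dl: float) -> float:
    """DO2 in mL O2/min: cardiac output (converted to dL/min) times CaO2."""
    return cardiac_output_l_min * 10.0 * cao2_ml_dl

# Illustrative values: Hb 12 g/dL, SaO2 0.98, PaO2 90 mm Hg, CO 5 L/min
cao2 = arterial_o2_content(12.0, 0.98, 90.0)  # ≈ 16.0 mL O2/dL
print(round(oxygen_delivery(5.0, cao2)))      # → 801 mL O2/min
```

The formula makes the teaching point explicit: low cardiac output (preload, afterload, contractility), anemia, or hypoxemia each degrade oxygen delivery, and fluid responsiveness addresses only part of one term.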

Our patient was a 46-year-old man, now day 4 in the ICU, with Klebsiella bacteremia and acute cholecystitis undergoing medical management. His comorbidities included diabetes, obesity, hypertension, and cardiomyopathy with an ejection fraction of 35%. He was supported on mechanical ventilation, norepinephrine 20 mcg/min, and appropriate antibiotics. For hemodynamic monitoring, central venous and arterial catheters had been placed. The patient had a heart rate of 92 beats per min, mean arterial pressure (MAP) 57 mm Hg, central venous pressure (CVP) 26 mm Hg, stroke volume variation (SVV) 9%, cardiac output (CO) 2.5 L/min, and central venous oxygen saturation (ScvO2) 42%.

Based on these parameters, we initiated dobutamine at 2.5 mcg/kg/min, which was then titrated to 20 mcg/kg/min over 2 hours to achieve a ScvO2 of 72%. Interestingly, CVP had decreased to 18 mm Hg and SVV increased to 16%, with CO 4.5 L/min. MAP also increased to 68 mm Hg. Given the elevated SVV, we then administered a 1-L fluid bolus; in light of the patient’s underlying cardiomyopathy, a CVP < 20 mm Hg appeared to indicate a state of fluid responsiveness. After fluid administration, heart rate was 98 beats per min, MAP 70 mm Hg, CVP 21 mm Hg, SVV 12%, CO 4.7 L/min, and ScvO2 74%. Acknowledging a mixed hypovolemic, cardiogenic, and septic shock, we had optimized his hemodynamic state. Importantly, during this exercise of hemodynamic manipulation, we were able to decrease norepinephrine to 8 mcg/min while maintaining dobutamine at 20 mcg/kg/min.

 

 

The above case illustrates that the hemodynamic perturbations in sepsis and septic shock are not simple. Patients do not present with a single shock state. An infection progressing to shock often is confounded by hypovolemia and underlying comorbidities, such as cardiac dysfunction. Without considering the complex physiology, our desire to continue the debate of fluids vs vasopressors is on the brink of taking us back several decades when the management of sepsis was to start a fluid bolus, administer “Rocephin,” and initiate dopamine. But I remind myself that we have made advances – now it’s 1 L lactated Ringer’s, administer “vanco and zosyn,” and initiate norepinephrine.

Publications
Topics
Sections

In recent months, we have seen the results of the much awaited Crystalloid Liberal or Vasopressors Early Resuscitation in Sepsis (CLOVERS) trial showing that a restrictive fluid and early vasopressor strategy initiated on arrival of patients with sepsis and hypotension in the ED did not result in decreased mortality compared with a liberal fluid approach (PETAL Network. www.nejm.org/doi/10.1056/NEJMoa2202707). The March 2023 issue of CHEST Physician provided a synopsis of the trial highlighting several limitations (Splete H. CHEST Physician. 2023;18[3]:1). Last year in 2022, the Conservative versus Liberal Approach to Fluid Therapy in Septic Shock (CLASSIC) trial also showed no difference in mortality with restrictive fluid compared with standard fluid in patients with septic shock in the ICU already receiving vasopressor therapy (Meyhoff TS, et al. N Engl J Med. 2022;386[26]:2459). Did CLOVERS and CLASSIC resolve the ongoing debate about the timing and quantity of fluid resuscitation in sepsis? Did their results suggest a “you can do what you want” approach? Is the management of sepsis and septic shock limited to fluids vs vasopressors? Hopefully, the ongoing studies ARISE FLUIDS (NCT04569942), EVIS (NCT05179499), FRESHLY (NCT05453565), 1BED (NCT05273034), and REDUCE (NCT04931485) will further address these questions.

In the meantime, I continue to admit and care for patients with sepsis in the ICU. One example was a 72-year-old woman with a history of stroke, coronary artery disease, diabetes, and chronic kidney disease presenting with 3 days of progressive cough and dyspnea. In the ED, temperature was 38.2° C, heart rate 120 beats per min, respiratory rate 28/min, blood pressure 82/48 mm Hg, and weight 92 kg. She had audible crackles in the left lower lung. Her laboratory and imaging results supported a diagnosis of sepsis due to severe community-acquired pneumonia, including the following values: white blood cell 18.2 million/mm3; lactate 3.8 mmol/L; and creatinine 4.3 mg/dL.

While in the ED, the patient received 1 liter of crystalloid fluids and appropriate broad spectrum antibiotics. Repeat lactate value was 2.8 mmol/L. Patient’s blood pressure then decreased to 85/42 mm Hg. Norepinephrine was started peripherally and titrated to 6 mcg/min to achieve blood pressure 104/56 mm Hg. No further fluid administration was given, and the patient was admitted to the medical ICU. On admission, a repeat lactate had increased to 3.4 mmol/L with blood pressure of 80/45 mm Hg. Instead of further escalating vasopressor administration, she received 2 L of fluid and continued at 150 mL/h. Shortly after, norepinephrine was titrated off. Fluid resuscitation was then deescalated. We transfered the patient to the general ward within 12 hours of ICU admission.

Could we have avoided ICU admission and critical care resource utilization if the patient had received more optimal fluid resuscitation in the ED?

While our fear of fluids (or hydrophobia) may be unwarranted, the management of this patient was a common example of fluid restriction in sepsis (Jaehne AK, et al. Crit Care Med. 2016;44[12]:2263). By clinical criteria, she was in septic shock (requiring vasopressor) and appropriately required ICU admission. But, I would posit that the patient had severe sepsis based on pre-Sepsis 3 criteria. Optimal initial fluid resuscitation would have prevented her from requiring vasopressor and progressing to septic shock with ICU admission. Unfortunately, the patient’s care reflected the objective of CLOVERS and its results. Other than the lack of decreased mortality, decreased ventilator use, decreased renal replacement therapy, and decreased hospital length of stay, restricting fluids resulted in an increase of 8.1% (95% confidence interval 3.3 to 12.8) ICU utilization. Furthermore, the data and safety monitoring committee halted the trial for futility at two-thirds of enrollment. One must wonder if CLOVERS had completed its intended enrollment of 2,320 patients, negative outcomes would have occurred.

Should an astute clinician interpret the results of the CLOVERS and CLASSIC trials as “Fluids, it doesn’t matter, so I can do what I want?” Absolutely not! The literature is abundant with studies showing that increasing dose and/or number of vasopressors is associated with higher mortality in septic shock. One example is a recent multicenter prospective cohort study examining the association of vasopressor dosing during the first 24 hours and 30-day mortality in septic shock over 33 hospitals (Roberts RJ, et al. Crit Care Med. 2020;48[10]:1445).

Six hundred and sixteen patients were enrolled with 31% 30-day mortality. In 24 hours after shock diagnosis, patients received a median of 3.4 (1.9-5.3) L of fluids and 8.5 mcg/min norepinephrine equivalent. During the first 6 hours, increasing vasopressor dosing was associated with increased odds of mortality. Every 10 mcg/min increase in norepinephrine over the 24-hour period was associated with a 33% increased odds of mortality. Patients who received no fluids but 35 mcg/min norepinephrine in 6 hours had the highest mortality of 50%. As fluid volume increased, the association between vasopressor dosing and mortality decreased, such that at least 2 L of fluid during the first 6 hours was required for this association to become nonsignificant. Based on these results and a number of past studies, we should be cautious in believing that a resuscitation strategy favoring vasopressors would result in a better outcome.

Shock resuscitation is complex, and there is no one-size-fits-all approach. With the present climate, the success of resuscitation has been simplified to assessing fluid responsiveness. Trainees learn to identify the inferior vena cava and lung B-lines by ultrasound. With more advanced technology, stroke volume variation is considered. And, let us not forget the passive leg raise. Rarely can our fellows and residents recite the components of oxygen delivery as targets of shock resuscitation: preload, afterload, contractility, hemoglobin, and oxygen saturation. Another patient example comes to mind when fluid responsiveness alone is inadequate.

Our patient was a 46-year-old man now day 4 in the ICU with Klebsiella bacteremia and acute cholecystitis undergoing medical management. His comorbidities included diabetes, obesity, hypertension, and cardiomyopathy with ejection fraction 35%. He was supported sson mechanical ventilation, norepinephrine 20 mcg/min, and receiving appropriate antibiotics. For hemodynamic monitoring, a central venous and arterial catheter have been placed. The patient had a heart rate 92 beats per min, mean arterial pressure (MAP) 57 mm Hg, central venous pressure (CVP) 26 mm Hg, stroke volume variation (SVV) 9%, cardiac output (CO) 2.5 L/min, and central venous oxygen saturation (ScvO2) 42%.

Based on these parameters, we initiated dobutamine at 2.5 mcg/kg/min, which was then titrated to 20 mcg/kg/min over 2 hours to achieve ScvO2 72%. Interestingly, CVP had decreased to 18 mm Hg, SVV increased to 16%, with CO 4.5 L/min. MAP also increased to 68 mm Hg. We then administered 1-L fluid bolus with the elevated SVV. Given the patient’s underlying cardiomyopathy, CVP < 20 mm Hg appeared to indicate a state of fluid responsiveness. After our fluid administration, heart rate 98 beats per min, MAP 70 mm Hg, CVP increased to 21 mm Hg, SVV 12%, CO 4.7 L/min, and ScvO2 74%. In acknowledging a mixed hypovolemic, cardiogenic, and septic shock, we had optimized his hemodynamic state. Importantly, during this exercise of hemodynamic manipulation, we were able to decrease norepinephrine to 8 mcg/min, maintaining dobutamine at 20 mcg/kg/min.

 

 

The above case illustrates that the hemodynamic perturbations in sepsis and septic shock are not simple. Patients do not present with a single shock state. An infection progressing to shock often is confounded by hypovolemia and underlying comorbidities, such as cardiac dysfunction. Without considering the complex physiology, our desire to continue the debate of fluids vs vasopressors is on the brink of taking us back several decades when the management of sepsis was to start a fluid bolus, administer “Rocephin,” and initiate dopamine. But I remind myself that we have made advances – now it’s 1 L lactated Ringer’s, administer “vanco and zosyn,” and initiate norepinephrine.

In recent months, we have seen the results of the much awaited Crystalloid Liberal or Vasopressors Early Resuscitation in Sepsis (CLOVERS) trial showing that a restrictive fluid and early vasopressor strategy initiated on arrival of patients with sepsis and hypotension in the ED did not result in decreased mortality compared with a liberal fluid approach (PETAL Network. www.nejm.org/doi/10.1056/NEJMoa2202707). The March 2023 issue of CHEST Physician provided a synopsis of the trial highlighting several limitations (Splete H. CHEST Physician. 2023;18[3]:1). Last year in 2022, the Conservative versus Liberal Approach to Fluid Therapy in Septic Shock (CLASSIC) trial also showed no difference in mortality with restrictive fluid compared with standard fluid in patients with septic shock in the ICU already receiving vasopressor therapy (Meyhoff TS, et al. N Engl J Med. 2022;386[26]:2459). Did CLOVERS and CLASSIC resolve the ongoing debate about the timing and quantity of fluid resuscitation in sepsis? Did their results suggest a “you can do what you want” approach? Is the management of sepsis and septic shock limited to fluids vs vasopressors? Hopefully, the ongoing studies ARISE FLUIDS (NCT04569942), EVIS (NCT05179499), FRESHLY (NCT05453565), 1BED (NCT05273034), and REDUCE (NCT04931485) will further address these questions.

In the meantime, I continue to admit and care for patients with sepsis in the ICU. One example was a 72-year-old woman with a history of stroke, coronary artery disease, diabetes, and chronic kidney disease presenting with 3 days of progressive cough and dyspnea. In the ED, temperature was 38.2° C, heart rate 120 beats per min, respiratory rate 28/min, blood pressure 82/48 mm Hg, and weight 92 kg. She had audible crackles in the left lower lung. Her laboratory and imaging results supported a diagnosis of sepsis due to severe community-acquired pneumonia, including the following values: white blood cell 18.2 million/mm3; lactate 3.8 mmol/L; and creatinine 4.3 mg/dL.

While in the ED, the patient received 1 liter of crystalloid fluids and appropriate broad spectrum antibiotics. Repeat lactate value was 2.8 mmol/L. Patient’s blood pressure then decreased to 85/42 mm Hg. Norepinephrine was started peripherally and titrated to 6 mcg/min to achieve blood pressure 104/56 mm Hg. No further fluid administration was given, and the patient was admitted to the medical ICU. On admission, a repeat lactate had increased to 3.4 mmol/L with blood pressure of 80/45 mm Hg. Instead of further escalating vasopressor administration, she received 2 L of fluid and continued at 150 mL/h. Shortly after, norepinephrine was titrated off. Fluid resuscitation was then deescalated. We transfered the patient to the general ward within 12 hours of ICU admission.

Could we have avoided ICU admission and critical care resource utilization if the patient had received more optimal fluid resuscitation in the ED?

While our fear of fluids (or hydrophobia) may be unwarranted, the management of this patient was a common example of fluid restriction in sepsis (Jaehne AK, et al. Crit Care Med. 2016;44[12]:2263). By clinical criteria, she was in septic shock (requiring vasopressor) and appropriately required ICU admission. But, I would posit that the patient had severe sepsis based on pre-Sepsis 3 criteria. Optimal initial fluid resuscitation would have prevented her from requiring vasopressor and progressing to septic shock with ICU admission. Unfortunately, the patient’s care reflected the objective of CLOVERS and its results. Other than the lack of decreased mortality, decreased ventilator use, decreased renal replacement therapy, and decreased hospital length of stay, restricting fluids resulted in an increase of 8.1% (95% confidence interval 3.3 to 12.8) ICU utilization. Furthermore, the data and safety monitoring committee halted the trial for futility at two-thirds of enrollment. One must wonder if CLOVERS had completed its intended enrollment of 2,320 patients, negative outcomes would have occurred.

Should an astute clinician interpret the results of the CLOVERS and CLASSIC trials as “Fluids, it doesn’t matter, so I can do what I want?” Absolutely not! The literature is abundant with studies showing that increasing dose and/or number of vasopressors is associated with higher mortality in septic shock. One example is a recent multicenter prospective cohort study examining the association of vasopressor dosing during the first 24 hours and 30-day mortality in septic shock over 33 hospitals (Roberts RJ, et al. Crit Care Med. 2020;48[10]:1445).

Six hundred and sixteen patients were enrolled with 31% 30-day mortality. In 24 hours after shock diagnosis, patients received a median of 3.4 (1.9-5.3) L of fluids and 8.5 mcg/min norepinephrine equivalent. During the first 6 hours, increasing vasopressor dosing was associated with increased odds of mortality. Every 10 mcg/min increase in norepinephrine over the 24-hour period was associated with a 33% increased odds of mortality. Patients who received no fluids but 35 mcg/min norepinephrine in 6 hours had the highest mortality of 50%. As fluid volume increased, the association between vasopressor dosing and mortality decreased, such that at least 2 L of fluid during the first 6 hours was required for this association to become nonsignificant. Based on these results and a number of past studies, we should be cautious in believing that a resuscitation strategy favoring vasopressors would result in a better outcome.

Shock resuscitation is complex, and there is no one-size-fits-all approach. In the present climate, the success of resuscitation has been simplified to assessing fluid responsiveness. Trainees learn to identify the inferior vena cava and lung B-lines by ultrasound. With more advanced technology, stroke volume variation is considered. And, let us not forget the passive leg raise. Rarely can our fellows and residents recite the components of oxygen delivery as targets of shock resuscitation: preload, afterload, contractility, hemoglobin, and oxygen saturation. Another patient example comes to mind when fluid responsiveness alone is inadequate.
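Those components combine in the standard textbook oxygen delivery equations (arterial oxygen content and cardiac output). A minimal sketch using illustrative values, not drawn from any specific patient:

```python
# Standard textbook formulas for oxygen delivery (illustrative values):
#   CaO2 (mL O2/dL)  = 1.34 * Hgb (g/dL) * SaO2 + 0.003 * PaO2 (mm Hg)
#   DO2  (mL O2/min) = CO (L/min) * CaO2 * 10   (x10 converts dL to L)

def arterial_o2_content(hgb_g_dl, sao2_frac, pao2_mm_hg):
    """Arterial oxygen content in mL O2 per dL of blood."""
    return 1.34 * hgb_g_dl * sao2_frac + 0.003 * pao2_mm_hg

def o2_delivery(co_l_min, cao2_ml_dl):
    """Global oxygen delivery in mL O2 per minute."""
    return co_l_min * cao2_ml_dl * 10

cao2 = arterial_o2_content(hgb_g_dl=10, sao2_frac=0.98, pao2_mm_hg=90)
print(round(cao2, 1))                  # 13.4 mL O2/dL
print(round(o2_delivery(5.0, cao2)))   # 670 mL O2/min
```

The point of the sketch is that hemoglobin, saturation, and cardiac output (itself driven by preload, afterload, and contractility) each enter the delivery calculation, so fluid responsiveness alone captures only part of the picture.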

Our patient was a 46-year-old man, now day 4 in the ICU, with Klebsiella bacteremia and acute cholecystitis undergoing medical management. His comorbidities included diabetes, obesity, hypertension, and cardiomyopathy with an ejection fraction of 35%. He was supported on mechanical ventilation and norepinephrine 20 mcg/min and was receiving appropriate antibiotics. For hemodynamic monitoring, central venous and arterial catheters had been placed. The patient had a heart rate of 92 beats per min, mean arterial pressure (MAP) 57 mm Hg, central venous pressure (CVP) 26 mm Hg, stroke volume variation (SVV) 9%, cardiac output (CO) 2.5 L/min, and central venous oxygen saturation (ScvO2) 42%.

Based on these parameters, we initiated dobutamine at 2.5 mcg/kg/min, which was then titrated to 20 mcg/kg/min over 2 hours to achieve an ScvO2 of 72%. Interestingly, CVP had decreased to 18 mm Hg and SVV increased to 16%, with CO 4.5 L/min. MAP also increased to 68 mm Hg. Given the elevated SVV, we then administered a 1-L fluid bolus. In light of the patient’s underlying cardiomyopathy, a CVP < 20 mm Hg appeared to indicate a state of fluid responsiveness. After fluid administration: heart rate 98 beats per min, MAP 70 mm Hg, CVP increased to 21 mm Hg, SVV 12%, CO 4.7 L/min, and ScvO2 74%. In acknowledging a mixed hypovolemic, cardiogenic, and septic shock, we had optimized his hemodynamic state. Importantly, during this exercise of hemodynamic manipulation, we were able to decrease norepinephrine to 8 mcg/min while maintaining dobutamine at 20 mcg/kg/min.


The above case illustrates that the hemodynamic perturbations in sepsis and septic shock are not simple. Patients do not present with a single shock state. An infection progressing to shock often is confounded by hypovolemia and underlying comorbidities, such as cardiac dysfunction. Without considering the complex physiology, our desire to continue the debate of fluids vs vasopressors is on the brink of taking us back several decades when the management of sepsis was to start a fluid bolus, administer “Rocephin,” and initiate dopamine. But I remind myself that we have made advances – now it’s 1 L lactated Ringer’s, administer “vanco and zosyn,” and initiate norepinephrine.


Kangaroo mother care may cut death risk for premature babies by a third


Kangaroo mother care (KMC), with close skin-to-skin contact between mothers and their low-birthweight newborns, appears to reduce mortality risk by almost one-third, compared with conventional care, according to new research published online in BMJ Global Health.

Starting the contact (which involves mothers carrying the newborn in a sling) within 24 hours of birth and continuing it for at least 8 hours a day both appear to amplify the effect on reducing mortality and infection, the paper states.

Sindhu Sivanandan, MD, with the department of neonatology at Jawaharlal Institute of Postgraduate Medical Education and Research, Puducherry, India, and Mari Jeeva Sankar, MD, in the pediatrics department of the All India Institute of Medical Sciences, New Delhi, looked at existing studies to compare KMC with conventional care and to compare starting the intervention within 24 hours of birth versus a later start.


Their review looked at 31 trials that included 15,559 low-birthweight and preterm infants collectively. Of the 31 trials, 27 studies compared KMC with conventional care and four compared early with late initiation of KMC.

Mortality risk reduction

Analysis showed that, compared with conventional care, KMC appeared to cut mortality risk by 32% (relative risk, 0.68; 95% confidence interval, 0.53-0.86) during birth hospitalization or by 28 days after birth, while it seemed to reduce the risk of severe infection, such as sepsis, by 15% (RR, 0.85; 95% CI, 0.76-0.96; low-certainty evidence).
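The headline percentages follow directly from the reported relative risks, since the relative risk reduction is simply 1 minus the RR. A one-line sketch of that arithmetic:

```python
# Relative risk reduction (RRR), expressed as a whole-number percentage:
#   RRR = (1 - RR) * 100
def rrr_percent(rr):
    """Percentage risk reduction implied by a relative risk."""
    return round((1 - rr) * 100)

print(rrr_percent(0.68))  # 32 -> "cut mortality risk by 32%"
print(rrr_percent(0.85))  # 15 -> "reduce severe infection risk by 15%"
```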

That mortality-risk reduction was found regardless of gestational age or weight of the child at enrollment, time of starting KMC, and whether the intervention was started in a hospital or in the community.

The studies comparing early-initiated with late-initiated KMC showed a 33% reduction in neonatal mortality.

Low- and middle-income countries have the highest rates of premature births (gestational age of less than 37 weeks) and low birthweight (less than 2,500 grams). Premature births and low birthweight both are key causes of death and disability.

The World Health Organization recommends KMC as the standard of care among low birthweight infants after clinical stabilization. The American Academy of Pediatrics also promotes immediate KMC.

Relevance in the U.S.

Grace Chan, MD, MPH, PhD, an epidemiologist and pediatrician with the Harvard School of Public Health, Boston, said that though the practice is promoted by the WHO and the AAP, recommendations to families vary widely by provider.

She said the health benefits for KMC are numerous. One of the biggest is that skin-to-skin contact can help transfer heat to newborns who may have trouble regulating their own temperature. That is especially important in cold climates in places where there may be insufficient indoor heat.

She said it’s well-known that preterm babies are at higher risk for apnea, and listening to a mother’s heartbeat may stimulate the child to breathe regularly.

Additionally with KMC, there’s an inherent benefit of a mother or caregiver being able to see any change in a newborn’s color immediately when the baby is held so closely, as opposed to a nurse watching several babies at a time in a neonatal intensive care unit.

This is evidence that starting KMC right away is important, because the risk of death for premature and low-weight newborns is highest in the first 24 hours of life, Dr. Chan noted.

Barriers of time

There are some barriers, she noted, in that mothers or other caregivers caring for several young children may not have the time to carry a child in a sling for 8 or more hours at a time.

The authors conclude that their findings have policy implications, particularly for low- and middle-income countries: “KMC should be provided to all low birth weight and preterm infants irrespective of the settings – both health facilities and at home,” they wrote.

The authors caution that, “very low birth weight, extremely preterm neonates, and severely unstable neonates were often excluded from studies. More evidence is needed before extrapolating the study results in these high-risk groups.”

The study authors and Dr. Chan report no relevant financial relationships.



FROM BMJ GLOBAL HEALTH


What’s best for patients who are dying of anorexia?

The patient at a Florida eating disorder clinic said she was eating plenty even though she acknowledged purging once a week. But her vitals told a different story: Her body mass index (BMI) was 12.2, down from 14.8 a couple of years before – a dangerously low value.

The pandemic had disrupted her care, said Nadia Surexa Cacodcar, MD, a resident psychiatrist at the University of Florida, Gainesville, in a presentation at the annual meeting of the American Psychiatric Association. To make matters more challenging, coordinating with the patient’s primary doctor was difficult because her electronic health records couldn’t communicate with one another.

While the woman agreed that she needed to gain weight, she refused advice to pursue residential or inpatient treatment. This left her team with a big dilemma: Should they force her into care because she wouldn’t eat? Was that even possible under the law? Did she have the capacity to make decisions about her future? What other alternatives were there?

Determining the best course of action in cases like this is anything but simple, Dr. Cacodcar said. To make matters more complicated, there are numerous hurdles facing clinicians as they try to help their patients with advanced and severe anorexia nervosa (AN).

“At least in my state of Florida, we know that it can be very, very hard to get patients expert care,” said Dr. Cacodcar. And, she said, it can be even tougher for certain types of patients, such as those who are LGBTQ and those who have severe illness but don’t meet the criteria.

As Dr. Cacodcar noted, the APA released new practice guidelines regarding eating disorders earlier this year, marking their first update since 2006. The guidelines highlight research suggesting that nearly 1% – 0.8% – of the U.S. population will develop AN over their lifetimes. Recent studies also suggest that eating disorder numbers rose during the pandemic, with one analysis finding that the number of patients under inpatient care doubled in 2020.

“Mortality rates are high for anorexia nervosa, up to 10 times higher than matched controls,” Dr. Cacodcar said. “It has the highest mortality rate of the psychiatric diseases with the exception of opioid use disorder.”

As for outcomes, she pointed to a 2019 study that surveyed 387 parents who had children with eating disorders, mostly AN. Only 20% made a full recovery. “The farther you get out from the onset of anorexia, the less likely you are to achieve recovery,” Dr. Cacodcar said. “A lot of the control behaviors become very automatic.”

Determining capacity

In some cases of AN, psychiatrists must determine whether patients have the capacity to make decisions about treatment, said Gabriel Jerkins, MD, a chief resident of psychiatry at the University of Florida. At issue is “the ability of the individual to comprehend the information being disclosed in regard to their condition, as well as the nature and potential risks and benefits of the proposed treatment alternatives. They include of course, no treatment at all.”


Patients with AN often lack insight into their condition and may disagree with clinicians who say they’re underweight because of AN, Dr. Jerkins said. This raises more questions: Do they have capacity if they don’t understand what’s wrong with them? And could their own malnutrition affect their cognition?

“We know psychiatric conditions can limit one’s ability to appreciate consequence,” he said.

One option is to seek to institutionalize patients with severe AN because they are a danger to themselves. Clinicians opted to not do this in the case of the patient profiled by Dr. Cacodcar, the one with the BMI of 12.2 who didn’t want inpatient or residential care. (A 5-foot-8 person with a BMI of 12.2 would weigh 80 pounds.)
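The parenthetical weight can be checked directly from the BMI definition (weight in kilograms divided by height in meters squared). A quick sketch of that conversion:

```python
# BMI = weight (kg) / height (m)^2, so weight = BMI * height^2.
# Checking the article's parenthetical for a 5-foot-8 (~1.727 m) person.
def weight_lb_from_bmi(bmi, height_m):
    """Body weight in pounds implied by a BMI and height."""
    return bmi * height_m ** 2 * 2.20462  # kg -> lb

print(round(weight_lb_from_bmi(12.2, 1.727)))  # 80
```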

“The main reason we did not hospitalize her is because an appropriate level of care was not going to be readily available,” Dr. Cacodcar said, and her treatment would have been substandard.

Fortunately, the woman did return after a couple of months and accept residential care. No facility in Florida was willing to accept her because of her low BMI, but she did find one in North Carolina, where she stayed for 2 months. She’s doing well, and her BMI is now 21, Dr. Cacodcar said.

The patient’s story shows that involuntary hospitalization “is not necessarily the best course of action,” Dr. Cacodcar said. “It wasn’t necessarily going to be in the patient’s best interest.”

In another case, a 22-year-old woman had severe AN. She had been a gymnast and dancer, Dr. Jerkins said, “and I include that here only because of how commonly we see that kind of demographic information in patients with anorexia nervosa.”

Her BMI was 17.5, and clinicians discussed feeding her through a feeding tube. She still had “no insight that her symptoms were related to an underlying eating disorder,” Dr. Jerkins said, raising questions about her capacity. “Is it sufficient that the patient understand that she’s underweight?”

Ultimately, he said, she received a feeding tube at a time when her BMI had dropped to 16.3. She suffered from an infection but ultimately she improved and has stabilized at a BMI of around 19, he said.

“I do wonder if allowing her to have some control of how to pursue treatment in this case was therapeutic in a way,” he said, especially since matters of control are deeply ingrained in AN.

Another case didn’t have a positive outcome. A postmenopausal woman was hospitalized for hypoglycemia secondary to overuse of insulin, recalled University of Florida psychiatrist Lauren Ashley Schmidt, MD. And the insulin use was linked to obsessive-compulsive disorder.

A former physical trainer, the patient had a BMI of 17.6. The University of Florida’s eating disorder clinic sent her to an out-of-state residential program, but she was discharged when her blood glucose dipped dangerously low as she compulsively exercised. Her BMI dipped to 16.2.

Dr. Schmidt had the patient involuntarily committed upon her return, but she went home after 12 days with no change in her weight. Ultimately, the patient was referred to an eating disorder center in Colorado for medical stabilization where she was given a feeding tube. But her medical situation was so dire that she was discharged to her home, where she went on hospice and died.

“I’m not arguing for or against the term ‘terminal anorexia.’ But this case does make me think about it,” said Dr. Schmidt. She was referring to a controversial term used by some clinicians to refer to patients who face inevitable death from AN. “Unfortunately,” wrote the authors of a recent report proposing a clinical definition, “these patients and their carers often receive minimal support from eating disorders health professionals who are conflicted about terminal care, and who are hampered and limited by the paucity of literature on end-of-life care for those with anorexia nervosa.”
Meeting/Event
Publications
Topics
Sections
Meeting/Event
Meeting/Event

 

– The patient at a Florida eating disorder clinic said she was eating plenty even though she acknowledged purging once a week. But her vitals told a different story: Her body mass index (BMI) was 12.2, down from 14.8 a couple of years before – a dangerously low value.

University of Florida
Dr. Nadia Surexa Cacodcar
The pandemic had disrupted her care, said Nadia Surexa Cacodcar, MD, a resident psychiatrist at the University of Florida, Gainesville, in a presentation at the annual meeting of the American Psychiatric Association. To make matters more challenging, coordinating with the patient’s primary doctor was difficult because her electronic health records couldn’t communicate with one another.

While the woman agreed that she needed to gain weight, she refused advice to pursue residential or inpatient treatment. This left her team with a big dilemma: Should they force her into care because she wouldn’t eat? Was that even possible under the law? Did she have the capacity to make decisions about her future? What other alternatives were there?

Determining the best course of action in cases like this is anything but simple, Dr. Cacodcar said. To make matters more complicated, there are numerous hurdles facing clinicians as they try to help their patients with advanced and severe anorexia nervosa (AN).

“At least in my state of Florida, we know that it can be very, very hard to get patients expert care,” said Dr. Cacodcar. And, she said, it can be even tougher for certain types of patients, such as those that are LGBTQ and those who have severe illness but don’t meet the criteria.

As Dr. Cacodcar noted, the APA released new practice guidelines regarding eating disorders earlier this year, marking their first update since 2006. The guidelines highlight research that suggests nearly 1% – 0.8% – of the U.S. population will develop AN over their lifetimes. Recent studies also suggest that eating disorder numbers rose during the pandemic, with one analysis finding that patients under inpatient care doubled in 2020.

“Mortality rates are high for anorexia nervosa, up to 10 times higher than matched controls,” Dr. Cacodcar said. “It has the highest mortality rate of the psychiatric diseases with the exception of opioid use disorder.”

As for outcomes, she pointed to a 2019 study that surveyed 387 parents who had children with eating disorders, mostly AN. Only 20% made a full recovery. “The farther you get out from the onset of anorexia, the less likely you are to achieve recovery,” Dr. Cacodcar said. “A lot of the control behaviors become very automatic.”
 

Determining capacity

In some cases of AN, psychiatrists must determine whether they have the capacity to make decisions about treatment, said Gabriel Jerkins, MD, a chief resident of psychiatry at the University of Florida. At issue is “the ability of the individual to comprehend the information being disclosed in regard to their condition, as well as the nature and potential risks and benefits of the proposed treatment alternatives. They include of course, no treatment at all.”

 

 

University of Florida
Dr. Gabriel Jerkins
Patients with AN often lack insight into their condition and may disagree with clinicians who say they’re underweight because of AN, Dr. Jerkins said. This raises more questions: Do they have capacity if they don’t understand what’s wrong with them? And could their own malnutrition affect their cognition?

“We know psychiatric conditions can limit one’s ability to appreciate consequence,” he said.

One option is to seek to institutionalize patients with severe AN because they are a danger to themselves. Clinicians opted to not do this in the case of the patient profiled by Dr. Cacodcar, the one with the BMI of 12.2 who didn’t want inpatient or residential care. (A 5-foot-8 person with a BMI of 12.2 would weigh 80 pounds.)

“The main reason we did not hospitalize her is because an appropriate level of care was not going to be readily available,” Dr. Cacodcar said, and her treatment would have been substandard.

Fortunately, the woman did return after a couple of months and accept residential care. No facility in Florida was willing to accept her because of her low BMI, but she did find one in North Carolina, where she stayed for 2 months. She’s doing well, and her BMI is now 21, Dr. Cacodcar said.

The patient’s story shows that involuntary hospitalization “is not necessarily the best course of action,” Dr. Cacodcar said. “It wasn’t necessarily going to be in the patient’s best interest.”

In another case, a 22-year-old woman had severe AN. She had been a gymnast and dancer, Dr. Jerkins said, “and I include that here only because of how commonly we see that kind of demographic information in patients with anorexia nervosa.”

Her BMI was 17.5, and clinicians discussed feeding her through a feeding tube. She still had “no insight that her symptoms were related to an underlying eating disorder,” Dr. Jerkins said, raising questions about her capacity. “Is it sufficient that the patient understand that she’s underweight?”

Ultimately, he said, she received a feeding tube at a time when her BMI had dropped to 16.3. She suffered from an infection but ultimately she improved and has stabilized at a BMI of around 19, he said.

“I do wonder if allowing her to have some control of how to pursue treatment in this case was therapeutic in a way,” he said, especially since matters of control are deeply ingrained in AN.

University of Florida
Dr. Lauren Ashley Schmidt
Another case didn’t have a positive outcome. A postmenopausal woman was hospitalized for hypoglycemia secondary to overuse of insulin, recalled University of Florida psychiatrist Lauren Ashley Schmidt, MD. And the insulin use was linked to obsessive-compulsive disorder.

A former physical trainer, the patient had a BMI of 17.6. The University of Florida’s eating disorder clinic sent her to an out-of-state residential program, but she was discharged when her blood glucose dipped dangerously low as she compulsively exercised. Her BMI dipped to 16.2.

Dr. Schmidt had the patient involuntarily committed upon her return, but she went home after 12 days with no change in her weight. Ultimately, the patient was referred to an eating disorder center in Colorado for medical stabilization where she was given a feeding tube. But her medical situation was so dire that she was discharged to her home, where she went on hospice and died.

“I’m not arguing for or against the term ‘terminal anorexia.’ But this case does make me think about it,” said Dr. Schmidt. She was referring to a controversial term used by some clinicians to refer to patients who face inevitable death from AN. “Unfortunately,” wrote the authors of a recent report proposing a clinical definition, “these patients and their carers often receive minimal support from eating disorders health professionals who are conflicted about terminal care, and who are hampered and limited by the paucity of literature on end-of-life care for those with anorexia nervosa.”

 

– The patient at a Florida eating disorder clinic said she was eating plenty even though she acknowledged purging once a week. But her vitals told a different story: Her body mass index (BMI) was 12.2, down from 14.8 a couple of years before – a dangerously low value.

University of Florida
Dr. Nadia Surexa Cacodcar
The pandemic had disrupted her care, said Nadia Surexa Cacodcar, MD, a resident psychiatrist at the University of Florida, Gainesville, in a presentation at the annual meeting of the American Psychiatric Association. To make matters more challenging, coordinating with the patient’s primary doctor was difficult because her electronic health records couldn’t communicate with one another.

While the woman agreed that she needed to gain weight, she refused advice to pursue residential or inpatient treatment. This left her team with a big dilemma: Should they force her into care because she wouldn’t eat? Was that even possible under the law? Did she have the capacity to make decisions about her future? What other alternatives were there?

Determining the best course of action in cases like this is anything but simple, Dr. Cacodcar said. To make matters more complicated, there are numerous hurdles facing clinicians as they try to help their patients with advanced and severe anorexia nervosa (AN).

“At least in my state of Florida, we know that it can be very, very hard to get patients expert care,” said Dr. Cacodcar. And, she said, it can be even tougher for certain types of patients, such as those who are LGBTQ and those who have severe illness but don’t meet diagnostic criteria.

As Dr. Cacodcar noted, the APA released new practice guidelines regarding eating disorders earlier this year, marking their first update since 2006. The guidelines highlight research suggesting that 0.8% of the U.S. population will develop AN over their lifetimes. Recent studies also suggest that eating disorders became more common during the pandemic, with one analysis finding that the number of patients receiving inpatient care doubled in 2020.

“Mortality rates are high for anorexia nervosa, up to 10 times higher than matched controls,” Dr. Cacodcar said. “It has the highest mortality rate of the psychiatric diseases with the exception of opioid use disorder.”

As for outcomes, she pointed to a 2019 study that surveyed 387 parents who had children with eating disorders, mostly AN. Only 20% made a full recovery. “The farther you get out from the onset of anorexia, the less likely you are to achieve recovery,” Dr. Cacodcar said. “A lot of the control behaviors become very automatic.”
 

Determining capacity

In some cases of AN, psychiatrists must determine whether the patient has the capacity to make decisions about treatment, said Gabriel Jerkins, MD, a chief resident of psychiatry at the University of Florida. At issue is “the ability of the individual to comprehend the information being disclosed in regard to their condition, as well as the nature and potential risks and benefits of the proposed treatment alternatives. They include, of course, no treatment at all.”

 

 

Patients with AN often lack insight into their condition and may disagree with clinicians who say they’re underweight because of AN, Dr. Jerkins said. This raises more questions: Do they have capacity if they don’t understand what’s wrong with them? And could their own malnutrition affect their cognition?

“We know psychiatric conditions can limit one’s ability to appreciate consequence,” he said.

One option is to seek to institutionalize patients with severe AN because they are a danger to themselves. Clinicians opted to not do this in the case of the patient profiled by Dr. Cacodcar, the one with the BMI of 12.2 who didn’t want inpatient or residential care. (A 5-foot-8 person with a BMI of 12.2 would weigh 80 pounds.)

“The main reason we did not hospitalize her is because an appropriate level of care was not going to be readily available,” Dr. Cacodcar said, and her treatment would have been substandard.

Fortunately, the woman did return after a couple of months and accept residential care. No facility in Florida was willing to accept her because of her low BMI, but she did find one in North Carolina, where she stayed for 2 months. She’s doing well, and her BMI is now 21, Dr. Cacodcar said.

The patient’s story shows that involuntary hospitalization “is not necessarily the best course of action,” Dr. Cacodcar said. “It wasn’t necessarily going to be in the patient’s best interest.”

In another case, a 22-year-old woman had severe AN. She had been a gymnast and dancer, Dr. Jerkins said, “and I include that here only because of how commonly we see that kind of demographic information in patients with anorexia nervosa.”

Her BMI was 17.5, and clinicians discussed feeding her through a feeding tube. She still had “no insight that her symptoms were related to an underlying eating disorder,” Dr. Jerkins said, raising questions about her capacity. “Is it sufficient that the patient understand that she’s underweight?”

Ultimately, he said, she received a feeding tube at a time when her BMI had dropped to 16.3. She suffered from an infection but ultimately she improved and has stabilized at a BMI of around 19, he said.

“I do wonder if allowing her to have some control of how to pursue treatment in this case was therapeutic in a way,” he said, especially since matters of control are deeply ingrained in AN.

Another case didn’t have a positive outcome. A postmenopausal woman was hospitalized for hypoglycemia secondary to overuse of insulin, recalled University of Florida psychiatrist Lauren Ashley Schmidt, MD. And the insulin use was linked to obsessive-compulsive disorder.

A former physical trainer, the patient had a BMI of 17.6. The University of Florida’s eating disorder clinic sent her to an out-of-state residential program, but she was discharged when her blood glucose dipped dangerously low as she compulsively exercised. Her BMI dipped to 16.2.

Dr. Schmidt had the patient involuntarily committed upon her return, but she went home after 12 days with no change in her weight. Ultimately, the patient was referred to an eating disorder center in Colorado for medical stabilization where she was given a feeding tube. But her medical situation was so dire that she was discharged to her home, where she went on hospice and died.

“I’m not arguing for or against the term ‘terminal anorexia.’ But this case does make me think about it,” said Dr. Schmidt. She was referring to a controversial term used by some clinicians to refer to patients who face inevitable death from AN. “Unfortunately,” wrote the authors of a recent report proposing a clinical definition, “these patients and their carers often receive minimal support from eating disorders health professionals who are conflicted about terminal care, and who are hampered and limited by the paucity of literature on end-of-life care for those with anorexia nervosa.”
Article Source: AT APA 2023
