Diagnostic Errors in Hospitalized Patients
Abstract
Diagnostic errors in hospitalized patients are a leading cause of preventable morbidity and mortality. Significant challenges in defining and measuring diagnostic errors and underlying process failure points have led to considerable variability in reported rates of diagnostic errors and adverse outcomes. In this article, we explore the diagnostic process and its discrete components, emphasizing the centrality of the patient in decision-making as well as the continuous nature of the process. We review the incidence of diagnostic errors in hospitalized patients and different methodological approaches that have been used to arrive at these estimates. We discuss different but interdependent provider- and system-related process-failure points that lead to diagnostic errors. We examine specific challenges related to measurement of diagnostic errors and describe traditional and novel approaches that are being used to obtain the most precise estimates. Finally, we examine various patient-, provider-, and organizational-level interventions that have been proposed to improve diagnostic safety in hospitalized patients.
Keywords: diagnostic error, hospital medicine, patient safety.
Diagnosis is defined as a “pre-existing set of categories agreed upon by the medical profession to designate a specific condition.”1 The diagnostic process involves obtaining a clinical history, performing a physical examination, conducting diagnostic testing, and consulting with other clinical providers to gather data that are relevant to understanding the underlying disease processes. This exercise involves generating hypotheses and updating prior probabilities as more information and evidence become available. Throughout this process of information gathering, integration, and interpretation, there is an ongoing assessment of whether sufficient and necessary knowledge has been obtained to make an accurate diagnosis and provide appropriate treatment.2
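The updating of prior probabilities described above follows Bayes' theorem. As a minimal illustrative sketch (not part of any cited framework), the following Python function converts a pre-test probability and a test's sensitivity and specificity into a post-test probability via likelihood ratios; the numbers used are hypothetical.

```python
def update_probability(pretest_prob, sensitivity, specificity, positive=True):
    """Bayesian update of a diagnostic probability using likelihood ratios."""
    if positive:
        lr = sensitivity / (1 - specificity)   # LR+ for a positive result
    else:
        lr = (1 - sensitivity) / specificity   # LR- for a negative result
    pretest_odds = pretest_prob / (1 - pretest_prob)
    posttest_odds = pretest_odds * lr
    return posttest_odds / (1 + posttest_odds)

# A hypothetical test with 90% sensitivity and 90% specificity (LR+ = 9)
# raises a 20% pre-test probability to roughly 69% when positive.
print(round(update_probability(0.20, 0.90, 0.90, positive=True), 2))  # 0.69
```

Each new piece of evidence can be chained through this calculation, with the post-test probability of one step serving as the pre-test probability of the next.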
Diagnostic error is defined as a missed opportunity to make a timely diagnosis as part of this iterative process, including failure to communicate the diagnosis to the patient in a timely manner.3 It can be categorized as a missed, delayed, or incorrect diagnosis based on available evidence at the time. Establishing the correct diagnosis has important implications. A timely and precise diagnosis gives the patient the highest probability of a positive health outcome, reflects an appropriate understanding of underlying disease processes, and is consistent with their overall goals of care.3 When diagnostic errors occur, they can cause patient harm. Adverse events due to medical errors, including diagnostic errors, are estimated to be the third leading cause of death in the United States.4 Most people will experience at least 1 diagnostic error in their lifetime. In the 2015 National Academy of Medicine report Improving Diagnosis in Healthcare, diagnostic errors were identified as a major hazard as well as an opportunity to improve patient outcomes.2
Diagnostic errors during hospitalizations are especially concerning, as they are more likely to be implicated in a wider spectrum of harm, including permanent disability and death. This has become even more relevant for hospital medicine physicians and other clinical providers as they encounter increasing cognitive and administrative workloads, rising dissatisfaction and burnout, and unique obstacles such as night-time scheduling.5
Incidence of Diagnostic Errors in Hospitalized Patients
Several methodological approaches have been used to estimate the incidence of diagnostic errors in hospitalized patients. These include retrospective reviews of a sample of all hospital admissions, evaluations of selected adverse outcomes including autopsy studies, patient and provider surveys, and malpractice claims. Laboratory testing audits and secondary reviews in other diagnostic subspecialties (eg, radiology, pathology, and microbiology) are also essential to improving diagnostic performance in these specialized fields, which in turn affects overall hospital diagnostic error rates.6-8 Each of these approaches offers unique insight into the degree to which potential harms, ranging from temporary impairment to permanent disability and death, are attributable to different failure points in the diagnostic process.
Large retrospective chart reviews of random hospital admissions remain the most accurate way to determine the overall incidence of diagnostic errors in hospitalized patients.9 The Harvard Medical Practice Study, published in 1991, laid the groundwork for measuring the incidence of adverse events in hospitalized patients and assessing their relation to medical error, negligence, and disability. Reviewing 30,121 randomly selected records from 51 randomly selected acute care hospitals in New York State, the study found that adverse events occurred in 3.7% of hospitalizations, that diagnostic errors accounted for 13.8% of these events, and that these errors were likely attributable to negligence in 74.7% of cases. The study not only outlined individual-level process failures, but also focused attention on some of the systemic causes, setting the agenda for quality improvement research in hospital-based care for years to come.10-12 A recent systematic review and meta-analysis of 22 hospital admission studies found a pooled rate of 0.7% (95% CI, 0.5%-1.1%) for harmful diagnostic errors.9 It found significant variation in the rates of adverse events, diagnostic errors, and the range of diagnoses that were missed, primarily because of differences in the pre-test probability of diagnostic error across the sampled cohorts and heterogeneity in how the studies defined and measured “diagnostic error.” The analysis, however, did not account for diagnostic errors that did not result in patient harm (missed opportunities); therefore, it likely significantly underestimated the true incidence of diagnostic errors in these study populations. Table 1 summarizes some of the key studies that have examined the incidence of harmful diagnostic errors in hospitalized patients.9-21
The chief limitation of reviewing random hospital admissions is that, since overall rates of diagnostic errors are still relatively low, a large number of case reviews are required to identify a sufficient sample of adverse outcomes to gain a meaningful understanding of the underlying process failure points and develop tools for remediation. Patient and provider surveys or data from malpractice claims can be high-yield starting points for research on process errors.22,23 Reviews of enriched cohorts of adverse outcomes, such as rapid-response events, intensive care unit (ICU) transfers, deaths, and hospital readmissions, can be an efficient way to identify the process failures that lead to the greatest harm. Depending on the research approach and the types of underlying patient populations sampled, rates of diagnostic errors in these high-risk groups have been estimated to be approximately 5% to 20%, or even higher.6,24-31 For example, a retrospective study of 391 cases of unplanned 7-day readmissions found that 5.6% of cases contained at least 1 diagnostic error during the index admission.32 In a study conducted at 6 Belgian acute-care hospitals, 56% of patients requiring an unplanned transfer to a higher level of care were determined to have had an adverse event, and of these adverse events, 12.4% were associated with errors in diagnosis.29 A systematic review of 16 hospital-based studies estimated that 3.1% of all inpatient deaths were likely preventable, which corresponded to 22,165 deaths annually in the United States.30 Another such review of 31 autopsy studies reported that 28% of autopsied ICU patients had at least 1 misdiagnosis; of these diagnostic errors, 8% were classified as potentially lethal, and 15% were considered major but not lethal.31 Significant drawbacks of such enriched cohort studies, however, are their poor generalizability and inability to detect failure points that do not lead to patient harm (near-miss events).33
Causes of Diagnostic Errors in Hospitalized Patients
All aspects of the diagnostic process are susceptible to errors. These errors stem from a variety of faulty processes, including failure of the patient to engage with the health care system (eg, due to lack of insurance or transportation, or delay in seeking care); failure in information gathering (eg, missed history or exam findings, ordering wrong tests, laboratory errors); failure in information interpretation (eg, exam finding or test result misinterpretation); inaccurate hypothesis generation (eg, due to suboptimal prioritization or weighing of supporting evidence); and failure in communication (eg, with other team members or with the patient).2,34 Reasons for diagnostic process failures vary widely across different health care settings. While clinician assessment errors (eg, failure to consider or alternatively overweigh competing diagnoses) and errors in testing and the monitoring phase (eg, failure to order or follow up diagnostic tests) can lead to a majority of diagnostic errors in some patient populations, in other settings, social (eg, poor health literacy, punitive cultural practices) and economic factors (eg, lack of access to appropriate diagnostic tests or to specialty expertise) play a more prominent role.34,35
The Figure describes the relationship between components of the diagnostic process and subsequent outcomes, including diagnostic process failures, diagnostic errors, and absence or presence of patient harm.2,36,37 It reemphasizes the centrality of the patient in decision-making and the continuous nature of the process. The Figure also illustrates that only a minority of process failures result in diagnostic errors, and a smaller proportion of diagnostic errors actually lead to patient harm. Conversely, it also shows that diagnostic errors can happen without any obvious process-failure points, and, similarly, patient harm can take place in the absence of any evident diagnostic errors.36-38 Finally, it highlights the need to incorporate feedback from process failures, diagnostic errors, and favorable and unfavorable patient outcomes in order to inform future quality improvement efforts and research.
A significant proportion of diagnostic errors are due to system-related vulnerabilities, such as limitations in the availability, adoption, or quality of workforce training, health informatics resources, and diagnostic capabilities. Lack of an institutional culture that promotes safety and transparency also predisposes to diagnostic errors.39,40 The other major domain of process failures is related to cognitive errors in clinician decision-making. Anchoring, confirmation bias, availability bias, and base-rate neglect are some of the common cognitive biases that, along with personality traits (aversion to risk or ambiguity, overconfidence) and affective biases (influence of emotion on decision-making), often determine how diagnostic resources are used and whether diagnostic performance falls short.41,42 Further, implicit biases related to age, race, gender, and sexual orientation contribute to disparities in access to health care and outcomes.43 In a large number of cases of preventable adverse outcomes, however, multiple interdependent individual and system-related failure points combine to produce diagnostic error and patient harm.6,32
Challenges in Defining and Measuring Diagnostic Errors
In order to develop effective, evidence-based interventions to reduce diagnostic errors in hospitalized patients, it is essential to be able to first operationally define, and then accurately measure, diagnostic errors and the process failures that contribute to these errors in a standardized way that is reproducible across different settings.6,44 There are a number of obstacles in this endeavor.
A fundamental problem is that establishing a diagnosis is not a single act but a process. Patterns of symptoms and clinical presentations often differ for the same disease. Information required to make a diagnosis is usually gathered in stages, where the clinician obtains additional data, while considering many possibilities, of which 1 may be ultimately correct. Diagnoses evolve over time and in different care settings. “The most likely diagnosis” is not always the same as “the final correct diagnosis.” Moreover, the diagnostic process is influenced by patients’ individual clinical courses and preferences over time. This makes determination of missed, delayed, or incorrect diagnoses challenging.45,46
For hospitalized patients, the goal is generally to first rule out more serious and acute conditions (eg, pulmonary embolism or stroke), even if their probability is rather low. Conversely, a diagnosis that appears less consequential if delayed (eg, chronic anemia of unclear etiology) might not be pursued on an urgent basis, and is often left to outpatient providers to examine, but still may manifest in downstream harm (eg, delayed diagnosis of gastrointestinal malignancy or recurrent admissions for heart failure due to missed iron-deficiency anemia). Assigning disease likelihoods in hindsight is therefore highly subjective and not always accurate. This can be particularly difficult when clinician and other team deliberations are not recorded in their entirety.47
Another hurdle in the practice of diagnostic medicine is to preserve the balance between underdiagnosing versus pursuing overly aggressive diagnostic approaches. Conducting laboratory, imaging, or other diagnostic studies without a clear shared understanding of how they would affect clinical decision-making (eg, use of prostate-specific antigen to detect prostate cancer) not only leads to increased costs but can also delay appropriate care. Worse, subsequent unnecessary diagnostic tests and treatments can sometimes lead to serious harm.48,49
Finally, retrospective reviews by clinicians are subject to multiple potential limitations that include failure to create well-defined research questions, poorly developed inclusion and exclusion criteria, and issues related to inter- and intra-rater reliability.50 These methodological deficiencies can occur despite following "best practice" guidelines during the study planning, execution, and analysis phases. They further add to the challenge of defining and measuring diagnostic errors.47
Strategies to Improve Measurement of Diagnostic Errors
Development of new methodologies to reliably measure diagnostic errors is an area of active research. The advancement of uniform and universally agreed-upon frameworks to define and identify process failure points and diagnostic errors would help reduce measurement error and support development and testing of interventions that could be generalizable across different health care settings. To more accurately define and measure diagnostic errors, several novel approaches have been proposed (Table 2).
The Safer Dx framework is a comprehensive instrument developed to advance the measurement of diagnostic errors. For an episode of care under review, the instrument scores various items to determine the likelihood of a diagnostic error. These items evaluate multiple dimensions affecting diagnostic performance and measurements across 3 broad domains: structure (provider and organizational characteristics—from everyone involved with patient care, to computing infrastructure, to policies and regulations), process (elements of the patient-provider encounter, diagnostic test performance and follow-up, and subspecialty- and referral-specific factors), and outcome (establishing an accurate and timely diagnosis as opposed to a missed, delayed, or incorrect diagnosis). This instrument has been revised and can be further modified by a variety of stakeholders, including clinicians, health care organizations, and policymakers, to identify potential diagnostic errors in a standardized way for patient safety and quality improvement research.51,52
Use of standardized tools, such as the Diagnosis Error Evaluation and Research (DEER) taxonomy, can help to identify and classify specific failure points across different diagnostic process dimensions.37 These failure points can be classified into: issues related to patient presentation or access to health care; failure to obtain or misinterpretation of history or physical exam findings; errors in use of diagnostic tests due to technical or clinician-related factors; failures in appropriate weighing of evidence and hypothesis generation; errors associated with the referral or consultation process; and failure to monitor the patient or obtain timely follow-up.34 The DEER taxonomy can also be modified based on specific research questions and study populations. Further, it can be recategorized to correspond to the Safer Dx framework's diagnostic process dimensions to provide insights into reasons for specific process failures and to develop new interventions to mitigate errors and patient harm.6
Since a majority of diagnostic errors do not lead to actual harm, use of “triggers” or clues (eg, procedure-related complications, patient falls, transfers to a higher level of care, readmissions within 30 days) can be a more efficient method to identify diagnostic errors and adverse events that do cause harm. The Global Trigger Tool, developed by the Institute for Healthcare Improvement, uses this strategy. This tool has been shown to identify a significantly higher number of serious adverse events than comparable methods.53 This facilitates selection and development of strategies at the institutional level that are most likely to improve patient outcomes.24
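Trigger-based screening of this kind amounts to filtering admissions for clue events before committing to a full chart review. A minimal sketch in Python, assuming a simple record format with a list of event codes per admission (the trigger list and field names here are hypothetical, not the Global Trigger Tool's actual criteria):

```python
# Hypothetical trigger list -- illustrative only, not the Institute for
# Healthcare Improvement's actual Global Trigger Tool criteria.
TRIGGERS = {"icu_transfer", "rapid_response", "readmission_30d", "death"}

def flag_for_review(admission):
    """Return True if any trigger event occurred during the admission."""
    return bool(TRIGGERS & set(admission["events"]))

admissions = [
    {"id": 1, "events": ["discharge_home"]},
    {"id": 2, "events": ["rapid_response", "icu_transfer"]},
]
# Only flagged admissions proceed to structured chart review.
flagged = [a["id"] for a in admissions if flag_for_review(a)]
print(flagged)  # [2]
```

The efficiency gain comes from concentrating scarce reviewer time on the enriched subset of admissions most likely to contain harmful errors.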
Encouraging and facilitating voluntary or prompted reporting from patients and clinicians can also play an important role in capturing diagnostic errors. Patients and clinicians are not only the key stakeholders but are also uniquely placed within the diagnostic process to detect and report potential errors.25,54 Patient-safety-event reporting systems, such as RL6, play a vital role in reporting near-misses and adverse events. These systems provide a mechanism for team members at all levels within the hospital to contribute toward reporting patient adverse events, including those arising from diagnostic errors.55 The Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey is the first standardized, nationally reported patient survey designed to measure patients’ perceptions of their hospital experience. The US Centers for Medicare and Medicaid Services (CMS) publishes HCAHPS results on its website 4 times a year, which serves as an important incentive for hospitals to improve patient safety and quality of health care delivery.56
Another novel approach links multiple symptoms to a range of target diseases using the Symptom-Disease Pair Analysis of Diagnostic Error (SPADE) framework. Using “big data” technologies, this technique can help discover otherwise hidden symptom-disease links and improve overall diagnostic performance. This approach is proposed for both case-control (look-back) and cohort (look-forward) studies assessing diagnostic errors and misdiagnosis-related harms. For example, starting with a known diagnosis with high potential for harm (eg, stroke), the “look-back” approach can be used to identify high-risk symptoms (eg, dizziness, vertigo). In the “look-forward” approach, a single symptom or exposure risk factor known to be frequently misdiagnosed (eg, dizziness) can be analyzed to identify potential adverse disease outcomes (eg, stroke, migraine).57
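The "look-back" arm of this kind of analysis can be sketched as a tally of symptom visits that shortly preceded a first diagnosis of the target disease. The data format below (per-patient lists of day/code pairs) and the codes are hypothetical simplifications for illustration, not the SPADE framework's actual implementation:

```python
from collections import Counter

def symptoms_before_diagnosis(patients, disease, window_days):
    """Tally symptom codes recorded in visits that preceded a patient's
    first diagnosis of `disease` by no more than `window_days`."""
    counts = Counter()
    for visits in patients:                # visits: list of (day, code) pairs
        dx_days = [day for day, code in visits if code == disease]
        if not dx_days:
            continue
        first_dx = min(dx_days)
        for day, code in visits:
            if code != disease and 0 < first_dx - day <= window_days:
                counts[code] += 1
    return counts

# Hypothetical visit histories: two of three stroke patients had a recent
# dizziness visit, suggesting a candidate symptom-disease pair to investigate.
patients = [
    [(0, "dizziness"), (10, "stroke")],
    [(0, "headache"), (40, "stroke")],   # outside the 30-day window
    [(5, "dizziness"), (6, "vertigo"), (20, "stroke")],
]
print(symptoms_before_diagnosis(patients, "stroke", 30))
```

Symptom-disease pairs that recur far more often than chance would predict become candidates for closer review as possible missed opportunities.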
Many large ongoing studies looking at diagnostic errors among hospitalized patients, such as Utility of Predictive Systems to identify Inpatient Diagnostic Errors (UPSIDE),58 Patient Safety Learning Lab (PSLL),59 and Achieving Diagnostic Excellence through Prevention and Teamwork (ADEPT),60 are using structured chart review methodologies that incorporate many of the above strategies in combination. Cases triggered by certain events (eg, ICU transfer, death, rapid response event, new or worsening acute kidney injury) are reviewed using validated tools, including the Safer Dx framework and the DEER taxonomy, to provide the most precise estimates of the burden of diagnostic errors in hospitalized patients. These estimates may be much higher than previously predicted using traditional chart review approaches.6,24 For example, a recently published study of 2809 random admissions in 11 Massachusetts hospitals identified 978 adverse events but only 10 diagnostic errors (diagnostic error rate, 0.4%).19 This was likely because the trigger method used in the study did not examine the diagnostic process as critically as the Safer Dx framework and DEER taxonomy do, thereby underestimating the total number of diagnostic errors. Further, these ongoing studies (eg, UPSIDE, ADEPT) aim to employ advanced machine-learning methods to create models that can improve overall diagnostic performance. This would pave the way to test and build novel, efficient, and scalable interventions to reduce diagnostic errors and improve patient outcomes.
Strategies to Improve Diagnostic Safety in Hospitalized Patients
Disease-specific biomedical research, as well as advances in laboratory, imaging, and other technologies, play a critical role in improving diagnostic accuracy. However, these technical approaches do not address many of the broader clinician- and system-level failure points and opportunities for improvement. Various patient-, provider-, and organizational-level interventions that could make diagnostic processes more resilient and reduce the risk of error and patient harm have been proposed.61
Among these strategies are approaches to empower patients and their families. Fostering therapeutic relationships between patients and members of the care team is essential to reducing diagnostic errors.62 Facilitating timely access to health records, ensuring transparency in decision making, and tailoring communication strategies to patients’ cultural and educational backgrounds can reduce harm.63 Similarly, at the system level, enhancing communication among different providers by use of tools such as structured handoffs can prevent communication breakdowns and facilitate positive outcomes.64
Interventions targeted at individual health care providers, such as educational programs to improve content-specific knowledge, can enhance diagnostic performance. Regular feedback, strategies to enhance equity, and fostering an environment where all providers are actively encouraged to think critically and participate in the diagnostic process (training programs to use “diagnostic time-outs” and making it a “team sport”) can improve clinical reasoning.65,66 Use of standardized patients can help identify individual-level cognitive failure points and facilitate creation of new interventions to improve clinical decision-making processes.67
Novel health information technologies can further augment these efforts. These include effective documentation by maintaining dynamic and accurate patient histories, problem lists, and medication lists68-70; use of electronic health record–based algorithms to identify potential diagnostic delays for serious conditions71,72; use of telemedicine technologies to improve accessibility and coordination73; application of mobile health and wearable technologies to facilitate data-gathering and care delivery74,75; and use of computerized decision-support tools, including applications to interpret electrocardiograms, imaging studies, and other diagnostic tests.76
Use of precision medicine, powered by new artificial intelligence (AI) tools, is becoming more widespread. Algorithms powered by AI can augment and sometimes even outperform clinician decision-making in areas such as oncology, radiology, and primary care.77 Creation of large biobanks like the All of Us research program can be used to study thousands of environmental and genetic risk factors and health conditions simultaneously, and help identify specific treatments that work best for people of different backgrounds.78 Active research in these areas holds great promise in terms of how and when we diagnose diseases and make appropriate preventative and treatment decisions. Significant scientific, ethical, and regulatory challenges will need to be overcome before these technologies can address some of the most complex problems in health care.79
Finally, diagnostic performance is affected by the external environment, including the functioning of the medical liability system. Diagnostic errors that lead to patient harm are a leading cause of malpractice claims.80 Developing a legal environment, in collaboration with patient advocacy groups and health care organizations, that promotes and facilitates timely disclosure of diagnostic errors could decrease the incentive to hide errors, advance care processes, and improve outcomes.81,82
Conclusion
The burden of diagnostic errors in hospitalized patients is unacceptably high and remains an underemphasized cause of preventable morbidity and mortality. Diagnostic errors often result from a breakdown in multiple interdependent processes that involve patient-, provider-, and system-level factors. Significant challenges remain in defining and identifying diagnostic errors as well as underlying process-failure points. The most effective interventions to reduce diagnostic errors will require greater patient participation in the diagnostic process and a mix of evidence-based interventions that promote individual-provider excellence as well as system-level changes. Further research and collaboration among various stakeholders should help improve diagnostic safety for hospitalized patients.
Corresponding author: Abhishek Goyal, MD, MPH; [email protected]
Disclosures: Dr. Dalal disclosed receiving income ≥ $250 from MayaMD.
1. Graber ML, Franklin N, Gordon R. Diagnostic error in internal medicine. Arch Intern Med. 2005;165(13):1493-1499. doi:10.1001/archinte.165.13.1493
2. National Academies of Sciences, Engineering, and Medicine. Improving Diagnosis in Health Care. The National Academies Press; 2015. doi:10.17226/21794
3. Singh H, Graber ML. Improving diagnosis in health care—the next imperative for patient safety. N Engl J Med. 2015;373(26):2493-2495. doi:10.1056/NEJMp1512241
4. Makary MA, Daniel M. Medical error—the third leading cause of death in the US. BMJ. 2016;353:i2139. doi:10.1136/bmj.i2139
5. Flanders SA, Centor B, Weber V, McGinn T, Desalvo K, Auerbach A. Challenges and opportunities in academic hospital medicine: report from the academic hospital medicine summit. J Gen Intern Med. 2009;24(5):636-641. doi:10.1007/s11606-009-0944-6
6. Griffin JA, Carr K, Bersani K, et al. Analyzing diagnostic errors in the acute setting: a process-driven approach. Diagnosis (Berl). 2021;9(1):77-88. doi:10.1515/dx-2021-0033
7. Itri JN, Tappouni RR, McEachern RO, Pesch AJ, Patel SH. Fundamentals of diagnostic error in imaging. RadioGraphics. 2018;38(6):1845-1865. doi:10.1148/rg.2018180021
8. Hammerling JA. A review of medical errors in laboratory diagnostics and where we are today. Lab Med. 2012;43(2):41-44. doi:10.1309/LM6ER9WJR1IHQAUY
9. Gunderson CG, Bilan VP, Holleck JL, et al. Prevalence of harmful diagnostic errors in hospitalised adults: a systematic review and meta-analysis. BMJ Qual Saf. 2020;29(12):1008-1018. doi:10.1136/bmjqs-2019-010822
10. Brennan TA, Leape LL, Laird NM, et al. Incidence of adverse events and negligence in hospitalized patients. Results of the Harvard Medical Practice Study I. N Engl J Med. 1991;324(6):370-376. doi:10.1056/NEJM199102073240604
11. Leape LL, Brennan TA, Laird N, et al. The nature of adverse events in hospitalized patients. Results of the Harvard Medical Practice Study II. N Engl J Med. 1991;324(6):377-384. doi:10.1056/NEJM199102073240605
12. Localio AR, Lawthers AG, Brennan TA, et al. Relation between malpractice claims and adverse events due to negligence. Results of the Harvard Medical Practice Study III. N Engl J Med. 1991;325(4):245-251. doi:10.1056/NEJM199107253250405
13. Wilson RM, Michel P, Olsen S, et al. Patient safety in developing countries: retrospective estimation of scale and nature of harm to patients in hospital. BMJ. 2012;344:e832. doi:10.1136/bmj.e832
14. Wilson RM, Runciman WB, Gibberd RW, Harrison BT, Newby L, Hamilton JD. The Quality in Australian Health Care Study. Med J Aust. 1995;163(9):458-471. doi:10.5694/j.1326-5377.1995.tb124691.x
15. Thomas EJ, Studdert DM, Burstin HR, et al. Incidence and types of adverse events and negligent care in Utah and Colorado. Med Care. 2000;38(3):261-271. doi:10.1097/00005650-200003000-00003
16. Baker GR, Norton PG, Flintoft V, et al. The Canadian Adverse Events Study: the incidence of adverse events among hospital patients in Canada. CMAJ. 2004;170(11):1678-1686. doi:10.1503/cmaj.1040498
17. Davis P, Lay-Yee R, Briant R, Ali W, Scott A, Schug S. Adverse events in New Zealand public hospitals II: preventability and clinical context. N Z Med J. 2003;116(1183):U624.
18. Aranaz-Andrés JM, Aibar-Remón C, Vitaller-Murillo J, et al. Incidence of adverse events related to health care in Spain: results of the Spanish National Study of Adverse Events. J Epidemiol Community Health. 2008;62(12):1022-1029. doi:10.1136/jech.2007.065227
19. Bates DW, Levine DM, Salmasian H, et al. The safety of inpatient health care. N Engl J Med. 2023;388(2):142-153. doi:10.1056/NEJMsa2206117
20. Soop M, Fryksmark U, Köster M, Haglund B. The incidence of adverse events in Swedish hospitals: a retrospective medical record review study. Int J Qual Health Care. 2009;21(4):285-291. doi:10.1093/intqhc/mzp025
21. Rafter N, Hickey A, Conroy RM, et al. The Irish National Adverse Events Study (INAES): the frequency and nature of adverse events in Irish hospitals—a retrospective record review study. BMJ Qual Saf. 2017;26(2):111-119. doi:10.1136/bmjqs-2015-004828
22. Blendon RJ, DesRoches CM, Brodie M, et al. Views of practicing physicians and the public on medical errors. N Engl J Med. 2002;347(24):1933-1940. doi:10.1056/NEJMsa022151
23. Saber Tehrani AS, Lee H, Mathews SC, et al. 25-year summary of US malpractice claims for diagnostic errors 1986-2010: an analysis from the National Practitioner Data Bank. BMJ Qual Saf. 2013;22(8):672-680. doi:10.1136/bmjqs-2012-001550
24. Malik MA, Motta-Calderon D, Piniella N, et al. A structured approach to EHR surveillance of diagnostic error in acute care: an exploratory analysis of two institutionally-defined case cohorts. Diagnosis (Berl). 2022;9(4):446-457. doi:10.1515/dx-2022-0032
25. Graber ML. The incidence of diagnostic error in medicine. BMJ Qual Saf. 2013;22(suppl 2):ii21-ii27. doi:10.1136/bmjqs-2012-001615
26. Bergl PA, Taneja A, El-Kareh R, Singh H, Nanchal RS. Frequency, risk factors, causes, and consequences of diagnostic errors in critically ill medical patients: a retrospective cohort study. Crit Care Med. 2019;47(11):e902-e910. doi:10.1097/CCM.0000000000003976
27. Hogan H, Healey F, Neale G, Thomson R, Vincent C, Black N. Preventable deaths due to problems in care in English acute hospitals: a retrospective case record review study. BMJ Qual Saf. 2012;21(9):737-745. doi:10.1136/bmjqs-2011-001159
28. Bergl PA, Nanchal RS, Singh H. Diagnostic error in the critically ill: defining the problem and exploring next steps to advance intensive care unit safety. Ann Am Thorac Soc. 2018;15(8):903-907. doi:10.1513/AnnalsATS.201801-068PS
29. Marquet K, Claes N, De Troy E, et al. One fourth of unplanned transfers to a higher level of care are associated with a highly preventable adverse event: a patient record review in six Belgian hospitals. Crit Care Med. 2015;43(5):1053-1061. doi:10.1097/CCM.0000000000000932
30. Rodwin BA, Bilan VP, Merchant NB, et al. Rate of preventable mortality in hospitalized patients: a systematic review and meta-analysis. J Gen Intern Med. 2020;35(7):2099-2106. doi:10.1007/s11606-019-05592-5
31. Winters B, Custer J, Galvagno SM, et al. Diagnostic errors in the intensive care unit: a systematic review of autopsy studies. BMJ Qual Saf. 2012;21(11):894-902. doi:10.1136/bmjqs-2012-000803
32. Raffel KE, Kantor MA, Barish P, et al. Prevalence and characterisation of diagnostic error among 7-day all-cause hospital medicine readmissions: a retrospective cohort study. BMJ Qual Saf. 2020;29(12):971-979. doi:10.1136/bmjqs-2020-010896
33. Weingart SN, Pagovich O, Sands DZ, et al. What can hospitalized patients tell us about adverse events? learning from patient-reported incidents. J Gen Intern Med. 2005;20(9):830-836. doi:10.1111/j.1525-1497.2005.0180.x
34. Schiff GD, Hasan O, Kim S, et al. Diagnostic error in medicine: analysis of 583 physician-reported errors. Arch Intern Med. 2009;169(20):1881-1887. doi:10.1001/archinternmed.2009.333
35. Singh H, Schiff GD, Graber ML, Onakpoya I, Thompson MJ. The global burden of diagnostic errors in primary care. BMJ Qual Saf. 2017;26(6):484-494. doi:10.1136/bmjqs-2016-005401
36. Schiff GD, Leape LL. Commentary: how can we make diagnosis safer? Acad Med. 2012;87(2):135-138. doi:10.1097/ACM.0b013e31823f711c
37. Schiff GD, Kim S, Abrams R, et al. Diagnosing diagnosis errors: lessons from a multi-institutional collaborative project. In: Henriksen K, Battles JB, Marks ES, Lewin DI, eds. Advances in Patient Safety: From Research to Implementation. Volume 2: Concepts and Methodology. AHRQ Publication No. 05-0021-2. Agency for Healthcare Research and Quality (US); 2005. Accessed January 16, 2023. http://www.ncbi.nlm.nih.gov/books/NBK20492/
38. Newman-Toker DE. A unified conceptual model for diagnostic errors: underdiagnosis, overdiagnosis, and misdiagnosis. Diagnosis (Berl). 2014;1(1):43-48. doi:10.1515/dx-2013-0027
39. Abimanyi-Ochom J, Bohingamu Mudiyanselage S, Catchpool M, Firipis M, Wanni Arachchige Dona S, Watts JJ. Strategies to reduce diagnostic errors: a systematic review. BMC Med Inform Decis Mak. 2019;19(1):174. doi:10.1186/s12911-019-0901-1
40. Gupta A, Harrod M, Quinn M, et al. Mind the overlap: how system problems contribute to cognitive failure and diagnostic errors. Diagnosis (Berl). 2018;5(3):151-156. doi:10.1515/dx-2018-0014
41. Saposnik G, Redelmeier D, Ruff CC, Tobler PN. Cognitive biases associated with medical decisions: a systematic review. BMC Med Inform Decis Mak. 2016;16:138. doi:10.1186/s12911-016-0377-1
42. Croskerry P. The importance of cognitive errors in diagnosis and strategies to minimize them. Acad Med. 2003;78(8):775-780. doi:10.1097/00001888-200308000-00003
43. Chapman EN, Kaatz A, Carnes M. Physicians and implicit bias: how doctors may unwittingly perpetuate health care disparities. J Gen Intern Med. 2013;28(11):1504-1510. doi:10.1007/s11606-013-2441-1
44. Zwaan L, Singh H. The challenges in defining and measuring diagnostic error. Diagnosis (Berl). 2015;2(2):97-103. doi:10.1515/dx-2014-0069
45. Arkes HR, Wortmann RL, Saville PD, Harkness AR. Hindsight bias among physicians weighing the likelihood of diagnoses. J Appl Psychol. 1981;66(2):252-254.
46. Singh H. Editorial: Helping health care organizations to define diagnostic errors as missed opportunities in diagnosis. Jt Comm J Qual Patient Saf. 2014;40(3):99-101. doi:10.1016/s1553-7250(14)40012-6
47. Vassar M, Holzmann M. The retrospective chart review: important methodological considerations. J Educ Eval Health Prof. 2013;10:12. doi:10.3352/jeehp.2013.10.12
48. Welch HG, Black WC. Overdiagnosis in cancer. J Natl Cancer Inst. 2010;102(9):605-613. doi:10.1093/jnci/djq099
49. Moynihan R, Doust J, Henry D. Preventing overdiagnosis: how to stop harming the healthy. BMJ. 2012;344:e3502. doi:10.1136/bmj.e3502
50. Hayward RA, Hofer TP. Estimating hospital deaths due to medical errors: preventability is in the eye of the reviewer. JAMA. 2001;286(4):415-420. doi:10.1001/jama.286.4.415
51. Singh H, Sittig DF. Advancing the science of measurement of diagnostic errors in healthcare: the Safer Dx framework. BMJ Qual Saf. 2015;24(2):103-110. doi:10.1136/bmjqs-2014-003675
52. Singh H, Khanna A, Spitzmueller C, Meyer AND. Recommendations for using the Revised Safer Dx Instrument to help measure and improve diagnostic safety. Diagnosis (Berl). 2019;6(4):315-323. doi:10.1515/dx-2019-0012
53. Classen DC, Resar R, Griffin F, et al. “Global trigger tool” shows that adverse events in hospitals may be ten times greater than previously measured. Health Aff (Millwood). 2011;30(4):581-589. doi:10.1377/hlthaff.2011.0190
54. Schiff GD. Minimizing diagnostic error: the importance of follow-up and feedback. Am J Med. 2008;121(5 suppl):S38-S42. doi:10.1016/j.amjmed.2008.02.004
55. Mitchell I, Schuster A, Smith K, Pronovost P, Wu A. Patient safety incident reporting: a qualitative study of thoughts and perceptions of experts 15 years after “To Err is Human.” BMJ Qual Saf. 2016;25(2):92-99. doi:10.1136/bmjqs-2015-004405
56. Mazurenko O, Collum T, Ferdinand A, Menachemi N. Predictors of hospital patient satisfaction as measured by HCAHPS: a systematic review. J Healthc Manag. 2017;62(4):272-283. doi:10.1097/JHM-D-15-00050
57. Liberman AL, Newman-Toker DE. Symptom-Disease Pair Analysis of Diagnostic Error (SPADE): a conceptual framework and methodological approach for unearthing misdiagnosis-related harms using big data. BMJ Qual Saf. 2018;27(7):557-566. doi:10.1136/bmjqs-2017-007032
58. Utility of Predictive Systems to Identify Inpatient Diagnostic Errors: the UPSIDE study. NIH RePORTER. Accessed January 14, 2023. https://reporter.nih.gov/search/rpoHXlEAcEudQV3B9ld8iw/project-details/10020962
59. Overview of Patient Safety Learning Laboratory (PSLL) Projects. Agency for Healthcare Research and Quality. Accessed January 14, 2023. https://www.ahrq.gov/patient-safety/resources/learning-lab/index.html
60. Achieving Diagnostic Excellence through Prevention and Teamwork (ADEPT). NIH RePORTER. Accessed January 14, 2023. https://reporter.nih.gov/project-details/10642576
61. Zwaan L, Singh H. Diagnostic error in hospitals: finding forests not just the big trees. BMJ Qual Saf. 2020;29(12):961-964. doi:10.1136/bmjqs-2020-011099
62. Longtin Y, Sax H, Leape LL, Sheridan SE, Donaldson L, Pittet D. Patient participation: current knowledge and applicability to patient safety. Mayo Clin Proc. 2010;85(1):53-62. doi:10.4065/mcp.2009.0248
63. Murphy DR, Singh H, Berlin L. Communication breakdowns and diagnostic errors: a radiology perspective. Diagnosis (Berl). 2014;1(4):253-261. doi:10.1515/dx-2014-0035
64. Singh H, Naik AD, Rao R, Petersen LA. Reducing diagnostic errors through effective communication: harnessing the power of information technology. J Gen Intern Med. 2008;23(4):489-494. doi:10.1007/s11606-007-0393-z
65. Singh H, Connor DM, Dhaliwal G. Five strategies for clinicians to advance diagnostic excellence. BMJ. 2022;376:e068044. doi:10.1136/bmj-2021-068044
66. Yale S, Cohen S, Bordini BJ. Diagnostic time-outs to improve diagnosis. Crit Care Clin. 2022;38(2):185-194. doi:10.1016/j.ccc.2021.11.008
67. Schwartz A, Peskin S, Spiro A, Weiner SJ. Impact of unannounced standardized patient audit and feedback on care, documentation, and costs: an experiment and claims analysis. J Gen Intern Med. 2021;36(1):27-34. doi:10.1007/s11606-020-05965-1
68. Carpenter JD, Gorman PN. Using medication list—problem list mismatches as markers of potential error. Proc AMIA Symp. 2002:106-110.
69. Hron JD, Manzi S, Dionne R, et al. Electronic medication reconciliation and medication errors. Int J Qual Health Care. 2015;27(4):314-319. doi:10.1093/intqhc/mzv046
70. Graber ML, Siegal D, Riah H, Johnston D, Kenyon K. Electronic health record–related events in medical malpractice claims. J Patient Saf. 2019;15(2):77-85. doi:10.1097/PTS.0000000000000240
71. Murphy DR, Wu L, Thomas EJ, Forjuoh SN, Meyer AND, Singh H. Electronic trigger-based intervention to reduce delays in diagnostic evaluation for cancer: a cluster randomized controlled trial. J Clin Oncol. 2015;33(31):3560-3567. doi:10.1200/JCO.2015.61.1301
72. Singh H, Giardina TD, Forjuoh SN, et al. Electronic health record-based surveillance of diagnostic errors in primary care. BMJ Qual Saf. 2012;21(2):93-100. doi:10.1136/bmjqs-2011-000304
73. Armaignac DL, Saxena A, Rubens M, et al. Impact of telemedicine on mortality, length of stay, and cost among patients in progressive care units: experience from a large healthcare system. Crit Care Med. 2018;46(5):728-735. doi:10.1097/CCM.0000000000002994
74. MacKinnon GE, Brittain EL. Mobile health technologies in cardiopulmonary disease. Chest. 2020;157(3):654-664. doi:10.1016/j.chest.2019.10.015
75. DeVore AD, Wosik J, Hernandez AF. The future of wearables in heart failure patients. JACC Heart Fail. 2019;7(11):922-932. doi:10.1016/j.jchf.2019.08.008
76. Tsai TL, Fridsma DB, Gatti G. Computer decision support as a source of interpretation error: the case of electrocardiograms. J Am Med Inform Assoc. 2003;10(5):478-483. doi:10.1197/jamia.M1279
77. Lin SY, Mahoney MR, Sinsky CA. Ten ways artificial intelligence will transform primary care. J Gen Intern Med. 2019;34(8):1626-1630. doi:10.1007/s11606-019-05035-1
78. Ramirez AH, Gebo KA, Harris PA. Progress with the All Of Us research program: opening access for researchers. JAMA. 2021;325(24):2441-2442. doi:10.1001/jama.2021.7702
79. Johnson KB, Wei W, Weeraratne D, et al. Precision medicine, AI, and the future of personalized health care. Clin Transl Sci. 2021;14(1):86-93. doi:10.1111/cts.12884
80. Gupta A, Snyder A, Kachalia A, Flanders S, Saint S, Chopra V. Malpractice claims related to diagnostic errors in the hospital. BMJ Qual Saf. 2018;27(1):53-60. doi:10.1136/bmjqs-2017-006774
81. Renkema E, Broekhuis M, Ahaus K. Conditions that influence the impact of malpractice litigation risk on physicians’ behavior regarding patient safety. BMC Health Serv Res. 2014;14(1):38. doi:10.1186/1472-6963-14-38
82. Kachalia A, Mello MM, Nallamothu BK, Studdert DM. Legal and policy interventions to improve patient safety. Circulation. 2016;133(7):661-671. doi:10.1161/CIRCULATIONAHA.115.015880
Diagnostic error is defined as a missed opportunity to make a timely diagnosis as part of this iterative process, including failure to communicate the diagnosis to the patient in a timely manner.3 It can be categorized as a missed, delayed, or incorrect diagnosis based on the evidence available at the time. Establishing the correct diagnosis has important implications. A timely and precise diagnosis gives the patient the highest probability of a positive health outcome, reflects an appropriate understanding of the underlying disease processes, and is consistent with the patient’s overall goals of care.3 When diagnostic errors occur, they can cause patient harm. Adverse events due to medical errors, including diagnostic errors, are estimated to be the third leading cause of death in the United States.4 Most people will experience at least 1 diagnostic error in their lifetime. In its 2015 report Improving Diagnosis in Health Care, the National Academy of Medicine identified diagnostic errors as a major hazard as well as an opportunity to improve patient outcomes.2
Diagnostic errors during hospitalizations are especially concerning, as they are more likely to be implicated in a wider spectrum of harm, including permanent disability and death. This has become even more relevant for hospital medicine physicians and other clinical providers as they encounter increasing cognitive and administrative workloads, rising dissatisfaction and burnout, and unique obstacles such as night-time scheduling.5
Incidence of Diagnostic Errors in Hospitalized Patients
Several methodological approaches have been used to estimate the incidence of diagnostic errors in hospitalized patients. These include retrospective reviews of a sample of all hospital admissions, evaluations of selected adverse outcomes including autopsy studies, patient and provider surveys, and malpractice claims. Laboratory testing audits and secondary reviews in other diagnostic subspecialties (eg, radiology, pathology, and microbiology) are also essential to improving diagnostic performance in these specialized fields, which in turn affects overall hospital diagnostic error rates.6-8 These diverse approaches provide unique insights into the degree to which potential harms, ranging from temporary impairment to permanent disability and death, are attributable to different failure points in the diagnostic process.
Large retrospective chart reviews of random hospital admissions remain the most accurate way to determine the overall incidence of diagnostic errors in hospitalized patients.9 The Harvard Medical Practice Study, published in 1991, laid the groundwork for measuring the incidence of adverse events in hospitalized patients and assessing their relation to medical error, negligence, and disability. Reviewing 30,121 randomly selected records from 51 randomly selected acute care hospitals in New York State, the study found that adverse events occurred in 3.7% of hospitalizations, that diagnostic errors accounted for 13.8% of these events, and that these errors were likely attributable to negligence in 74.7% of cases. The study not only outlined individual-level process failures, but also focused attention on some of the systemic causes, setting the agenda for quality improvement research in hospital-based care for years to come.10-12 A recent systematic review and meta-analysis of 22 hospital admission studies found a pooled rate of 0.7% (95% CI, 0.5%-1.1%) for harmful diagnostic errors.9 It found significant variation in the rates of adverse events, diagnostic errors, and the range of diagnoses that were missed. This variation stemmed primarily from differences in the pretest probability of detecting diagnostic errors in the specific cohorts studied, as well as from heterogeneity in study definitions and methodologies, especially how “diagnostic error” was defined and measured. The analysis, however, did not account for diagnostic errors that were not related to patient harm (missed opportunities); therefore, it likely significantly underestimated the true incidence of diagnostic errors in these study populations. Table 1 summarizes some of the key studies that have examined the incidence of harmful diagnostic errors in hospitalized patients.9-21
The chief limitation of reviewing random hospital admissions is that, because overall rates of diagnostic errors are still relatively low, a large number of case reviews are required to identify a sufficient sample of adverse outcomes to gain a meaningful understanding of the underlying process failure points and to develop tools for remediation. Patient and provider surveys or data from malpractice claims can be high-yield starting points for research on process errors.22,23 Reviews of enriched cohorts of adverse outcomes, such as rapid-response events, intensive care unit (ICU) transfers, deaths, and hospital readmissions, can be an efficient way to identify the process failures that lead to the greatest harm. Depending on the research approach and the types of underlying patient populations sampled, rates of diagnostic errors in these high-risk groups have been estimated at approximately 5% to 20%, or even higher.6,24-31 For example, a retrospective study of 391 cases of unplanned 7-day readmissions found that 5.6% of cases contained at least 1 diagnostic error during the index admission.32 In a study conducted at 6 Belgian acute-care hospitals, 56% of patients requiring an unplanned transfer to a higher level of care were determined to have had an adverse event; of these adverse events, 12.4% were associated with errors in diagnosis.29 A systematic review of 16 hospital-based studies estimated that 3.1% of all inpatient deaths were likely preventable, corresponding to 22,165 deaths annually in the United States.30 Another such review of 31 autopsy studies reported that 28% of autopsied ICU patients had at least 1 misdiagnosis; of these diagnostic errors, 8% were classified as potentially lethal, and 15% were considered major but not lethal.31 Significant drawbacks of such enriched cohort studies, however, are their poor generalizability and inability to detect failure points that do not lead to patient harm (near-miss events).33
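The sample-size arithmetic behind this limitation can be illustrated with a short calculation. The error rates below come from the studies cited above (0.7% pooled rate in random admissions; roughly 5%-20% in enriched cohorts); the target number of error cases is a hypothetical choice for illustration only.

```python
# Illustration (hypothetical target of 50 error cases): compare how many chart
# reviews are expected to be needed at the pooled rate from random admissions
# versus the rates reported for enriched high-risk cohorts.

def charts_needed(target_cases: int, error_rate: float) -> int:
    """Expected number of chart reviews to accumulate `target_cases` errors."""
    return round(target_cases / error_rate)

random_sample = charts_needed(50, 0.007)  # pooled harmful-error rate, random admissions
enriched_low = charts_needed(50, 0.05)    # lower bound reported for enriched cohorts
enriched_high = charts_needed(50, 0.20)   # upper bound reported for enriched cohorts

print(random_sample, enriched_low, enriched_high)  # 7143 1000 250
```

In other words, an enriched cohort can reduce the review burden by more than an order of magnitude, at the cost of generalizability and of missing near-miss events.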
Causes of Diagnostic Errors in Hospitalized Patients
All aspects of the diagnostic process are susceptible to errors. These errors stem from a variety of faulty processes, including failure of the patient to engage with the health care system (eg, due to lack of insurance or transportation, or delay in seeking care); failure in information gathering (eg, missed history or exam findings, ordering wrong tests, laboratory errors); failure in information interpretation (eg, exam finding or test result misinterpretation); inaccurate hypothesis generation (eg, due to suboptimal prioritization or weighing of supporting evidence); and failure in communication (eg, with other team members or with the patient).2,34 Reasons for diagnostic process failures vary widely across different health care settings. While clinician assessment errors (eg, failure to consider, or conversely overweighting of, competing diagnoses) and errors in the testing and monitoring phase (eg, failure to order or follow up diagnostic tests) account for the majority of diagnostic errors in some patient populations, in other settings, social factors (eg, poor health literacy, punitive cultural practices) and economic factors (eg, lack of access to appropriate diagnostic tests or to specialty expertise) play a more prominent role.34,35
The Figure describes the relationship between components of the diagnostic process and subsequent outcomes, including diagnostic process failures, diagnostic errors, and absence or presence of patient harm.2,36,37 It reemphasizes the centrality of the patient in decision-making and the continuous nature of the process. The Figure also illustrates that only a minority of process failures result in diagnostic errors, and a smaller proportion of diagnostic errors actually lead to patient harm. Conversely, it also shows that diagnostic errors can happen without any obvious process-failure points, and, similarly, patient harm can take place in the absence of any evident diagnostic errors.36-38 Finally, it highlights the need to incorporate feedback from process failures, diagnostic errors, and favorable and unfavorable patient outcomes in order to inform future quality improvement efforts and research.
A significant proportion of diagnostic errors are due to system-related vulnerabilities, such as limitations in the availability, adoption, or quality of workforce training, health informatics resources, and diagnostic capabilities. The lack of an institutional culture that promotes safety and transparency also predisposes to diagnostic errors.39,40 The other major domain of process failures relates to cognitive errors in clinician decision-making. Anchoring, confirmation bias, availability bias, and base-rate neglect are some of the common cognitive biases that, along with personality traits (aversion to risk or ambiguity, overconfidence) and affective biases (the influence of emotion on decision-making), often determine the degree of resource utilization and the likelihood of suboptimal diagnostic performance.41,42 Further, implicit biases related to age, race, gender, and sexual orientation contribute to disparities in access to health care and outcomes.43 In a large number of cases of preventable adverse outcomes, however, multiple interdependent individual and system-related failure points lead to diagnostic error and patient harm.6,32
Challenges in Defining and Measuring Diagnostic Errors
In order to develop effective, evidence-based interventions to reduce diagnostic errors in hospitalized patients, it is essential to be able to first operationally define, and then accurately measure, diagnostic errors and the process failures that contribute to these errors in a standardized way that is reproducible across different settings.6,44 There are a number of obstacles in this endeavor.
A fundamental problem is that establishing a diagnosis is not a single act but a process. Patterns of symptoms and clinical presentations often differ for the same disease. Information required to make a diagnosis is usually gathered in stages, as the clinician obtains additional data while considering many possibilities, of which 1 may ultimately prove correct. Diagnoses evolve over time and in different care settings. “The most likely diagnosis” is not always the same as “the final correct diagnosis.” Moreover, the diagnostic process is influenced by patients’ individual clinical courses and preferences over time. All of this makes determination of missed, delayed, or incorrect diagnoses challenging.45,46
For hospitalized patients, the goal is generally to first rule out more serious and acute conditions (eg, pulmonary embolism or stroke), even if their probability is rather low. Conversely, a diagnosis that appears less consequential if delayed (eg, chronic anemia of unclear etiology) might not be pursued on an urgent basis and is often left to outpatient providers to pursue, but it may still manifest in downstream harm (eg, delayed diagnosis of gastrointestinal malignancy or recurrent admissions for heart failure due to missed iron-deficiency anemia). Therefore, assigning diagnosis likelihoods in hindsight can be highly subjective and not always accurate. This is particularly difficult when clinician and other team deliberations are not recorded in their entirety.47
Another hurdle in the practice of diagnostic medicine is preserving the balance between underdiagnosis and overly aggressive diagnostic evaluation. Conducting laboratory, imaging, or other diagnostic studies without a clear shared understanding of how they would affect clinical decision-making (eg, use of prostate-specific antigen to detect prostate cancer) not only increases costs but can also delay appropriate care. Worse, subsequent unnecessary diagnostic tests and treatments can sometimes lead to serious harm.48,49
Finally, retrospective reviews by clinicians are subject to multiple potential limitations, including poorly defined research questions, poorly developed inclusion and exclusion criteria, and issues related to inter- and intra-rater reliability.50 These methodological deficiencies can occur despite following “best practice” guidelines during the study planning, execution, and analysis phases. They further add to the challenge of defining and measuring diagnostic errors.47
Strategies to Improve Measurement of Diagnostic Errors
Development of new methodologies to reliably measure diagnostic errors is an area of active research. The advancement of uniform and universally agreed-upon frameworks to define and identify process failure points and diagnostic errors would help reduce measurement error and support development and testing of interventions that could be generalizable across different health care settings. To more accurately define and measure diagnostic errors, several novel approaches have been proposed (Table 2).
The Safer Dx framework is a comprehensive tool developed to advance the discipline of measuring diagnostic errors. For an episode of care under review, the instrument scores various items to determine the likelihood of a diagnostic error. These items evaluate multiple dimensions affecting diagnostic performance and measurement across 3 broad domains: structure (provider and organizational characteristics—from everyone involved with patient care, to computing infrastructure, to policies and regulations), process (elements of the patient-provider encounter, diagnostic test performance and follow-up, and subspecialty- and referral-specific factors), and outcome (establishing an accurate and timely diagnosis as opposed to a missed, delayed, or incorrect diagnosis). This instrument has been revised and can be further modified by a variety of stakeholders, including clinicians, health care organizations, and policymakers, to identify potential diagnostic errors in a standardized way for patient safety and quality improvement research.51,52
Use of standardized tools, such as the Diagnosis Error Evaluation and Research (DEER) taxonomy, can help to identify and classify specific failure points across different diagnostic process dimensions.37 These failure points can be classified into: issues related to patient presentation or access to health care; failure to obtain, or misinterpretation of, history or physical exam findings; errors in the use of diagnostic tests due to technical or clinician-related factors; failures in appropriate weighing of evidence and hypothesis generation; errors associated with the referral or consultation process; and failure to monitor the patient or obtain timely follow-up.34 The DEER taxonomy can also be modified based on specific research questions and study populations. Further, it can be recategorized to correspond to the Safer Dx framework diagnostic process dimensions to provide insights into the reasons for specific process failures and to develop new interventions to mitigate errors and patient harm.6
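As a minimal sketch of how such a taxonomy is used in practice, the dimensions paraphrased above can serve as coding categories that are tallied across reviewed cases; the case codings below are invented for illustration, and a real review would assign them with the validated instrument itself.

```python
# Hypothetical sketch: tally process-failure points from chart reviews into
# DEER-style dimensions (paraphrased from the taxonomy described in the text).
from collections import Counter

DEER_DIMENSIONS = [
    "access/presentation",
    "history/physical exam",
    "diagnostic tests",
    "assessment/hypothesis generation",
    "referral/consultation",
    "monitoring/follow-up",
]

# Each reviewed case may be coded with more than one failure dimension.
reviewed_cases = [
    ["diagnostic tests", "monitoring/follow-up"],
    ["assessment/hypothesis generation"],
    ["assessment/hypothesis generation", "diagnostic tests"],
]

tally = Counter(dim for case in reviewed_cases for dim in case)
for dim in DEER_DIMENSIONS:
    print(f"{dim}: {tally.get(dim, 0)}")
```

Aggregating codes this way is what lets a program target interventions at the dimensions that fail most often in its own population.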
Since a majority of diagnostic errors do not lead to actual harm, use of “triggers” or clues (eg, procedure-related complications, patient falls, transfers to a higher level of care, readmissions within 30 days) can be a more efficient method to identify diagnostic errors and adverse events that do cause harm. The Global Trigger Tool, developed by the Institute for Healthcare Improvement, uses this strategy. This tool has been shown to identify a significantly higher number of serious adverse events than comparable methods.53 This facilitates selection and development of strategies at the institutional level that are most likely to improve patient outcomes.24
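The screening logic of a trigger tool can be sketched in a few lines. The triggers below are examples drawn from the text (transfers to a higher level of care, 30-day readmissions, and similar clues); the record schema and field names are hypothetical, not the Global Trigger Tool's actual specification.

```python
# Minimal sketch, in the spirit of trigger-based screening: flag admissions
# whose records contain clues warranting a focused diagnostic-error review.
# Trigger names and the record schema are assumptions for illustration.

TRIGGERS = {
    "icu_transfer": "unplanned transfer to a higher level of care",
    "readmission_30d": "readmission within 30 days",
    "rapid_response": "rapid-response team activation",
    "death": "inpatient death",
}

def flag_for_review(admission: dict) -> list[str]:
    """Return the descriptions of all triggers present in one admission record."""
    return [desc for key, desc in TRIGGERS.items() if admission.get(key)]

admissions = [
    {"id": "A1", "icu_transfer": True},
    {"id": "A2", "readmission_30d": True, "rapid_response": True},
    {"id": "A3"},  # no triggers: not selected for focused review
]

for a in admissions:
    hits = flag_for_review(a)
    if hits:
        print(a["id"], "->", ", ".join(hits))
```

Only the flagged minority of charts then undergoes full manual review, which is what makes the approach efficient.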
Encouraging and facilitating voluntary or prompted reporting from patients and clinicians can also play an important role in capturing diagnostic errors. Patients and clinicians are not only the key stakeholders but are also uniquely placed within the diagnostic process to detect and report potential errors.25,54 Patient-safety-event reporting systems, such as RL6, play a vital role in reporting near-misses and adverse events. These systems provide a mechanism for team members at all levels within the hospital to contribute toward reporting patient adverse events, including those arising from diagnostic errors.55 The Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey is the first standardized, nationally reported patient survey designed to measure patients’ perceptions of their hospital experience. The US Centers for Medicare and Medicaid Services (CMS) publishes HCAHPS results on its website 4 times a year, which serves as an important incentive for hospitals to improve patient safety and quality of health care delivery.56
Another novel approach links multiple symptoms to a range of target diseases using the Symptom-Disease Pair Analysis of Diagnostic Error (SPADE) framework. Using “big data” technologies, this technique can help discover otherwise hidden symptom-disease links and improve overall diagnostic performance. This approach is proposed for both case-control (look-back) and cohort (look-forward) studies assessing diagnostic errors and misdiagnosis-related harms. For example, starting with a known diagnosis with high potential for harm (eg, stroke), the “look-back” approach can be used to identify high-risk symptoms (eg, dizziness, vertigo). In the “look-forward” approach, a single symptom or exposure risk factor known to be frequently misdiagnosed (eg, dizziness) can be analyzed to identify potential adverse disease outcomes (eg, stroke, migraine).57
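The "look-back" logic can be sketched as a join over visit records: for each admission with the harm-prone diagnosis, search the same patient's recent treat-and-release visits for the linked symptom. The visit data and the 30-day association window below are hypothetical assumptions for illustration.

```python
# Hedged sketch of a SPADE-style "look-back" query on hypothetical visit data:
# a stroke admission preceded by a recent dizziness visit flags a possible
# missed opportunity. The window length is an assumed parameter.
from datetime import date, timedelta

LOOKBACK = timedelta(days=30)  # assumed symptom-disease association window

# (patient_id, visit_date, kind, label); kind is "symptom_visit" or "admission"
visits = [
    ("p1", date(2023, 1, 2), "symptom_visit", "dizziness"),
    ("p1", date(2023, 1, 20), "admission", "stroke"),
    ("p2", date(2023, 2, 1), "admission", "stroke"),  # no antecedent symptom visit
]

def look_back_pairs(visits, symptom, disease):
    """Yield (patient, symptom_date, admission_date) for linked symptom-disease pairs."""
    symptom_visits = [(p, d) for p, d, k, lbl in visits
                      if k == "symptom_visit" and lbl == symptom]
    for p, d, k, lbl in visits:
        if k == "admission" and lbl == disease:
            for sp, sd in symptom_visits:
                if sp == p and timedelta(0) < d - sd <= LOOKBACK:
                    yield (p, sd, d)

pairs = list(look_back_pairs(visits, "dizziness", "stroke"))
print(pairs)  # p1 is flagged; p2 is not
```

At population scale, the rate of such pairs relative to all symptom visits is what SPADE uses to estimate misdiagnosis-related harm.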
Many large ongoing studies of diagnostic errors among hospitalized patients, such as Utility of Predictive Systems to Identify Inpatient Diagnostic Errors (UPSIDE),58 Patient Safety Learning Lab (PSLL),59 and Achieving Diagnostic Excellence through Prevention and Teamwork (ADEPT),60 are using structured chart review methodologies that incorporate many of the above strategies in combination. Cases triggered by certain events (eg, ICU transfer, death, rapid response event, new or worsening acute kidney injury) are reviewed using validated tools, including the Safer Dx framework and the DEER taxonomy, to provide the most precise estimates of the burden of diagnostic errors in hospitalized patients. These estimates may be much higher than previously predicted using traditional chart review approaches.6,24 For example, a recently published study of 2809 random admissions in 11 Massachusetts hospitals identified 978 adverse events but only 10 diagnostic errors (diagnostic error rate, 0.4%).19 This was likely because the trigger method used in the study did not examine the diagnostic process as critically as the Safer Dx framework and DEER taxonomy do, thereby underestimating the total number of diagnostic errors. Further, these ongoing studies (eg, UPSIDE, ADEPT) aim to employ advanced machine-learning methods to create models that can improve overall diagnostic performance. This would pave the way to test and build novel, efficient, and scalable interventions to reduce diagnostic errors and improve patient outcomes.
Strategies to Improve Diagnostic Safety in Hospitalized Patients
Disease-specific biomedical research, as well as advances in laboratory, imaging, and other technologies, play a critical role in improving diagnostic accuracy. However, these technical approaches do not address many of the broader clinician- and system-level failure points and opportunities for improvement. Various patient-, provider-, and organizational-level interventions that could make diagnostic processes more resilient and reduce the risk of error and patient harm have been proposed.61
Among these strategies are approaches to empower patients and their families. Fostering therapeutic relationships between patients and members of the care team is essential to reducing diagnostic errors.62 Facilitating timely access to health records, ensuring transparency in decision making, and tailoring communication strategies to patients’ cultural and educational backgrounds can reduce harm.63 Similarly, at the system level, enhancing communication among different providers by use of tools such as structured handoffs can prevent communication breakdowns and facilitate positive outcomes.64
Interventions targeted at individual health care providers, such as educational programs to improve content-specific knowledge, can enhance diagnostic performance. Regular feedback, strategies to enhance equity, and fostering an environment where all providers are actively encouraged to think critically and participate in the diagnostic process (training programs to use “diagnostic time-outs” and making it a “team sport”) can improve clinical reasoning.65,66 Use of standardized patients can help identify individual-level cognitive failure points and facilitate creation of new interventions to improve clinical decision-making processes.67
Novel health information technologies can further augment these efforts. These include effective documentation by maintaining dynamic and accurate patient histories, problem lists, and medication lists68-70; use of electronic health record–based algorithms to identify potential diagnostic delays for serious conditions71,72; use of telemedicine technologies to improve accessibility and coordination73; application of mobile health and wearable technologies to facilitate data-gathering and care delivery74,75; and use of computerized decision-support tools, including applications to interpret electrocardiograms, imaging studies, and other diagnostic tests.76
Use of precision medicine, powered by new artificial intelligence (AI) tools, is becoming more widespread. Algorithms powered by AI can augment and sometimes even outperform clinician decision-making in areas such as oncology, radiology, and primary care.77 Creation of large biobanks like the All of Us research program can be used to study thousands of environmental and genetic risk factors and health conditions simultaneously, and help identify specific treatments that work best for people of different backgrounds.78 Active research in these areas holds great promise in terms of how and when we diagnose diseases and make appropriate preventative and treatment decisions. Significant scientific, ethical, and regulatory challenges will need to be overcome before these technologies can address some of the most complex problems in health care.79
Finally, diagnostic performance is affected by the external environment, including the functioning of the medical liability system. Diagnostic errors that lead to patient harm are a leading cause of malpractice claims.80 Developing a legal environment, in collaboration with patient advocacy groups and health care organizations, that promotes and facilitates timely disclosure of diagnostic errors could decrease the incentive to hide errors, advance care processes, and improve outcomes.81,82
Conclusion
The burden of diagnostic errors in hospitalized patients is unacceptably high and remains an underemphasized cause of preventable morbidity and mortality. Diagnostic errors often result from a breakdown in multiple interdependent processes that involve patient-, provider-, and system-level factors. Significant challenges remain in defining and identifying diagnostic errors as well as underlying process-failure points. The most effective interventions to reduce diagnostic errors will require greater patient participation in the diagnostic process and a mix of evidence-based interventions that promote individual-provider excellence as well as system-level changes. Further research and collaboration among various stakeholders should help improve diagnostic safety for hospitalized patients.
Corresponding author: Abhishek Goyal, MD, MPH; [email protected]
Disclosures: Dr. Dalal disclosed receiving income ≥ $250 from MayaMD.
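The hypothesis-updating step of this iterative process can be illustrated with a simple Bayes-rule sketch. The prevalence and test characteristics below are hypothetical, chosen only to show how a single test result shifts a pretest probability:

```python
def posterior_probability(prior: float, sensitivity: float, specificity: float,
                          test_positive: bool) -> float:
    """Update a disease probability with one test result via Bayes' rule."""
    if test_positive:
        likelihood_disease = sensitivity          # P(+ | disease)
        likelihood_no_disease = 1 - specificity   # P(+ | no disease)
    else:
        likelihood_disease = 1 - sensitivity      # P(- | disease)
        likelihood_no_disease = specificity       # P(- | no disease)
    numerator = prior * likelihood_disease
    return numerator / (numerator + (1 - prior) * likelihood_no_disease)

# Hypothetical example: a diagnosis suspected with 15% pretest probability;
# a rule-out test with 95% sensitivity and 50% specificity returns negative.
p = posterior_probability(prior=0.15, sensitivity=0.95, specificity=0.50,
                          test_positive=False)
print(f"Post-test probability: {p:.3f}")  # ~0.017: the diagnosis is largely ruled out
```

In practice the clinician repeats this update with each new piece of history, examination, or testing evidence, which is why an error at any single step can propagate through the rest of the diagnostic process.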
Diagnostic error is defined as a missed opportunity to make a timely diagnosis as part of this iterative process, including failure to communicate the diagnosis to the patient in a timely manner.3 It can be categorized as a missed, delayed, or incorrect diagnosis based on available evidence at the time. Establishing the correct diagnosis has important implications. A timely and precise diagnosis gives the patient the highest probability of a positive health outcome, reflects an appropriate understanding of the underlying disease processes, and is consistent with their overall goals of care.3 When diagnostic errors occur, they can cause patient harm. Adverse events due to medical errors, including diagnostic errors, are estimated to be the third leading cause of death in the United States.4 Most people will experience at least 1 diagnostic error in their lifetime. In the 2015 National Academy of Medicine report Improving Diagnosis in Health Care, diagnostic errors were identified as a major hazard as well as an opportunity to improve patient outcomes.2
Diagnostic errors during hospitalizations are especially concerning, as they are more likely to be implicated in a wider spectrum of harm, including permanent disability and death. This has become even more relevant for hospital medicine physicians and other clinical providers as they encounter increasing cognitive and administrative workloads, rising dissatisfaction and burnout, and unique obstacles such as night-time scheduling.5
Incidence of Diagnostic Errors in Hospitalized Patients
Several methodological approaches have been used to estimate the incidence of diagnostic errors in hospitalized patients. These include retrospective reviews of a sample of all hospital admissions, evaluations of selected adverse outcomes including autopsy studies, patient and provider surveys, and malpractice claims. Laboratory testing audits and secondary reviews in other diagnostic subspecialties (eg, radiology, pathology, and microbiology) are also essential to improving diagnostic performance in these specialized fields, which in turn affects overall hospital diagnostic error rates.6-8 These diverse approaches provide unique insights into the degree to which potential harms, ranging from temporary impairment to permanent disability and death, are attributable to different failure points in the diagnostic process.
Large retrospective chart reviews of random hospital admissions remain the most accurate way to determine the overall incidence of diagnostic errors in hospitalized patients.9 The Harvard Medical Practice Study, published in 1991, laid the groundwork for measuring the incidence of adverse events in hospitalized patients and assessing their relation to medical error, negligence, and disability. Reviewing 30,121 randomly selected records from 51 randomly selected acute care hospitals in New York State, the study found that adverse events occurred in 3.7% of hospitalizations, diagnostic errors accounted for 13.8% of these events, and these errors were likely attributable to negligence in 74.7% of cases. The study not only outlined individual-level process failures, but also focused attention on some of the systemic causes, setting the agenda for quality improvement research in hospital-based care for years to come.10-12 A recent systematic review and meta-analysis of 22 hospital admission studies found a pooled rate of 0.7% (95% CI, 0.5%-1.1%) for harmful diagnostic errors.9 It found significant variations in the rates of adverse events, diagnostic errors, and range of diagnoses that were missed. This was primarily because of variability in pretest probabilities of detecting diagnostic errors in these specific cohorts, as well as heterogeneity in study definitions and methodologies, especially regarding how they defined and measured “diagnostic error.” The analysis, however, did not account for diagnostic errors that were not related to patient harm (missed opportunities); therefore, it likely significantly underestimated the true incidence of diagnostic errors in these study populations. Table 1 summarizes some of the key studies that have examined the incidence of harmful diagnostic errors in hospitalized patients.9-21
The chief limitation of reviewing random hospital admissions is that, since overall rates of diagnostic errors are still relatively low, a large number of case reviews are required to identify a sufficient sample of adverse outcomes to gain a meaningful understanding of the underlying process failure points and develop tools for remediation. Patient and provider surveys or data from malpractice claims can be high-yield starting points for research on process errors.22,23 Reviews of enriched cohorts of adverse outcomes, such as rapid-response events, intensive care unit (ICU) transfers, deaths, and hospital readmissions, can be an efficient way to identify process failures that lead to greatest harm. Depending on the research approach and the types of underlying patient populations sampled, rates of diagnostic errors in these high-risk groups have been estimated to be approximately 5% to 20%, or even higher.6,24-31 For example, a retrospective study of 391 cases of unplanned 7-day readmissions found that 5.6% of cases contained at least 1 diagnostic error during the index admission.32 In a study conducted at 6 Belgian acute-care hospitals, 56% of patients requiring an unplanned transfer to a higher level of care were determined to have had an adverse event, and of these adverse events, 12.4% of cases were associated with errors in diagnosis.29 A systematic review of 16 hospital-based studies estimated that 3.1% of all inpatient deaths were likely preventable, which corresponded to 22,165 deaths annually in the United States.30 Another such review of 31 autopsy studies reported that 28% of autopsied ICU patients had at least 1 misdiagnosis; of these diagnostic errors, 8% were classified as potentially lethal, and 15% were considered major but not lethal.31 Significant drawbacks of such enriched cohort studies, however, are their poor generalizability and inability to detect failure points that do not lead to patient harm (near-miss events).33
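The review-burden problem described above can be made concrete with a short calculation; the rates below are illustrative values drawn from the ranges cited in this section:

```python
import math

def charts_needed(error_rate: float, target_errors: int) -> int:
    """Expected number of chart reviews needed to find a given number of
    diagnostic errors, assuming independent admissions (illustrative only)."""
    # Small epsilon guards against floating-point round-up in the division.
    return math.ceil(target_errors / error_rate - 1e-9)

# At the ~0.7% pooled harmful-error rate seen in random admissions, finding
# 100 error cases requires reviewing roughly 14,000 charts ...
random_reviews = charts_needed(0.007, 100)
# ... while an enriched cohort (eg, ICU transfers) at a ~10% error rate
# requires only about 1,000.
enriched_reviews = charts_needed(0.10, 100)
print(random_reviews, enriched_reviews)
```

This order-of-magnitude difference is why enriched cohorts are an efficient, if less generalizable, way to study process failures.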
Causes of Diagnostic Errors in Hospitalized Patients
All aspects of the diagnostic process are susceptible to errors. These errors stem from a variety of faulty processes, including failure of the patient to engage with the health care system (eg, due to lack of insurance or transportation, or delay in seeking care); failure in information gathering (eg, missed history or exam findings, ordering wrong tests, laboratory errors); failure in information interpretation (eg, exam finding or test result misinterpretation); inaccurate hypothesis generation (eg, due to suboptimal prioritization or weighing of supporting evidence); and failure in communication (eg, with other team members or with the patient).2,34 Reasons for diagnostic process failures vary widely across different health care settings. While clinician assessment errors (eg, failure to consider or alternatively overweigh competing diagnoses) and errors in testing and the monitoring phase (eg, failure to order or follow up diagnostic tests) can lead to a majority of diagnostic errors in some patient populations, in other settings, social (eg, poor health literacy, punitive cultural practices) and economic factors (eg, lack of access to appropriate diagnostic tests or to specialty expertise) play a more prominent role.34,35
The Figure describes the relationship between components of the diagnostic process and subsequent outcomes, including diagnostic process failures, diagnostic errors, and absence or presence of patient harm.2,36,37 It reemphasizes the centrality of the patient in decision-making and the continuous nature of the process. The Figure also illustrates that only a minority of process failures result in diagnostic errors, and a smaller proportion of diagnostic errors actually lead to patient harm. Conversely, it also shows that diagnostic errors can happen without any obvious process-failure points, and, similarly, patient harm can take place in the absence of any evident diagnostic errors.36-38 Finally, it highlights the need to incorporate feedback from process failures, diagnostic errors, and favorable and unfavorable patient outcomes in order to inform future quality improvement efforts and research.
A significant proportion of diagnostic errors are due to system-related vulnerabilities, such as limitations in the availability, adoption, or quality of workforce training, health informatics resources, and diagnostic capabilities. Lack of an institutional culture that promotes safety and transparency also predisposes to diagnostic errors.39,40 The other major domain of process failures is related to cognitive errors in clinician decision-making. Anchoring, confirmation bias, availability bias, and base-rate neglect are some of the common cognitive biases that, along with personality traits (aversion to risk or ambiguity, overconfidence) and affective biases (influence of emotion on decision-making), often determine the degree of utilization of resources and the possibility of suboptimal diagnostic performance.41,42 Further, implicit biases related to age, race, gender, and sexual orientation contribute to disparities in access to health care and outcomes.43 In a large number of cases of preventable adverse outcomes, however, there are multiple interdependent individual and system-related failure points that lead to diagnostic error and patient harm.6,32
Challenges in Defining and Measuring Diagnostic Errors
In order to develop effective, evidence-based interventions to reduce diagnostic errors in hospitalized patients, it is essential to be able to first operationally define, and then accurately measure, diagnostic errors and the process failures that contribute to these errors in a standardized way that is reproducible across different settings.6,44 There are a number of obstacles in this endeavor.
A fundamental problem is that establishing a diagnosis is not a single act but a process. Patterns of symptoms and clinical presentations often differ for the same disease. Information required to make a diagnosis is usually gathered in stages, where the clinician obtains additional data, while considering many possibilities, of which 1 may be ultimately correct. Diagnoses evolve over time and in different care settings. “The most likely diagnosis” is not always the same as “the final correct diagnosis.” Moreover, the diagnostic process is influenced by patients’ individual clinical courses and preferences over time. This makes determination of missed, delayed, or incorrect diagnoses challenging.45,46
For hospitalized patients, the goal is generally to first rule out more serious and acute conditions (eg, pulmonary embolism or stroke), even if their probability is rather low. Conversely, a diagnosis that appears less consequential if delayed (eg, chronic anemia of unclear etiology) might not be pursued on an urgent basis and is often left to outpatient providers to examine, but it may still manifest in downstream harm (eg, delayed diagnosis of gastrointestinal malignancy or recurrent admissions for heart failure due to missed iron-deficiency anemia). Therefore, estimating the likelihood of a given diagnosis in hindsight can be highly subjective and not always accurate. This can be particularly difficult when clinician and other team deliberations are not recorded in their entirety.47
Another hurdle in the practice of diagnostic medicine is to preserve the balance between underdiagnosing versus pursuing overly aggressive diagnostic approaches. Conducting laboratory, imaging, or other diagnostic studies without a clear shared understanding of how they would affect clinical decision-making (eg, use of prostate-specific antigen to detect prostate cancer) not only leads to increased costs but can also delay appropriate care. Worse, subsequent unnecessary diagnostic tests and treatments can sometimes lead to serious harm.48,49
Finally, retrospective reviews by clinicians are subject to multiple potential limitations that include failure to create well-defined research questions, poorly developed inclusion and exclusion criteria, and issues related to inter- and intra-rater reliability.50 These methodological deficiencies can occur despite following "best practice" guidelines during the study planning, execution, and analysis phases. They further add to the challenge of defining and measuring diagnostic errors.47
Strategies to Improve Measurement of Diagnostic Errors
Development of new methodologies to reliably measure diagnostic errors is an area of active research. The advancement of uniform and universally agreed-upon frameworks to define and identify process failure points and diagnostic errors would help reduce measurement error and support development and testing of interventions that could be generalizable across different health care settings. To more accurately define and measure diagnostic errors, several novel approaches have been proposed (Table 2).
The Safer Dx framework is a comprehensive instrument developed to advance the discipline of measuring diagnostic errors. For an episode of care under review, the instrument scores various items to determine the likelihood of a diagnostic error. These items evaluate multiple dimensions affecting diagnostic performance and measurements across 3 broad domains: structure (provider and organizational characteristics—from everyone involved with patient care, to computing infrastructure, to policies and regulations), process (elements of the patient-provider encounter, diagnostic test performance and follow-up, and subspecialty- and referral-specific factors), and outcome (establishing accurate and timely diagnosis as opposed to missed, delayed, or incorrect diagnosis). This instrument has been revised and can be further modified by a variety of stakeholders, including clinicians, health care organizations, and policymakers, to identify potential diagnostic errors in a standardized way for patient safety and quality improvement research.51,52
Use of standardized tools, such as the Diagnosis Error Evaluation and Research (DEER) taxonomy, can help to identify and classify specific failure points across different diagnostic process dimensions.37 These failure points can be classified into: issues related to patient presentation or access to health care; failure to obtain or misinterpretation of history or physical exam findings; errors in use of diagnostic tests due to technical or clinician-related factors; failures in appropriate weighing of evidence and hypothesis generation; errors associated with the referral or consultation process; and failure to monitor the patient or obtain timely follow-up.34 The DEER taxonomy can also be modified based on specific research questions and study populations. Further, it can be recategorized to correspond to Safer Dx framework diagnostic process dimensions to provide insights into reasons for specific process failures and to develop new interventions to mitigate errors and patient harm.6
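For chart-review research, failure-point categories of this kind are typically captured as structured codes. A minimal sketch of such a coding structure is shown below; the category names paraphrase this section and are not the official DEER instrument:

```python
from dataclasses import dataclass, field
from enum import Enum

class FailurePoint(Enum):
    # Paraphrased DEER-style process dimensions (not the official taxonomy wording)
    ACCESS_PRESENTATION = "patient presentation or access to care"
    HISTORY_EXAM = "history or exam not obtained or misinterpreted"
    DIAGNOSTIC_TESTS = "errors in test ordering, performance, or interpretation"
    HYPOTHESIS_WEIGHING = "failure to weigh evidence or generate hypotheses"
    REFERRAL_CONSULTATION = "errors in referral or consultation process"
    FOLLOW_UP_MONITORING = "failure to monitor or obtain timely follow-up"

@dataclass
class CaseReview:
    case_id: str
    failure_points: list = field(default_factory=list)

    def code(self, point: FailurePoint) -> None:
        """A reviewed case may be coded with multiple distinct failure points."""
        if point not in self.failure_points:
            self.failure_points.append(point)

review = CaseReview("admission-001")  # hypothetical case identifier
review.code(FailurePoint.HISTORY_EXAM)
review.code(FailurePoint.FOLLOW_UP_MONITORING)
print([p.value for p in review.failure_points])
```

Coding multiple failure points per case reflects the finding, noted earlier, that most preventable adverse outcomes involve several interdependent process failures.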
Since a majority of diagnostic errors do not lead to actual harm, use of “triggers” or clues (eg, procedure-related complications, patient falls, transfers to a higher level of care, readmissions within 30 days) can be a more efficient method to identify diagnostic errors and adverse events that do cause harm. The Global Trigger Tool, developed by the Institute for Healthcare Improvement, uses this strategy. This tool has been shown to identify a significantly higher number of serious adverse events than comparable methods.53 This facilitates selection and development of strategies at the institutional level that are most likely to improve patient outcomes.24
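A trigger-based screen of this kind amounts to a simple filter over admission records, selecting the enriched subset that warrants manual structured review. The record fields and trigger list below are hypothetical, for illustration only:

```python
# Hypothetical trigger set; real tools define these criteria precisely.
TRIGGERS = {"icu_transfer", "rapid_response", "death", "readmission_30d"}

admissions = [
    {"id": "A1", "events": {"icu_transfer"}},
    {"id": "A2", "events": set()},
    {"id": "A3", "events": {"readmission_30d", "rapid_response"}},
]

def flag_for_review(records, triggers=TRIGGERS):
    """Select admissions with at least one trigger event for manual
    structured review (eg, with Safer Dx / DEER-style tools)."""
    return [r["id"] for r in records if r["events"] & triggers]

print(flag_for_review(admissions))  # A1 and A3 are enriched-review candidates
```

The efficiency gain comes from reviewing only the flagged subset, at the cost of missing errors in untriggered admissions.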
Encouraging and facilitating voluntary or prompted reporting from patients and clinicians can also play an important role in capturing diagnostic errors. Patients and clinicians are not only the key stakeholders but are also uniquely placed within the diagnostic process to detect and report potential errors.25,54 Patient-safety-event reporting systems, such as RL6, play a vital role in reporting near-misses and adverse events. These systems provide a mechanism for team members at all levels within the hospital to contribute toward reporting patient adverse events, including those arising from diagnostic errors.55 The Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey is the first standardized, nationally reported patient survey designed to measure patients’ perceptions of their hospital experience. The US Centers for Medicare and Medicaid Services (CMS) publishes HCAHPS results on its website 4 times a year, which serves as an important incentive for hospitals to improve patient safety and quality of health care delivery.56
Another novel approach links multiple symptoms to a range of target diseases using the Symptom-Disease Pair Analysis of Diagnostic Error (SPADE) framework. Using “big data” technologies, this technique can help discover otherwise hidden symptom-disease links and improve overall diagnostic performance. This approach is proposed for both case-control (look-back) and cohort (look-forward) studies assessing diagnostic errors and misdiagnosis-related harms. For example, starting with a known diagnosis with high potential for harm (eg, stroke), the “look-back” approach can be used to identify high-risk symptoms (eg, dizziness, vertigo). In the “look-forward” approach, a single symptom or exposure risk factor known to be frequently misdiagnosed (eg, dizziness) can be analyzed to identify potential adverse disease outcomes (eg, stroke, migraine).57
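At its core, a look-back analysis counts symptom-disease pairs: for patients with a harm diagnosis, how often an earlier visit for a given symptom preceded it within a hazard window. The sketch below uses entirely synthetic data and a simplified pairing rule, not the full SPADE methodology:

```python
from collections import Counter
from datetime import date

# Synthetic visits: (patient_id, visit_date, presenting symptom)
symptom_visits = [
    ("p1", date(2023, 1, 10), "dizziness"),
    ("p2", date(2023, 2, 1), "headache"),
    ("p3", date(2023, 3, 5), "dizziness"),
]
# Synthetic harm diagnoses: (patient_id, diagnosis_date, diagnosis)
harm_diagnoses = [
    ("p1", date(2023, 1, 20), "stroke"),
    ("p3", date(2023, 3, 9), "stroke"),
]

def look_back_pairs(visits, diagnoses, window_days=30):
    """Count symptom->disease pairs where a symptom visit preceded the harm
    diagnosis within the hazard window (a possible missed diagnosis)."""
    pairs = Counter()
    for pid, dx_date, dx in diagnoses:
        for vid, visit_date, symptom in visits:
            if vid == pid and 0 < (dx_date - visit_date).days <= window_days:
                pairs[(symptom, dx)] += 1
    return pairs

print(look_back_pairs(symptom_visits, harm_diagnoses))
```

Scaled to millions of visits, pair counts like these are compared against expected base rates to flag symptom-disease links with excess misdiagnosis-related harm.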
Many large ongoing studies looking at diagnostic errors among hospitalized patients, such as Utility of Predictive Systems to identify Inpatient Diagnostic Errors (UPSIDE),58 Patient Safety Learning Lab (PSLL),59 and Achieving Diagnostic Excellence through Prevention and Teamwork (ADEPT),60 are using structured chart review methodologies that incorporate many of the above strategies in combination. Cases triggered by certain events (eg, ICU transfer, death, rapid response event, new or worsening acute kidney injury) are reviewed using validated tools, including the Safer Dx framework and DEER taxonomy, to provide the most precise estimates of the burden of diagnostic errors in hospitalized patients. These estimates may be much higher than previously predicted using traditional chart review approaches.6,24 For example, a recently published study of 2809 random admissions in 11 Massachusetts hospitals identified 978 adverse events but only 10 diagnostic errors (diagnostic error rate, 0.4%).19 This was likely because the trigger method used in the study did not examine the diagnostic process as critically as the Safer Dx framework and DEER taxonomy tools do, thereby underestimating the total number of diagnostic errors. Further, these ongoing studies (eg, UPSIDE, ADEPT) aim to employ advanced machine-learning methods to create models that can improve overall diagnostic performance. This would pave the way to test and build novel, efficient, and scalable interventions to reduce diagnostic errors and improve patient outcomes.
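As one illustration of the kind of machine-learning approach these studies describe, a simple classifier can be trained on trigger-style features to rank charts for structured review. The data and features below are synthetic, and this plain gradient-descent logistic regression is a teaching sketch, not any study's actual model:

```python
import math

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Plain stochastic-gradient-descent logistic regression (illustrative)."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1 / (1 + math.exp(-z))   # predicted error probability
            g = p - yi                   # gradient of log-loss wrt z
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(w, b, xi):
    z = sum(wj * xj for wj, xj in zip(w, xi)) + b
    return 1 / (1 + math.exp(-z))

# Synthetic labeled reviews: features = [icu_transfer, rapid_response, aki_worsening]
X = [[1, 0, 0], [1, 1, 0], [0, 0, 0], [0, 0, 1], [1, 1, 1], [0, 0, 0]]
y = [1, 1, 0, 0, 1, 0]  # 1 = diagnostic error confirmed on structured review

w, b = train_logistic(X, y)
# Rank a new admission by predicted error risk to prioritize manual review.
risk = predict(w, b, [1, 1, 0])
print(f"Predicted review priority: {risk:.2f}")
```

In a real study, such a model would be trained on far richer EHR features and validated prospectively before being used to prioritize reviews.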
Strategies to Improve Diagnostic Safety in Hospitalized Patients
Disease-specific biomedical research, as well as advances in laboratory, imaging, and other technologies, play a critical role in improving diagnostic accuracy. However, these technical approaches do not address many of the broader clinician- and system-level failure points and opportunities for improvement. Various patient-, provider-, and organizational-level interventions that could make diagnostic processes more resilient and reduce the risk of error and patient harm have been proposed.61
Among these strategies are approaches to empower patients and their families. Fostering therapeutic relationships between patients and members of the care team is essential to reducing diagnostic errors.62 Facilitating timely access to health records, ensuring transparency in decision making, and tailoring communication strategies to patients’ cultural and educational backgrounds can reduce harm.63 Similarly, at the system level, enhancing communication among different providers by use of tools such as structured handoffs can prevent communication breakdowns and facilitate positive outcomes.64
Interventions targeted at individual health care providers, such as educational programs to improve content-specific knowledge, can enhance diagnostic performance. Regular feedback, strategies to enhance equity, and fostering an environment where all providers are actively encouraged to think critically and participate in the diagnostic process (training programs to use “diagnostic time-outs” and making it a “team sport”) can improve clinical reasoning.65,66 Use of standardized patients can help identify individual-level cognitive failure points and facilitate creation of new interventions to improve clinical decision-making processes.67
Novel health information technologies can further augment these efforts. These include effective documentation by maintaining dynamic and accurate patient histories, problem lists, and medication lists68-70; use of electronic health record–based algorithms to identify potential diagnostic delays for serious conditions71,72; use of telemedicine technologies to improve accessibility and coordination73; application of mobile health and wearable technologies to facilitate data-gathering and care delivery74,75; and use of computerized decision-support tools, including applications to interpret electrocardiograms, imaging studies, and other diagnostic tests.76
Use of precision medicine, powered by new artificial intelligence (AI) tools, is becoming more widespread. Algorithms powered by AI can augment and sometimes even outperform clinician decision-making in areas such as oncology, radiology, and primary care.77 Creation of large biobanks like the All of Us research program can be used to study thousands of environmental and genetic risk factors and health conditions simultaneously, and help identify specific treatments that work best for people of different backgrounds.78 Active research in these areas holds great promise for changing how and when we diagnose diseases and make appropriate preventive and treatment decisions. Significant scientific, ethical, and regulatory challenges will need to be overcome before these technologies can address some of the most complex problems in health care.79
Finally, diagnostic performance is affected by the external environment, including the functioning of the medical liability system. Diagnostic errors that lead to patient harm are a leading cause of malpractice claims.80 Developing a legal environment, in collaboration with patient advocacy groups and health care organizations, that promotes and facilitates timely disclosure of diagnostic errors could decrease the incentive to hide errors, advance care processes, and improve outcomes.81,82
Conclusion
The burden of diagnostic errors in hospitalized patients is unacceptably high and remains an underemphasized cause of preventable morbidity and mortality. Diagnostic errors often result from a breakdown in multiple interdependent processes that involve patient-, provider-, and system-level factors. Significant challenges remain in defining and identifying diagnostic errors as well as underlying process-failure points. The most effective interventions to reduce diagnostic errors will require greater patient participation in the diagnostic process and a mix of evidence-based interventions that promote individual-provider excellence as well as system-level changes. Further research and collaboration among various stakeholders should help improve diagnostic safety for hospitalized patients.
Corresponding author: Abhishek Goyal, MD, MPH; [email protected]
Disclosures: Dr. Dalal disclosed receiving income ≥ $250 from MayaMD.
1. Graber ML, Franklin N, Gordon R. Diagnostic error in internal medicine. Arch Intern Med. 2005;165(13):1493-1499. doi:10.1001/archinte.165.13.1493
2. National Academies of Sciences, Engineering, and Medicine. Improving Diagnosis in Health Care. The National Academies Press; 2015. doi:10.17226/21794
3. Singh H, Graber ML. Improving diagnosis in health care—the next imperative for patient safety. N Engl J Med. 2015;373(26):2493-2495. doi:10.1056/NEJMp1512241
4. Makary MA, Daniel M. Medical error—the third leading cause of death in the US. BMJ. 2016;353:i2139. doi:10.1136/bmj.i2139
5. Flanders SA, Centor B, Weber V, McGinn T, Desalvo K, Auerbach A. Challenges and opportunities in academic hospital medicine: report from the academic hospital medicine summit. J Gen Intern Med. 2009;24(5):636-641. doi:10.1007/s11606-009-0944-6
6. Griffin JA, Carr K, Bersani K, et al. Analyzing diagnostic errors in the acute setting: a process-driven approach. Diagnosis (Berl). 2021;9(1):77-88. doi:10.1515/dx-2021-0033
7. Itri JN, Tappouni RR, McEachern RO, Pesch AJ, Patel SH. Fundamentals of diagnostic error in imaging. RadioGraphics. 2018;38(6):1845-1865. doi:10.1148/rg.2018180021
8. Hammerling JA. A review of medical errors in laboratory diagnostics and where we are today. Lab Med. 2012;43(2):41-44. doi:10.1309/LM6ER9WJR1IHQAUY
9. Gunderson CG, Bilan VP, Holleck JL, et al. Prevalence of harmful diagnostic errors in hospitalised adults: a systematic review and meta-analysis. BMJ Qual Saf. 2020;29(12):1008-1018. doi:10.1136/bmjqs-2019-010822
10. Brennan TA, Leape LL, Laird NM, et al. Incidence of adverse events and negligence in hospitalized patients. Results of the Harvard Medical Practice Study I. N Engl J Med. 1991;324(6):370-376. doi:10.1056/NEJM199102073240604
11. Leape LL, Brennan TA, Laird N, et al. The nature of adverse events in hospitalized patients. Results of the Harvard Medical Practice Study II. N Engl J Med. 1991;324(6):377-384. doi:10.1056/NEJM199102073240605
12. Localio AR, Lawthers AG, Brennan TA, et al. Relation between malpractice claims and adverse events due to negligence. Results of the Harvard Medical Practice Study III. N Engl J Med. 1991;325(4):245-251. doi:10.1056/NEJM199107253250405
13. Wilson RM, Michel P, Olsen S, et al. Patient safety in developing countries: retrospective estimation of scale and nature of harm to patients in hospital. BMJ. 2012;344:e832. doi:10.1136/bmj.e832
14. Wilson RM, Runciman WB, Gibberd RW, Harrison BT, Newby L, Hamilton JD. The Quality in Australian Health Care Study. Med J Aust. 1995;163(9):458-471. doi:10.5694/j.1326-5377.1995.tb124691.x
15. Thomas EJ, Studdert DM, Burstin HR, et al. Incidence and types of adverse events and negligent care in Utah and Colorado. Med Care. 2000;38(3):261-271. doi:10.1097/00005650-200003000-00003
16. Baker GR, Norton PG, Flintoft V, et al. The Canadian Adverse Events Study: the incidence of adverse events among hospital patients in Canada. CMAJ. 2004;170(11):1678-1686. doi:10.1503/cmaj.1040498
17. Davis P, Lay-Yee R, Briant R, Ali W, Scott A, Schug S. Adverse events in New Zealand public hospitals II: preventability and clinical context. N Z Med J. 2003;116(1183):U624.
18. Aranaz-Andrés JM, Aibar-Remón C, Vitaller-Murillo J, et al. Incidence of adverse events related to health care in Spain: results of the Spanish National Study of Adverse Events. J Epidemiol Community Health. 2008;62(12):1022-1029. doi:10.1136/jech.2007.065227
19. Bates DW, Levine DM, Salmasian H, et al. The safety of inpatient health care. N Engl J Med. 2023;388(2):142-153. doi:10.1056/NEJMsa2206117
20. Soop M, Fryksmark U, Köster M, Haglund B. The incidence of adverse events in Swedish hospitals: a retrospective medical record review study. Int J Qual Health Care. 2009;21(4):285-291. doi:10.1093/intqhc/mzp025
21. Rafter N, Hickey A, Conroy RM, et al. The Irish National Adverse Events Study (INAES): the frequency and nature of adverse events in Irish hospitals—a retrospective record review study. BMJ Qual Saf. 2017;26(2):111-119. doi:10.1136/bmjqs-2015-004828
22. Blendon RJ, DesRoches CM, Brodie M, et al. Views of practicing physicians and the public on medical errors. N Engl J Med. 2002;347(24):1933-1940. doi:10.1056/NEJMsa022151
23. Saber Tehrani AS, Lee H, Mathews SC, et al. 25-year summary of US malpractice claims for diagnostic errors 1986-2010: an analysis from the National Practitioner Data Bank. BMJ Qual Saf. 2013;22(8):672-680. doi:10.1136/bmjqs-2012-001550
24. Malik MA, Motta-Calderon D, Piniella N, et al. A structured approach to EHR surveillance of diagnostic error in acute care: an exploratory analysis of two institutionally-defined case cohorts. Diagnosis (Berl). 2022;9(4):446-457. doi:10.1515/dx-2022-0032
25. Graber ML. The incidence of diagnostic error in medicine. BMJ Qual Saf. 2013;22(suppl 2):ii21-ii27. doi:10.1136/bmjqs-2012-001615
26. Bergl PA, Taneja A, El-Kareh R, Singh H, Nanchal RS. Frequency, risk factors, causes, and consequences of diagnostic errors in critically ill medical patients: a retrospective cohort study. Crit Care Med. 2019;47(11):e902-e910. doi:10.1097/CCM.0000000000003976
27. Hogan H, Healey F, Neale G, Thomson R, Vincent C, Black N. Preventable deaths due to problems in care in English acute hospitals: a retrospective case record review study. BMJ Qual Saf. 2012;21(9):737-745. doi:10.1136/bmjqs-2011-001159
28. Bergl PA, Nanchal RS, Singh H. Diagnostic error in the critically ill: defining the problem and exploring next steps to advance intensive care unit safety. Ann Am Thorac Soc. 2018;15(8):903-907. doi:10.1513/AnnalsATS.201801-068PS
29. Marquet K, Claes N, De Troy E, et al. One fourth of unplanned transfers to a higher level of care are associated with a highly preventable adverse event: a patient record review in six Belgian hospitals. Crit Care Med. 2015;43(5):1053-1061. doi:10.1097/CCM.0000000000000932
30. Rodwin BA, Bilan VP, Merchant NB, et al. Rate of preventable mortality in hospitalized patients: a systematic review and meta-analysis. J Gen Intern Med. 2020;35(7):2099-2106. doi:10.1007/s11606-019-05592-5
31. Winters B, Custer J, Galvagno SM, et al. Diagnostic errors in the intensive care unit: a systematic review of autopsy studies. BMJ Qual Saf. 2012;21(11):894-902. doi:10.1136/bmjqs-2012-000803
32. Raffel KE, Kantor MA, Barish P, et al. Prevalence and characterisation of diagnostic error among 7-day all-cause hospital medicine readmissions: a retrospective cohort study. BMJ Qual Saf. 2020;29(12):971-979. doi:10.1136/bmjqs-2020-010896
33. Weingart SN, Pagovich O, Sands DZ, et al. What can hospitalized patients tell us about adverse events? learning from patient-reported incidents. J Gen Intern Med. 2005;20(9):830-836. doi:10.1111/j.1525-1497.2005.0180.x
34. Schiff GD, Hasan O, Kim S, et al. Diagnostic error in medicine: analysis of 583 physician-reported errors. Arch Intern Med. 2009;169(20):1881-1887. doi:10.1001/archinternmed.2009.333
35. Singh H, Schiff GD, Graber ML, Onakpoya I, Thompson MJ. The global burden of diagnostic errors in primary care. BMJ Qual Saf. 2017;26(6):484-494. doi:10.1136/bmjqs-2016-005401
36. Schiff GD, Leape LL. Commentary: how can we make diagnosis safer? Acad Med J Assoc Am Med Coll. 2012;87(2):135-138. doi:10.1097/ACM.0b013e31823f711c
37. Schiff GD, Kim S, Abrams R, et al. Diagnosing diagnosis errors: lessons from a multi-institutional collaborative project. In: Henriksen K, Battles JB, Marks ES, Lewin DI, eds. Advances in Patient Safety: From Research to Implementation. Volume 2: Concepts and Methodology. AHRQ Publication No. 05-0021-2. Agency for Healthcare Research and Quality (US); 2005. Accessed January 16, 2023. http://www.ncbi.nlm.nih.gov/books/NBK20492/
38. Newman-Toker DE. A unified conceptual model for diagnostic errors: underdiagnosis, overdiagnosis, and misdiagnosis. Diagnosis (Berl). 2014;1(1):43-48. doi:10.1515/dx-2013-0027
39. Abimanyi-Ochom J, Bohingamu Mudiyanselage S, Catchpool M, Firipis M, Wanni Arachchige Dona S, Watts JJ. Strategies to reduce diagnostic errors: a systematic review. BMC Med Inform Decis Mak. 2019;19(1):174. doi:10.1186/s12911-019-0901-1
40. Gupta A, Harrod M, Quinn M, et al. Mind the overlap: how system problems contribute to cognitive failure and diagnostic errors. Diagnosis (Berl). 2018;5(3):151-156. doi:10.1515/dx-2018-0014
41. Saposnik G, Redelmeier D, Ruff CC, Tobler PN. Cognitive biases associated with medical decisions: a systematic review. BMC Med Inform Decis Mak. 2016;16:138. doi:10.1186/s12911-016-0377-1
42. Croskerry P. The importance of cognitive errors in diagnosis and strategies to minimize them. Acad Med. 2003;78(8):775-780. doi:10.1097/00001888-200308000-00003
43. Chapman EN, Kaatz A, Carnes M. Physicians and implicit bias: how doctors may unwittingly perpetuate health care disparities. J Gen Intern Med. 2013;28(11):1504-1510. doi:10.1007/s11606-013-2441-1
44. Zwaan L, Singh H. The challenges in defining and measuring diagnostic error. Diagnosis (Berl). 2015;2(2):97-103. doi:10.1515/dx-2014-0069
45. Arkes HR, Wortmann RL, Saville PD, Harkness AR. Hindsight bias among physicians weighing the likelihood of diagnoses. J Appl Psychol. 1981;66(2):252-254.
46. Singh H. Editorial: Helping health care organizations to define diagnostic errors as missed opportunities in diagnosis. Jt Comm J Qual Patient Saf. 2014;40(3):99-101. doi:10.1016/s1553-7250(14)40012-6
47. Vassar M, Holzmann M. The retrospective chart review: important methodological considerations. J Educ Eval Health Prof. 2013;10:12. doi:10.3352/jeehp.2013.10.12
48. Welch HG, Black WC. Overdiagnosis in cancer. J Natl Cancer Inst. 2010;102(9):605-613. doi:10.1093/jnci/djq099
49. Moynihan R, Doust J, Henry D. Preventing overdiagnosis: how to stop harming the healthy. BMJ. 2012;344:e3502. doi:10.1136/bmj.e3502
50. Hayward RA, Hofer TP. Estimating hospital deaths due to medical errors: preventability is in the eye of the reviewer. JAMA. 2001;286(4):415-420. doi:10.1001/jama.286.4.415
51. Singh H, Sittig DF. Advancing the science of measurement of diagnostic errors in healthcare: the Safer Dx framework. BMJ Qual Saf. 2015;24(2):103-110. doi:10.1136/bmjqs-2014-003675
52. Singh H, Khanna A, Spitzmueller C, Meyer AND. Recommendations for using the Revised Safer Dx Instrument to help measure and improve diagnostic safety. Diagnosis (Berl). 2019;6(4):315-323. doi:10.1515/dx-2019-0012
53. Classen DC, Resar R, Griffin F, et al. “Global trigger tool” shows that adverse events in hospitals may be ten times greater than previously measured. Health Aff (Millwood). 2011;30(4):581-589. doi:10.1377/hlthaff.2011.0190
54. Schiff GD. Minimizing diagnostic error: the importance of follow-up and feedback. Am J Med. 2008;121(5 suppl):S38-S42. doi:10.1016/j.amjmed.2008.02.004
55. Mitchell I, Schuster A, Smith K, Pronovost P, Wu A. Patient safety incident reporting: a qualitative study of thoughts and perceptions of experts 15 years after “To Err is Human.” BMJ Qual Saf. 2016;25(2):92-99. doi:10.1136/bmjqs-2015-004405
56. Mazurenko O, Collum T, Ferdinand A, Menachemi N. Predictors of hospital patient satisfaction as measured by HCAHPS: a systematic review. J Healthc Manag. 2017;62(4):272-283. doi:10.1097/JHM-D-15-00050
57. Liberman AL, Newman-Toker DE. Symptom-Disease Pair Analysis of Diagnostic Error (SPADE): a conceptual framework and methodological approach for unearthing misdiagnosis-related harms using big data. BMJ Qual Saf. 2018;27(7):557-566. doi:10.1136/bmjqs-2017-007032
58. Utility of Predictive Systems to Identify Inpatient Diagnostic Errors: the UPSIDE study. NIH RePORTER. Accessed January 14, 2023. https://reporter.nih.gov/search/rpoHXlEAcEudQV3B9ld8iw/project-details/10020962
59. Overview of Patient Safety Learning Laboratory (PSLL) Projects. Agency for Healthcare Research and Quality. Accessed January 14, 2023. https://www.ahrq.gov/patient-safety/resources/learning-lab/index.html
60. Achieving Diagnostic Excellence through Prevention and Teamwork (ADEPT). NIH RePORTER. Accessed January 14, 2023. https://reporter.nih.gov/project-details/10642576
61. Zwaan L, Singh H. Diagnostic error in hospitals: finding forests not just the big trees. BMJ Qual Saf. 2020;29(12):961-964. doi:10.1136/bmjqs-2020-011099
62. Longtin Y, Sax H, Leape LL, Sheridan SE, Donaldson L, Pittet D. Patient participation: current knowledge and applicability to patient safety. Mayo Clin Proc. 2010;85(1):53-62. doi:10.4065/mcp.2009.0248
63. Murphy DR, Singh H, Berlin L. Communication breakdowns and diagnostic errors: a radiology perspective. Diagnosis (Berl). 2014;1(4):253-261. doi:10.1515/dx-2014-0035
64. Singh H, Naik AD, Rao R, Petersen LA. Reducing diagnostic errors through effective communication: harnessing the power of information technology. J Gen Intern Med. 2008;23(4):489-494. doi:10.1007/s11606-007-0393-z
65. Singh H, Connor DM, Dhaliwal G. Five strategies for clinicians to advance diagnostic excellence. BMJ. 2022;376:e068044. doi:10.1136/bmj-2021-068044
66. Yale S, Cohen S, Bordini BJ. Diagnostic time-outs to improve diagnosis. Crit Care Clin. 2022;38(2):185-194. doi:10.1016/j.ccc.2021.11.008
67. Schwartz A, Peskin S, Spiro A, Weiner SJ. Impact of unannounced standardized patient audit and feedback on care, documentation, and costs: an experiment and claims analysis. J Gen Intern Med. 2021;36(1):27-34. doi:10.1007/s11606-020-05965-1
68. Carpenter JD, Gorman PN. Using medication list—problem list mismatches as markers of potential error. Proc AMIA Symp. 2002:106-110.
69. Hron JD, Manzi S, Dionne R, et al. Electronic medication reconciliation and medication errors. Int J Qual Health Care. 2015;27(4):314-319. doi:10.1093/intqhc/mzv046
70. Graber ML, Siegal D, Riah H, Johnston D, Kenyon K. Electronic health record–related events in medical malpractice claims. J Patient Saf. 2019;15(2):77-85. doi:10.1097/PTS.0000000000000240
71. Murphy DR, Wu L, Thomas EJ, Forjuoh SN, Meyer AND, Singh H. Electronic trigger-based intervention to reduce delays in diagnostic evaluation for cancer: a cluster randomized controlled trial. J Clin Oncol. 2015;33(31):3560-3567. doi:10.1200/JCO.2015.61.1301
72. Singh H, Giardina TD, Forjuoh SN, et al. Electronic health record-based surveillance of diagnostic errors in primary care. BMJ Qual Saf. 2012;21(2):93-100. doi:10.1136/bmjqs-2011-000304
73. Armaignac DL, Saxena A, Rubens M, et al. Impact of telemedicine on mortality, length of stay, and cost among patients in progressive care units: experience from a large healthcare system. Crit Care Med. 2018;46(5):728-735. doi:10.1097/CCM.0000000000002994
74. MacKinnon GE, Brittain EL. Mobile health technologies in cardiopulmonary disease. Chest. 2020;157(3):654-664. doi:10.1016/j.chest.2019.10.015
75. DeVore AD, Wosik J, Hernandez AF. The future of wearables in heart failure patients. JACC Heart Fail. 2019;7(11):922-932. doi:10.1016/j.jchf.2019.08.008
76. Tsai TL, Fridsma DB, Gatti G. Computer decision support as a source of interpretation error: the case of electrocardiograms. J Am Med Inform Assoc. 2003;10(5):478-483. doi:10.1197/jamia.M1279
77. Lin SY, Mahoney MR, Sinsky CA. Ten ways artificial intelligence will transform primary care. J Gen Intern Med. 2019;34(8):1626-1630. doi:10.1007/s11606-019-05035-1
78. Ramirez AH, Gebo KA, Harris PA. Progress with the All Of Us research program: opening access for researchers. JAMA. 2021;325(24):2441-2442. doi:10.1001/jama.2021.7702
79. Johnson KB, Wei W, Weeraratne D, et al. Precision medicine, AI, and the future of personalized health care. Clin Transl Sci. 2021;14(1):86-93. doi:10.1111/cts.12884
80. Gupta A, Snyder A, Kachalia A, Flanders S, Saint S, Chopra V. Malpractice claims related to diagnostic errors in the hospital. BMJ Qual Saf. 2018;27(1):53-60. doi:10.1136/bmjqs-2017-006774
81. Renkema E, Broekhuis M, Ahaus K. Conditions that influence the impact of malpractice litigation risk on physicians’ behavior regarding patient safety. BMC Health Serv Res. 2014;14(1):38. doi:10.1186/1472-6963-14-38
82. Kachalia A, Mello MM, Nallamothu BK, Studdert DM. Legal and policy interventions to improve patient safety. Circulation. 2016;133(7):661-671. doi:10.1161/CIRCULATIONAHA.115.015880
Safety in Health Care: An Essential Pillar of Quality
Each year, an estimated 44,000 to 98,000 deaths occur due to medical errors.1 The Harvard Medical Practice Study (HMPS), published in 1991, found that 3.7% of hospitalized patients were harmed by adverse events and 1% were harmed by adverse events due to negligence.2 The latest iteration of the HMPS showed that, despite significant improvements over the past 3 decades, patient safety challenges persist: inpatient care led to harm in nearly a quarter of patients, and about 1 in 4 of these adverse events was preventable.3
Since the first HMPS was published, efforts to improve patient safety have focused on identifying the causes of medical error and on designing and implementing interventions to mitigate them. The factors that contribute to medical errors are well documented: the complexity of care delivery across inpatient and outpatient settings, transitions of care, extensive use of medications, multiple comorbidities, and the fragmentation of care across systems and specialties. Although most errors are related to process or system failures, the accountability of each practitioner and clinician is essential to promoting a culture of safety. Many medical errors are preventable through multifaceted approaches employed throughout the phases of care,4 with medication errors (both prescribing and administration) and diagnostic and treatment errors representing the principal targets for risk prevention. Broadly, safety efforts should emphasize building a culture of safety in which all safety events are reported, including near misses.
Two articles in this issue of JCOM address key elements of patient safety: building a safety culture and diagnostic error. Merchant et al5 report on an initiative designed to promote a safety culture by recognizing and rewarding staff who identify and report near misses. The tiered awards program they designed led to significantly increased staff participation in the safety awards nomination process and was associated with increased reporting of actual and close-call events and greater attendance at monthly safety forums. Goyal et al,6 noting that diagnostic error rates in hospitalized patients remain unacceptably high, provide a concise update on diagnostic error among inpatients, focusing on issues related to defining and measuring diagnostic errors and current strategies to improve diagnostic safety in hospitalized patients. In a third article, Sathi et al7 report on efforts to teach quality improvement (QI) methods to internal medicine trainees; their project increased residents’ knowledge of their patient panels and comfort with QI approaches and led to improved patient outcomes.
Major progress has been made to improve health care safety since the first HMPS was published. However, the latest HMPS shows that patient safety efforts must continue, given the persistent risk for patient harm in the current health care delivery system. Safety, along with clear accountability for identifying, reporting, and addressing errors, should be a top priority for health care systems throughout the preventive, diagnostic, and therapeutic phases of care.
Corresponding author: Ebrahim Barkoudah, MD, MPH; [email protected]
1. Clancy C, Munier W, Brady J. National healthcare quality report. Agency for Healthcare Research and Quality; 2013.
2. Brennan TA, Leape LL, Laird NM, et al. Incidence of adverse events and negligence in hospitalized patients. Results of the Harvard Medical Practice Study I. N Engl J Med. 1991;324(6):370-376. doi:10.1056/NEJM199102073240604
3. Bates DW, Levine DM, Salmasian H, et al. The safety of inpatient health care. N Engl J Med. 2023;388(2):142-153. doi:10.1056/NEJMsa2206117
4. Bates DW, Cullen DJ, Laird N, et al. Incidence of adverse drug events and potential adverse drug events: implications for prevention. JAMA. 1995;274(1):29-34.
5. Merchant NB, O’Neal J, Murray JS. Development of a safety awards program at a Veterans Affairs health care system: a quality improvement initiative. J Clin Outcome Manag. 2023;30(1):9-16. doi:10.12788/jcom.0120
6. Goyal A, Martin-Doyle W, Dalal AK. Diagnostic errors in hospitalized patients. J Clin Outcome Manag. 2023;30(1):17-27. doi:10.12788/jcom.0121
7. Sathi K, Huang KTL, Chandler DM, et al. Teaching quality improvement to internal medicine residents to address patient care gaps in ambulatory quality metrics. J Clin Outcome Manag. 2023;30(1):1-6. doi:10.12788/jcom.0119
A patient named ‘Settle’ decides to sue instead
On Nov. 1, 2020, Dallas Settle went to Plateau Medical Center, Oak Hill, W.Va., complaining of pain that was later described in court documents as being “in his right mid-abdomen migrating to his right lower abdomen.” Following a CT scan, Mr. Settle was diagnosed with diverticulitis resulting in pneumoperitoneum, which is the presence of air or other gas in the abdominal cavity. The patient, it was decided, required surgery to correct the problem, but Plateau Medical Center didn’t have the staff to perform the procedure.
Mr. Settle was then transferred to another West Virginia hospital, Charleston Area Medical Center (CAMC). Here, he was evaluated by doctors in the facility’s General Division, who initiated treatment with IV fluids and opiate analgesics. He was then placed under the care of a trauma surgeon, who initially decided to treat the patient nonoperatively. If that approach failed, the surgeon believed, Mr. Settle would probably require a laparotomy, bowel resection, and ostomy.
Another surgical team performed an exploratory laparotomy the following day. The team determined that Mr. Settle was suffering from ruptured appendicitis and allegedly performed an appendectomy. But Mr. Settle’s condition continued to deteriorate the following day.
Another CT scan followed. It revealed various problems – multiple fluid collections, an ileus, distended loops of the patient’s small bowel, a left renal cyst, and subcentimeter mesenteric and retroperitoneal adenopathy. Additional CT scans conducted 4 days later indicated other problems, including fluid collections in the patient’s right and left lower quadrants.
Over the next few days, doctors performed further exploratory laparotomies. Finally, on Nov. 22, Mr. Settle was transferred out of the intensive care unit in preparation for his discharge the following day.
His pain continued to worsen, however, and he was readmitted to CAMC a day later. At this point, an examination revealed that his surgical incisions had become infected.
Worse news was on the horizon. On Nov. 28, the trauma surgeon who had first agreed to treat Mr. Settle informed him that, despite claims to the contrary, his appendix hadn’t been removed.
Eventually, Mr. Settle was referred to the Cleveland Clinic, where at press time he was still being treated.
Mr. Settle has hired the firm Calwell Luce diTrapano to sue CAMC, accusing it of medical malpractice, medical negligence, and other lapses in the standard of care. In his complaint, he accused the hospital and its staff of breaching their duty of care “by negligently and improperly treating him” and by failing “to exercise the degree of care, skill, and learning required and expected of reasonable health care providers.”
His suit seeks not only compensatory damages and other relief but also punitive damages.
The content contained in this article is for informational purposes only and does not constitute legal advice. Reliance on any information provided in this article is solely at your own risk.
A version of this article originally appeared on Medscape.com.
Oncologist to insurer: ‘This denial will not stand’
“Is this really the hill you want to die on?” asked Rebecca Shatsky, MD, a medical oncologist at the University of California, San Diego.
It was Nov. 18 and Dr. Shatsky was on the phone with a retired oncologist working for the health insurance company Premera Blue Cross.
Dr. Shatsky was appealing a prior authorization denial for pembrolizumab (Keytruda) to treat her patient with stage IIIc triple-negative breast cancer (TNBC). She hoped the peer-to-peer would reverse the denial. The Food and Drug Administration had approved the immunotherapy for people with high-risk TNBC both in the neoadjuvant setting alongside chemotherapy and, in her patient’s case, as a single-agent adjuvant treatment based on data from the KEYNOTE-522 trial.
In the peer-to-peer, Dr. Shatsky laid out the evidence, but she could tell the physician wasn’t going to budge.
When she pressed him further, asking why he was denying potentially lifesaving care for her patient, he said the data on whether patients really need adjuvant pembrolizumab were not clear yet.
“The man – who was not a breast oncologist – was essentially mansplaining breast oncology to me,” she said in an interview. “I don’t need a nonexpert giving me their misinterpretation of the data.”
Dr. Shatsky informed him that this decision would not stand. She would be escalating the claim.
“I’m not going to let you get in [the] way of my patient’s survival,” Dr. Shatsky told the physician during the peer-to-peer. “We have one shot to cure this, and if we don’t do it now, patients’ average lifespan is 17 months.”
The conversation turned a few heads in her office.
“My whole office stopped and stared. But then they clapped after they realized why I was yelling,” she tweeted later that night.
She continued: “@premera picked the wrong oncologist to mess with today. I will not be letting this go. This denial. Will. Not. Stand. An insurance company should not get to tell me how to practice medicine when Phase III RCT data and @NCCN + @ASCO guideline support my decision!”
A spokesperson for Premera said in a statement that, “while we did see many of the details about the case were posted to Twitter, we cannot comment on the specifics you noted due to privacy policies.”
The spokesperson explained that Premera has “the same goal as our provider partners: ensure our members have access to quality health care,” noting that prior authorization helps health plans evaluate the medical necessity and safety of health care services given that “15%-30% of care is unnecessary.”
“We also understand that providers may not agree with our decisions, which is why we have a robust appeals process,” the spokesperson said, suggesting Dr. Shatsky could have appealed the decision a second time.
And “if the member or provider still disagrees with Premera’s coverage decision after the initial appeal, providers can request review by a medical expert outside Premera who works for an independent review organization,” and the company “will pay for” and “abide by” that decision, the spokesperson added.
The Twitter storm
After Dr. Shatsky tweeted about her experience with Premera, she received a flood of support from the Twitterverse. The thread garnered tens of thousands of likes and hundreds of comments offering support and advice.
Several people suggested asking Merck for help accessing the drug. But Dr. Shatsky said no, “I’m tired of laying down and letting [insurance companies] win. It IS worth fighting for.”
The next morning, Dr. Shatsky got a call. It was the vice president of medical management at Premera.
“We’ve talked again, and we’ll give you the drug,” Dr. Shatsky recalled the Premera vice president saying.
The next day, Monday morning, Dr. Shatsky’s patient received her first infusion of pembrolizumab.
Although relieved, Dr. Shatsky noted that it wasn’t until she posted her experience to Twitter that Premera seemed to take notice.
Plus, “an oncologist without a strong social media following may not have gotten care approved and that’s not how medicine should work,” said Dr. Anderson, assistant professor in the department of clinical pharmacy, University of Colorado at Denver, Aurora.
Tatiana Prowell, MD, expressed similar concerns in a Nov. 20 tweet: “And sadly, the patients with cancer & an even busier, more exhausted doctor who doesn’t have a big [reach] on social media will be denied appropriate care. And that’s bank for insurers.”
But, Dr. Prowell noted sarcastically: “At least a patient with cancer had her care delayed & a dedicated OncTwitter colleague’s Physician Burnout was exacerbated.”
In this case, the prior authorization process took about a week – requiring an initial prior authorization request, an appeal after the request was denied, a peer-to-peer resulting in a second denial, and finally a tweet and a phone call from a top executive at the company.
In fact, these delays have become so common that Dr. Shatsky needs to anticipate and incorporate likely delays into her workflow.
“I learn which drugs will take a long time to get prior authorization for and then plan enough time so that my patient’s care is hopefully not delayed,” Dr. Shatsky said. “It should not be so hard to get appropriate and time-sensitive care for our patients.”
A version of this article first appeared on Medscape.com.
“Is this really the hill you want to die on?” asked Rebecca Shatsky, MD, a medical oncologist at the University of California, San Diego.
It was Nov. 18 and Dr. Shatsky was on the phone with a retired oncologist working for the health insurance company Premera Blue Cross.
Dr. Shatsky was appealing a prior authorization denial for pembrolizumab (Keytruda) to treat her patient with stage IIIc triple-negative breast cancer (TNBC). She hoped the peer-to-peer would reverse the denial. The Food and Drug Administration had approved the immunotherapy for people with high-risk TNBC both in the neoadjuvant setting alongside chemotherapy and, in her patient’s case, as a single-agent adjuvant treatment based on data from the KEYNOTE 522 trial.
In the peer-to-peer, Dr. Shatsky laid out the evidence, but she could tell the physician wasn’t going to budge.
When she pressed him further, asking why he was denying potentially lifesaving care for her patient, he said the data on whether patients really need adjuvant pembrolizumab were not clear yet.
“The man – who was not a breast oncologist – was essentially mansplaining breast oncology to me,” she said in an interview. “I don’t need a nonexpert giving me their misinterpretation of the data.”
Dr. Shatsky informed him that this decision would not stand. She would be escalating the claim.
“I’m not going to let you get in way of my patient’s survival,” Dr. Shatsky told the physician during the peer-to-peer. “We have one shot to cure this, and if we don’t do it now, patients’ average lifespan is 17 months.”
The conversation turned a few heads in her office.
“My whole office stopped and stared. But then they clapped after they realized why I was yelling,” she tweeted later that night.
She continued: “@premera picked the wrong oncologist to mess with today. I will not be letting this go. This denial. Will. Not. Stand. An insurance company should not get to tell me how to practice medicine when Phase III RCT data and @NCCN + @ASCO guideline support my decision!”
A spokesperson for Premera said in a statement that, “while we did see many of the details about the case were posted to Twitter, we cannot comment on the specifics you noted due to privacy policies.”
The spokesperson explained that Premera has “the same goal as our provider partners: ensure our members have access to quality health care,” noting that prior authorization helps health plans evaluate the medical necessity and safety of health care services given that “15%-30% of care is unnecessary.”
“We also understand that providers may not agree with our decisions, which is why we have a robust appeals process,” the spokesperson said, suggesting Dr. Shatsky could have appealed the decision a second time.
And “if the member or provider still disagrees with Premera’s coverage decision after the initial appeal, providers can request review by a medical expert outside Premera who works for an independent review organization,” and the company “will pay for” and “abide by” that decision, the spokesperson added.
The Twitter storm
After Dr. Shatsky tweeted about her experience with Premera, she received a flood of support from the Twitterverse. The thread garnered tens of thousands of likes and hundreds of comments offering support and advice.
Several people suggested asking Merck for help accessing the drug. But Dr. Shatsky said no, “I’m tired of laying down and letting [insurance companies] win. It IS worth fighting for.”
The next morning, Dr. Shatsky got a call. It was the vice president of medical management at Premera.
“We’ve talked again, and we’ll give you the drug,” Dr. Shatsky recalled the Premera vice president saying.
The next day, Monday morning, Dr. Shatsky’s patient received her first infusion of pembrolizumab.
Although relieved, Dr. Shatsky noted that it wasn’t until she posted her experience to Twitter that Premera seemed to take notice.
Plus, “an oncologist without a strong social media following may not have gotten care approved and that’s not how medicine should work,” said Dr. Anderson, assistant professor in the department of clinical pharmacy, University of Colorado at Denver, Aurora.
Tatiana Prowell, MD, expressed similar concerns in a Nov. 20 tweet: “And sadly, the patients with cancer & an even busier, more exhausted doctor who doesn’t have a big [reach] on social media will be denied appropriate care. And that’s bank for insurers.”
But, Dr. Prowell noted sarcastically: “At least a patient with cancer had her care delayed & a dedicated OncTwitter colleague’s Physician Burnout was exacerbated.”
In this case, the prior authorization process took about a week – requiring an initial prior authorization request, an appeal after the request was denied, a peer-to-peer resulting in a second denial, and finally a tweet and a phone call from a top executive at the company.
In fact, these delays have become so common that Dr. Shatsky needs to anticipate and incorporate likely delays into her workflow.
“I learn which drugs will take a long time to get prior authorization for and then plan enough time so that my patient’s care is hopefully not delayed,” Dr. Shatsky said. “It should not be so hard to get appropriate and time-sensitive care for our patients.”
A version of this article first appeared on Medscape.com.
“Is this really the hill you want to die on?” asked Rebecca Shatsky, MD, a medical oncologist at the University of California, San Diego.
It was Nov. 18 and Dr. Shatsky was on the phone with a retired oncologist working for the health insurance company Premera Blue Cross.
Dr. Shatsky was appealing a prior authorization denial for pembrolizumab (Keytruda) to treat her patient with stage IIIc triple-negative breast cancer (TNBC). She hoped the peer-to-peer would reverse the denial. The Food and Drug Administration had approved the immunotherapy for people with high-risk TNBC both in the neoadjuvant setting alongside chemotherapy and, in her patient’s case, as a single-agent adjuvant treatment based on data from the KEYNOTE 522 trial.
In the peer-to-peer, Dr. Shatsky laid out the evidence, but she could tell the physician wasn’t going to budge.
When she pressed him further, asking why he was denying potentially lifesaving care for her patient, he said the data on whether patients really need adjuvant pembrolizumab were not clear yet.
“The man – who was not a breast oncologist – was essentially mansplaining breast oncology to me,” she said in an interview. “I don’t need a nonexpert giving me their misinterpretation of the data.”
Dr. Shatsky informed him that this decision would not stand. She would be escalating the claim.
“I’m not going to let you get in the way of my patient’s survival,” Dr. Shatsky told the physician during the peer-to-peer. “We have one shot to cure this, and if we don’t do it now, patients’ average lifespan is 17 months.”
The conversation turned a few heads in her office.
“My whole office stopped and stared. But then they clapped after they realized why I was yelling,” she tweeted later that night.
She continued: “@premera picked the wrong oncologist to mess with today. I will not be letting this go. This denial. Will. Not. Stand. An insurance company should not get to tell me how to practice medicine when Phase III RCT data and @NCCN + @ASCO guideline support my decision!”
A spokesperson for Premera said in a statement that, “while we did see many of the details about the case were posted to Twitter, we cannot comment on the specifics you noted due to privacy policies.”
The spokesperson explained that Premera has “the same goal as our provider partners: ensure our members have access to quality health care,” noting that prior authorization helps health plans evaluate the medical necessity and safety of health care services given that “15%-30% of care is unnecessary.”
“We also understand that providers may not agree with our decisions, which is why we have a robust appeals process,” the spokesperson said, suggesting Dr. Shatsky could have appealed the decision a second time.
And “if the member or provider still disagrees with Premera’s coverage decision after the initial appeal, providers can request review by a medical expert outside Premera who works for an independent review organization,” and the company “will pay for” and “abide by” that decision, the spokesperson added.
The Twitter storm
After Dr. Shatsky tweeted about her experience with Premera, she received a flood of support from the Twitterverse. The thread garnered tens of thousands of likes and hundreds of comments offering support and advice.
Several people suggested asking Merck for help accessing the drug. But Dr. Shatsky said no, “I’m tired of laying down and letting [insurance companies] win. It IS worth fighting for.”
The next morning, Dr. Shatsky got a call. It was the vice president of medical management at Premera.
“We’ve talked again, and we’ll give you the drug,” Dr. Shatsky recalled the Premera vice president saying.
The next day, Monday morning, Dr. Shatsky’s patient received her first infusion of pembrolizumab.
Although relieved, Dr. Shatsky noted that it wasn’t until she posted her experience to Twitter that Premera seemed to take notice.
Plus, “an oncologist without a strong social media following may not have gotten care approved and that’s not how medicine should work,” said Dr. Anderson, assistant professor in the department of clinical pharmacy, University of Colorado at Denver, Aurora.
Tatiana Prowell, MD, expressed similar concerns in a Nov. 20 tweet: “And sadly, the patients with cancer & an even busier, more exhausted doctor who doesn’t have a big [reach] on social media will be denied appropriate care. And that’s bank for insurers.”
But, Dr. Prowell noted sarcastically: “At least a patient with cancer had her care delayed & a dedicated OncTwitter colleague’s Physician Burnout was exacerbated.”
In this case, the prior authorization process took about a week – requiring an initial prior authorization request, an appeal after the request was denied, a peer-to-peer resulting in a second denial, and finally a tweet and a phone call from a top executive at the company.
In fact, these delays have become so common that Dr. Shatsky needs to anticipate and incorporate likely delays into her workflow.
“I learn which drugs will take a long time to get prior authorization for and then plan enough time so that my patient’s care is hopefully not delayed,” Dr. Shatsky said. “It should not be so hard to get appropriate and time-sensitive care for our patients.”
A version of this article first appeared on Medscape.com.
Not all white coats are doctors: Why titles are important at the doctor’s office
“Sometimes, I can go through a complete history and physical, explain a treatment plan, and perform a procedure, and [the patient] will say, ‘Thank you, doctor,’ ” says Cyndy Flores, a physician assistant (PA) in the emergency department at Vituity, Emeryville, Calif.
“I always come back and say, ‘You’re very welcome, but my name is Cyndy, and I’m the PA.’ ”
Ms. Flores is used to patients calling her “doctor” when she greets them. She typically offers a quick correction and moves on with the appointment.
With 355,000 nurse practitioners (NPs) and 149,000 certified PAs practicing in the United States, it’s more common than ever for health care providers who don’t go by the title “doctor” to diagnose and treat patients.
A recent report, Evolving Scope of Practice, found that more than 70% of physicians were “somewhat satisfied to very satisfied” with patient treatment by PAs and NPs.
But for patients, having a health care team that includes physicians, NPs, and PAs can be confusing. Additionally, it creates a need for education about their correct titles and roles in patient care.
“It’s really important for patients to understand who is taking care of them,” Ms. Flores says.
Education starts in your practice
Educating patients about the roles of different providers on their health care team starts long before patients enter the exam room, Ms. Flores explains.
Some patients may not understand the difference, some may just forget because they’re used to calling all providers doctors, and others may find it awkward to use a provider’s first name or not know the respectful way to address an NP or a PA.
Practices can help by listing the names and biographies of the health care team on the clinic website. In addition, when patients call for an appointment, Ms. Flores believes front desk staff can reinforce that information. When offering appointments with a physician, NP, or PA, clearly use the practitioner’s title and reiterate it throughout the conversation. For example, “Would you like to see our nurse practitioner, Alice Smith, next week?” or “So, our physician assistant Mrs. Jones will see you Friday at 3 PM.”
The report also found that 76% of patients expressed a preference to see a physician over a PA, and 71% expressed a preference to see a physician over an NP, but offering appointments with nonphysician providers is part of the education process.
“Some families are super savvy and know the differences between nurse practitioners, physician assistants, and doctors, and ... there are families who don’t understand those titles, [and] we need to explain what they do in our practice,” adds Nicole Aaronson, MD, MBA, attending surgeon at Nemours Children’s Health of Delaware. Dr. Aaronson believes there’s an opportunity for educating patients when speaking about all the available providers they may see.
Hanging posters or using brochures in the clinic or hospital is another effective way to reinforce the roles of various providers on the care team. Include biographies and educational information on practice materials and video programs running in the waiting room.
“Patients mean it [calling everyone doctor] as a way to respectfully address the nurse practitioner or physician assistant rather than meaning it as a denigration of the physician,” Dr. Aaronson says. “But everyone appreciates being called by the correct title.”
Helping patients understand the members of their care team and the correct titles to use for those health care professionals could also help patients feel more confident about their health care experience.
“Patients really like knowing that there are specialists in each of the areas taking care of them,” Ms. Flores says. “I think that conveys a feeling of trust in your provider.”
Not everyone is a doctor
Even when PAs and NPs remind patients of their roles and reinforce the use of their preferred names, there will still be patients who continue referring to their nonphysician provider as “doctor.”
“There’s a perception that anyone who walks into a room with a stethoscope is your doctor,” says Graig Straus, DNP, an NP and president and CEO of Rockland Urgent Care Family Health NP, P.C., West Haverstraw, N.Y. “You do get a little bit of burnout correcting people all the time.”
Dr. Straus, who earned his doctorate in nursing practice, notes that patients using the honorific with him aren’t incorrect, but he still educates them on his role within the health care team.
“NPs and PAs have a valuable role to play independently and in concert with the physician,” Dr. Aaronson says. This understanding is essential, as states consider expanding treatment abilities for NPs and PAs.
NPs have expanded treatment abilities or full practice authority in almost half the states, and 31% of the physicians surveyed agreed that NPs should have expanded treatment abilities.
An estimated 1 in 5 states characterizes the physician-PA relationship as collaborative, not supervisory, according to the American Academy of Physician Associates. At the same time, only 39% of physicians surveyed said they favored this trend.
“Patients need great quality care, and there are many different types of providers that can provide that care as part of the team,” Ms. Flores says. “When you have a team taking care of a patient, that patient [gets] the best care possible – and ... that’s why we went into medicine: to deliver high-quality, compassionate care to our patients, and we should all be in this together.”
When practices do their part to explain each provider’s title and role and what to call them, and everyone reinforces that message, health care becomes easier for patients to navigate and understand, leading to a better experience.
A version of this article first appeared on Medscape.com.
Physician sues AMA for defamation over 2022 election controversy
If Willarda Edwards, MD, MBA, had won her 2022 campaign for president-elect of the American Medical Association (AMA), she would have been the second Black woman to head the group.
The lawsuit sheds light on the power dynamics of a politically potent organization that has more than 271,000 members and holds assets of $1.2 billion. The AMA president is one of the most visible figures in American medicine.
“The AMA impugned Dr. Edwards with these false charges, which destroyed her candidacy and irreparably damaged her reputation,” according to the complaint, which was filed Nov. 9, 2022, in Baltimore County Circuit Court. The case was later moved to federal court.
The AMA “previously rejected our attempt to resolve this matter without litigation,” Dr. Edwards’ attorney, Timothy Maloney, told this news organization. An AMA spokesman said the organization had no comment on Dr. Edwards’ suit.
Dr. Edwards is a past president of the National Medical Association, MedChi, the Baltimore City Medical Society, the Monumental City Medical Society, and the Sickle Cell Disease Association of America. She joined the AMA in 1994 and has served as a trustee since 2016.
As chair of the AMA Task Force on Health Equity, “she helped lead the way in consensus building and driving action that in 2019 resulted in the AMA House of Delegates establishing the AMA Center on Health Equity,” according to her AMA bio page.
‘Quid pro quo’ alleged
In June 2022, Dr. Edwards was one of three individuals running to be AMA president-elect.
According to Dr. Edwards’ complaint, she was “incorrectly advised by colleagues” that Virginia urologist William Reha, MD, had decided not to seek the AMA vice-speakership in 2023. This was important because both Dr. Edwards and Dr. Reha were in the Southeastern delegation. It could be in Dr. Edwards’ favor if Dr. Reha was not running, as it would mean one less leadership candidate from the same region.
Dr. Edwards called Dr. Reha on June 6 to discuss the matter. When they talked, Dr. Reha allegedly recorded the call without Dr. Edwards’ knowledge or permission – a felony in Maryland – and also steered her toward discussions about how his decision could benefit her campaign, according to the complaint.
The suit alleges that Dr. Reha’s questions were “clearly calculated to draw some statements by Dr. Edwards that he could use later to thwart her candidacy and to benefit her opponent.”
On June 10, at the AMA’s House of Delegates meeting in Chicago, Dr. Edwards was taken aside and questioned by members of the AMA’s Election Campaign Committee, according to the complaint. They accused her of “vote trading” but did not provide any evidence or a copy of a complaint they said had been filed against her, the suit said.
Dr. Edwards was given no opportunity to produce her own evidence or rebut the accusations, the suit alleges.
Just before the delegates started formal business on June 13, House Speaker Bruce Scott, MD, read a statement to the assembly saying that a complaint of a possible campaign violation had been filed against Dr. Edwards.
Dr. Scott told the delegates that “committee members interviewed the complainant and multiple other individuals said to have knowledge of the circumstances. In addition to conducting multiple interviews, the committee reviewed evidence that was deemed credible and corroborated that a campaign violation did in fact occur,” according to the complaint.
The supposed violation: A “quid pro quo” in which an unnamed delegation would support Dr. Edwards’ current candidacy, and the Southeastern delegation would support a future candidate from that other unnamed delegation.
Dr. Edwards was given a short opportunity to speak, in which she denied any violations.
According to a news report, Dr. Edwards said, “I’ve been in the House of Delegates for 30 years, and you know me as a process person – a person who truly believes in the process and trying to follow the complexities of our election campaign.”
The lawsuit alleges that “this defamatory conduct was repeated the next day to more than 600 delegates just minutes prior to the casting of votes, when Dr Scott repeated these allegations.”
Dr. Edwards lost the election.
AMA: Nothing more to add
The suit alleges that neither the Election Campaign Committee nor the AMA itself has made any accusers or complaints available to Dr. Edwards and that it has not provided any audio or written evidence of her alleged violation.
In July, the AMA’s Southeastern delegation told its membership, “We continue to maintain that Willarda was ‘set up’ ... The whole affair lacked any reasonable semblance of due process.”
The delegation has filed a counterclaim against the AMA seeking “to address this lack of due process as well as the reputational harm” to the delegation.
The AMA said that it has nothing it can produce. “The Speaker of the House presented a verbal report to the attending delegates,” said a spokesman. “The Speaker’s report remains the only remarks from an AMA officer, and no additional remarks can be expected at this time.”
He added that there “is no official transcript of the Speaker’s report.”
A version of this article first appeared on Medscape.com.
If Willarda Edwards, MD, MBA, had won her 2022 campaign for president-elect of the American Medical Association (AMA), she would have been the second Black woman to head the group.
The lawsuit sheds light on the power dynamics of a politically potent organization that has more than 271,000 members and holds assets of $1.2 billion. The AMA president is one of the most visible figures in American medicine.
“The AMA impugned Dr. Edwards with these false charges, which destroyed her candidacy and irreparably damaged her reputation,” according to the complaint, which was filed Nov. 9, 2022, in Baltimore County Circuit Court. The case was later moved to federal court.
The AMA “previously rejected our attempt to resolve this matter without litigation,” Dr. Edwards’ attorney, Timothy Maloney, told this news organization. An AMA spokesman said the organization had no comment on Dr. Edwards’ suit.
Dr. Edwards is a past president of the National Medical Association, MedChi, the Baltimore City Medical Society, the Monumental City Medical Society, and the Sickle Cell Disease Association of America. She joined the AMA in 1994 and has served as a trustee since 2016.
As chair of the AMA Task Force on Health Equity, “she helped lead the way in consensus building and driving action that in 2019 resulted in the AMA House of Delegates establishing the AMA Center on Health Equity,” according to her AMA bio page.
‘Quid pro quo’ alleged
In June 2022, Dr. Edwards was one of three individuals running to be AMA president-elect.
According to Dr. Edwards’ complaint, she was “incorrectly advised by colleagues” that Virginia urologist William Reha, MD, had decided not to seek the AMA vice-speakership in 2023. This was important because both Dr. Edwards and Dr. Reha were in the Southeastern delegation. It could be in Dr. Edwards’ favor if Dr. Reha was not running, as it would mean one less leadership candidate from the same region.
Dr. Edwards called Dr. Reha on June 6 to discuss the matter. When they talked, Dr. Reha allegedly recorded the call without Dr. Edwards’ knowledge or permission – a felony in Maryland – and also steered her toward discussions about how his decision could benefit her campaign, according to the complaint.
The suit alleges that Dr. Reha’s questions were “clearly calculated to draw some statements by Dr. Edwards that he could use later to thwart her candidacy and to benefit her opponent.”
If Willarda Edwards, MD, MBA, had won her 2022 campaign for president-elect of the American Medical Association (AMA), she would have been the second Black woman to head the group. Instead, she is suing the organization, alleging that unsubstantiated accusations of vote trading derailed her candidacy.
The lawsuit sheds light on the power dynamics of a politically potent organization that has more than 271,000 members and holds assets of $1.2 billion. The AMA president is one of the most visible figures in American medicine.
“The AMA impugned Dr. Edwards with these false charges, which destroyed her candidacy and irreparably damaged her reputation,” according to the complaint, which was filed Nov. 9, 2022, in Baltimore County Circuit Court. The case was later moved to federal court.
The AMA “previously rejected our attempt to resolve this matter without litigation,” Dr. Edwards’ attorney, Timothy Maloney, told this news organization. An AMA spokesman said the organization had no comment on Dr. Edwards’ suit.
Dr. Edwards is a past president of the National Medical Association, MedChi, the Baltimore City Medical Society, the Monumental City Medical Society, and the Sickle Cell Disease Association of America. She joined the AMA in 1994 and has served as a trustee since 2016.
As chair of the AMA Task Force on Health Equity, “she helped lead the way in consensus building and driving action that in 2019 resulted in the AMA House of Delegates establishing the AMA Center on Health Equity,” according to her AMA bio page.
‘Quid pro quo’ alleged
In June 2022, Dr. Edwards was one of three individuals running to be AMA president-elect.
According to Dr. Edwards’ complaint, she was “incorrectly advised by colleagues” that Virginia urologist William Reha, MD, had decided not to seek the AMA vice-speakership in 2023. This was important because both Dr. Edwards and Dr. Reha were in the Southeastern delegation. It could be in Dr. Edwards’ favor if Dr. Reha was not running, as it would mean one less leadership candidate from the same region.
Dr. Edwards called Dr. Reha on June 6 to discuss the matter. When they talked, Dr. Reha allegedly recorded the call without Dr. Edwards’ knowledge or permission – a felony in Maryland – and also steered her toward discussions about how his decision could benefit her campaign, according to the complaint.
The suit alleges that Dr. Reha’s questions were “clearly calculated to draw some statements by Dr. Edwards that he could use later to thwart her candidacy and to benefit her opponent.”
On June 10, at the AMA’s House of Delegates meeting in Chicago, Dr. Edwards was taken aside and questioned by members of the AMA’s Election Campaign Committee, according to the complaint. They accused her of “vote trading” but did not provide any evidence or a copy of a complaint they said had been filed against her, the suit said.
Dr. Edwards was given no opportunity to produce her own evidence or rebut the accusations, the suit alleges.
Just before the delegates started formal business on June 13, House Speaker Bruce Scott, MD, read a statement to the assembly saying that a complaint of a possible campaign violation had been filed against Dr. Edwards.
Dr. Scott told the delegates that “committee members interviewed the complainant and multiple other individuals said to have knowledge of the circumstances. In addition to conducting multiple interviews, the committee reviewed evidence that was deemed credible and corroborated that a campaign violation did in fact occur,” according to the complaint.
The supposed violation: A “quid pro quo” in which an unnamed delegation would support Dr. Edwards’ current candidacy, and the Southeastern delegation would support a future candidate from that other unnamed delegation.
Dr. Edwards was given a short opportunity to speak, in which she denied any violations.
According to a news report, Dr. Edwards said, “I’ve been in the House of Delegates for 30 years, and you know me as a process person – a person who truly believes in the process and trying to follow the complexities of our election campaign.”
The lawsuit alleges that “this defamatory conduct was repeated the next day to more than 600 delegates just minutes prior to the casting of votes, when Dr. Scott repeated these allegations.”
Dr. Edwards lost the election.
AMA: Nothing more to add
The suit alleges that neither the Election Campaign Committee nor the AMA itself has made any accusers or complaints available to Dr. Edwards and that it has not provided any audio or written evidence of her alleged violation.
In July, the AMA’s Southeastern delegation told its membership, “We continue to maintain that Willarda was ‘set up’ ... The whole affair lacked any reasonable semblance of due process.”
The delegation has filed a counterclaim against the AMA seeking “to address this lack of due process as well as the reputational harm” to the delegation.
The AMA said that it has nothing it can produce. “The Speaker of the House presented a verbal report to the attending delegates,” said a spokesman. “The Speaker’s report remains the only remarks from an AMA officer, and no additional remarks can be expected at this time.”
He added that there “is no official transcript of the Speaker’s report.”
A version of this article first appeared on Medscape.com.
Will your smartphone be the next doctor’s office?
A fingertip pressed against a phone’s camera lens can measure a heart rate. The microphone, kept by the bedside, can screen for sleep apnea. Even the speaker is being tapped, to monitor breathing using sonar technology.
In the best of this new world, the data is conveyed remotely to a medical professional for the convenience and comfort of the patient or, in some cases, to support a clinician without the need for costly hardware.
But using smartphones as diagnostic tools is a work in progress, experts say. Although doctors and their patients have found some real-world success in deploying the phone as a medical device, the overall potential remains unfulfilled and uncertain.
Smartphones come packed with sensors capable of monitoring a patient’s vital signs. They can help assess people for concussions, watch for atrial fibrillation, and conduct mental health wellness checks, to name the uses of a few nascent applications.
Companies and researchers eager to find medical applications for smartphone technology are tapping into modern phones’ built-in cameras and light sensors; microphones; accelerometers, which detect body movements; gyroscopes; and even speakers. The apps then use artificial intelligence software to analyze the collected sights and sounds to create an easy connection between patients and physicians. Earning potential and marketability are evidenced by the more than 350,000 digital health products available in app stores, according to a Grand View Research report.
“It’s very hard to put devices into the patient home or in the hospital, but everybody is just walking around with a cellphone that has a network connection,” said Dr. Andrew Gostine, CEO of the sensor network company Artisight. Most Americans own a smartphone, including more than 60% of people 65 and over, an increase from just 13% a decade ago, according to the Pew Research Center. The COVID-19 pandemic has also pushed people to become more comfortable with virtual care.
Some of these products have sought FDA clearance to be marketed as a medical device. That way, if patients must pay to use the software, health insurers are more likely to cover at least part of the cost. Other products are designated as exempt from this regulatory process, placed in the same clinical classification as a Band-Aid. But how the agency handles AI and machine learning–based medical devices is still being adjusted to reflect software’s adaptive nature.
Ensuring accuracy and clinical validation is crucial to securing buy-in from health care providers. And many tools still need fine-tuning, said Eugene Yang, MD, a professor of medicine at the University of Washington, Seattle. Currently, Dr. Yang is testing contactless measurement of blood pressure, heart rate, and oxygen saturation gleaned remotely via Zoom camera footage of a patient’s face.
Judging these new technologies is difficult because they rely on algorithms built by machine learning and artificial intelligence to collect data, rather than the physical tools typically used in hospitals. So researchers cannot “compare apples to apples” with medical industry standards, Dr. Yang said. Failure to build in such assurances undermines the technology’s ultimate goals of easing costs and access because a doctor still must verify results.
“False positives and false negatives lead to more testing and more cost to the health care system,” he said.
Big tech companies like Google have heavily invested in researching this kind of technology, catering to clinicians and in-home caregivers, as well as consumers. Currently, in the Google Fit app, users can check their heart rate by placing their finger on the rear-facing camera lens or track their breathing rate using the front-facing camera.
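The fingertip trick works because each heartbeat changes the blood volume in the fingertip, which subtly modulates how much light reaches the camera sensor (a technique known as photoplethysmography). As a rough sketch of the signal processing involved, and not Google's implementation, the pulse can be read off as the dominant frequency of a frame-brightness trace:

```python
import numpy as np

def estimate_heart_rate_bpm(brightness, fps=30.0):
    """Estimate pulse from a fingertip-over-camera brightness trace.

    `brightness` holds the mean red-channel intensity of each video frame;
    blood pulsing through the fingertip modulates the light reaching the
    sensor, so the dominant frequency of the trace is the heart rate.
    """
    signal = np.asarray(brightness, dtype=float)
    signal = signal - signal.mean()             # drop the DC offset
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fps)
    # Search only a plausible human range: 40-200 beats per minute.
    band = (freqs >= 40 / 60) & (freqs <= 200 / 60)
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

# Synthetic 10-second trace: a 72-bpm pulse buried in sensor noise.
fps = 30.0
t = np.arange(int(10 * fps)) / fps
trace = (128 + 2.0 * np.sin(2 * np.pi * (72 / 60) * t)
         + np.random.default_rng(0).normal(0, 0.3, t.size))
print(estimate_heart_rate_bpm(trace, fps))  # ~72 bpm
```

Real apps must additionally cope with motion artifacts, variable frame rates, and auto-exposure, which is where much of the engineering effort goes.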
“If you took the sensor out of the phone and out of a clinical device, they are probably the same thing,” said Shwetak Patel, director of health technologies at Google and a professor of electrical and computer engineering at the University of Washington.
Google’s research uses machine learning and computer vision, a field within AI based on information from visual inputs like videos or images. So instead of using a blood pressure cuff, for example, the algorithm can interpret slight visual changes to the body that serve as proxies and biosignals for a patient’s blood pressure, Mr. Patel said.
Google is also investigating the effectiveness of the built-in microphone for detecting heartbeats and murmurs and using the camera to preserve eyesight by screening for diabetic eye disease, according to information the company published last year.
The tech giant recently purchased Sound Life Sciences, a Seattle startup with an FDA-cleared sonar technology app. It uses a smart device’s speaker to bounce inaudible pulses off a patient’s body to identify movement and monitor breathing.
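The sonar approach is, in essence, active ranging: the speaker emits a near-ultrasonic pulse, the microphone records the echo, and cross-correlation finds the round-trip delay, which converts to distance; breathing then shows up as a slow oscillation in that distance. A minimal Python sketch of the ranging step, using simulated audio rather than any real device API:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in room-temperature air

def locate_echo(pulse, received, sample_rate):
    """Distance to the strongest reflector, via matched filtering.

    Cross-correlating the microphone recording with the transmitted pulse
    peaks at the echo's round-trip delay; halving converts to one-way range.
    """
    corr = np.correlate(received, pulse, mode="valid")
    delay_s = int(np.argmax(np.abs(corr))) / sample_rate
    return SPEED_OF_SOUND * delay_s / 2.0

sample_rate = 48_000
# A 2-ms pulse at 18 kHz, near the top of the adult hearing range.
t = np.arange(int(0.002 * sample_rate)) / sample_rate
pulse = np.sin(2 * np.pi * 18_000 * t)

# Simulate an echo off a chest ~0.5 m away (round trip of 2 * 0.5 / 343 s).
delay = int(round(sample_rate * 2 * 0.5 / SPEED_OF_SOUND))
received = np.zeros(4096)
received[delay:delay + pulse.size] += 0.2 * pulse
print(locate_echo(pulse, received, sample_rate))  # ~0.5 m
```

Breathing monitoring repeats this ranging many times per second and reads the breath rate off the small periodic movement of the measured distance.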
Binah.ai, based in Israel, is another company using the smartphone camera to calculate vital signs. Its software looks at the region around the eyes, where the skin is a bit thinner, and analyzes the light reflecting off blood vessels back to the lens. The company is wrapping up a U.S. clinical trial and marketing its wellness app directly to insurers and other health companies, said company spokesperson Mona Popilian-Yona.
The applications even reach into disciplines such as optometry and mental health:
- With the microphone, Canary Speech uses the same underlying technology as Amazon’s Alexa to analyze patients’ voices for mental health conditions. The software can integrate with telemedicine appointments and allow clinicians to screen for anxiety and depression using a library of vocal biomarkers and predictive analytics, said Henry O’Connell, the company’s CEO.
- Australia-based ResApp Health received FDA clearance last year for its iPhone app that screens for moderate to severe obstructive sleep apnea by listening to breathing and snoring. SleepCheckRx, which will require a prescription, is minimally invasive compared with sleep studies currently used to diagnose sleep apnea. Those can cost thousands of dollars and require an array of tests.
- Brightlamp’s Reflex app is a clinical decision support tool for helping manage concussions and vision rehabilitation, among other things. Using an iPad’s or iPhone’s camera, the mobile app measures how a person’s pupils react to changes in light. Through machine learning analysis, the imagery gives practitioners data points for evaluating patients. Brightlamp sells directly to health care providers and is being used in more than 230 clinics. Clinicians pay a $400 standard annual fee per account, which is currently not covered by insurance. The Department of Defense has an ongoing clinical trial using Reflex.
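At its core, the pupil measurement in the last item reduces to segmenting the pupil in each video frame and summarizing how its diameter responds to a light flash. A hedged sketch of that final summarizing step, with hypothetical metric names rather than Brightlamp's actual outputs:

```python
def pupil_metrics(diameters_mm, stimulus_frame, fps=60.0):
    """Summarize a pupillary light reflex from a diameter-per-frame trace.

    Assumes an upstream (hypothetical) vision pipeline has already measured
    pupil diameter in every frame, and that the pupil does constrict.
    """
    baseline = sum(diameters_mm[:stimulus_frame]) / stimulus_frame
    post = diameters_mm[stimulus_frame:]
    # Latency: first post-flash frame where constriction clearly begins.
    onset = next(i for i, d in enumerate(post) if d < baseline - 0.1)
    return {
        "latency_ms": 1000.0 * onset / fps,
        "constriction_mm": baseline - min(post),
    }

# 12 baseline frames at 4.0 mm, flash at frame 12, constriction to 3.0 mm.
pupil_trace = [4.0] * 12 + [4.0, 3.95, 3.8, 3.5, 3.2, 3.0, 3.0, 3.1]
print(pupil_metrics(pupil_trace, stimulus_frame=12))
```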
In some cases, such as with the Reflex app, the data is processed directly on the phone – rather than in the cloud, Brightlamp CEO Kurtis Sluss said. By processing everything on the device, the app avoids running into privacy issues, as streaming data elsewhere requires patient consent.
But algorithms need to be trained and tested by collecting reams of data, and that is an ongoing process.
Researchers, for example, have found that some computer vision applications, like heart rate or blood pressure monitoring, can be less accurate for darker skin. Studies are underway to find better solutions.
Small algorithm glitches can also produce false alarms and frighten patients enough to keep widespread adoption out of reach. For example, Apple’s new car-crash detection feature, available on both the latest iPhone and Apple Watch, was set off when people were riding roller coasters and automatically dialed 911.
“We’re not there yet,” Dr. Yang said. “That’s the bottom line.”
KHN (Kaiser Health News) is a national newsroom that produces in-depth journalism about health issues. Together with Policy Analysis and Polling, KHN is one of the three major operating programs at KFF (Kaiser Family Foundation). KFF is an endowed nonprofit organization providing information on health issues to the nation.
AGA venture capital fund makes first investment
The American Gastroenterological Association has made the first investment through its new venture capital fund – an initiative that gives gastroenterologists an investment opportunity along with a chance to support companies working to advance the field.
The fund was established in partnership with Varia Ventures.
The AGA recently announced the fund’s first investment: Carlsbad, Calif.–based Virgo Surgical Video Solutions, which offers an endoscopy video recording service that uses artificial intelligence to simplify capture during procedures, streamline later review, and connect trial investigators with potential candidates.
“While AGA has long guided innovators who share our goal of improving digestive health care, we have doubled down on this commitment by establishing the GI Opportunity Fund,” said Lawrence Kosinski, MD, AGAF, AGA Governing Board Councilor for Development and Growth. “The fund’s first investment – Virgo – exemplifies our pursuit of improved clinical care.”
He said the fund gives physicians a chance to work closely with AGA to invest in difference-making ventures.
“Through our venture fund, gastroenterologists can join AGA to invest in fast-growing, early-stage companies that are transforming care for patients with digestive disease,” Dr. Kosinski said.
Virgo CEO Matthew Z. Schwartz said the company’s product is intended to fill an important need.
“We recognized that it was really difficult for doctors to capture endoscopy procedures video in high-definition at scale,” he said. “Generally, they were just taking still images. And the images were often not of great quality.”
Virgo offers a small device that connects to existing endoscopy equipment, plugging into the back of a video processor, securely compressing and encrypting video and sending it to Virgo’s HIPAA-compliant cloud storage Web portal. Once it’s plugged in, Mr. Schwartz said, it’s “set it and forget it.”
“We try to make it as easy as possible for doctors to record their video – which means we don’t want them to have to do anything different about their normal clinical workflow in order to generate these videos,” Mr. Schwartz said. Physicians don’t even have to press a start or stop button – Virgo’s machine-learning algorithm detects when to start and stop video recording by discerning when the scope is inserted and removed.
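Conceptually, hands-free start/stop comes down to a per-frame classifier plus debouncing: the recorder changes state only after several consecutive frames agree, so a single misclassified frame cannot start or stop a video. The sketch below is a hypothetical illustration of that pattern, not Virgo's actual algorithm:

```python
def recording_segments(in_body_scores, threshold=0.8, debounce=3):
    """Turn per-frame 'scope is inside the body' scores into (start, end) spans.

    A hypothetical classifier scores each frame; the recording state only
    flips after `debounce` consecutive frames vote for the opposite state.
    """
    segments, start, run, recording = [], None, 0, False
    for i, score in enumerate(in_body_scores):
        votes_to_flip = (score >= threshold) != recording
        run = run + 1 if votes_to_flip else 0
        if run >= debounce:
            recording = not recording
            first = i - debounce + 1        # the flip dates from the run's start
            run = 0
            if recording:
                start = first
            else:
                segments.append((start, first))
    if recording:                           # scope still in at end of video
        segments.append((start, len(in_body_scores)))
    return segments

# A single noisy blip (frame 2) is ignored; frames 4-9 are a real insertion.
scores = [0.1, 0.2, 0.9, 0.1, 0.9, 0.95, 0.9, 0.9, 0.85, 0.9, 0.1, 0.2, 0.1]
print(recording_segments(scores))  # [(4, 10)]
```

The same debouncing idea generalizes to the auto-highlight feature: instrument-detection scores per frame collapse into marked intervals rather than recording spans.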
“A goal of ours is to change the paradigm for endoscopy to help make sure that every procedure is captured in HD to the cloud,” he said.
The service also includes an “auto-highlight” feature that detects important moments in the procedure video. It automatically marks points in the video when the physician takes a still image and moments when an instrument, such as a snare or forceps, is present in the field of view. This, Mr. Schwartz said, makes it “easy in playback to focus on important aspects of the procedure.”
There is also a clinical trial screening feature, called “auto IBD,” that involves an algorithm that assesses videos to identify patients most likely to be eligible candidates for clinical trials. Mr. Schwartz said that procedures and patients who might go unconsidered – if they are performed at an affiliated community hospital or at an endoscopy center, for instance – can now be brought to the attention of trial investigators, without the need to comb through hundreds or thousands of candidates.
“We believe there are many more patients with these diseases that are eligible for IBD clinical trials than are currently being exposed to research opportunities within large health systems,” he said.
The proceeds from the AGA’s Opportunity Fund will be used, in part, to expand Virgo’s reach, he added. Virgo’s connection with the AGA began with its participation in the AGA Tech Summit Shark Tank competition in 2018.
“For us, the name of the game is getting Virgo in the hands of as many physicians and health systems as possible,” Mr. Schwartz said. “So we’ll be using these proceeds to build up the team and work on global distribution.” The company is also “looking to refine machine-learning algorithms and build out new features and tools.”
Ziad Gellad, MD, MPH, associate professor of medicine in gastroenterology at Duke University, Durham, N.C., was one of the Opportunity Fund’s earliest member investors.
“I was looking for ways to diversify my portfolio and this was an attractive way to get into an area of investment that is not easily accessible, and so I was excited about that,” said Dr. Gellad, who himself is cofounder of a health start-up that develops software for patient navigation and outcomes collection but is not associated with the fund.
“As a start-up cofounder myself, I understand the needs of founders of companies, especially those in the GI space and appreciate the struggles they face,” Dr. Gellad added. “The opportunity to contribute to that was appealing.”
“I also believe that specialty societies like the AGA need to diversify their funding strategy and I think this is a really innovative way to do that,” he said.
The American Gastroenterological Association has made the first investment through its new venture capital fund – an initiative that gives gastroenterologists a financial opportunity combined with a chance to help companies trying to make a difference in the field.
The fund was established in partnership with Varia Ventures.
The AGA recently announced the fund’s first investment with Carlsbad, Calif.–based Virgo Surgical Video Solutions, which offers endoscopy video recording that uses artificial intelligence for ease of use during procedures, for reviewing video later, and for using video to connect trial investigators with potential candidates.
“While AGA has long guided innovators who share our goal of improving digestive health care, we have doubled down on this commitment by establishing the GI Opportunity Fund,” said Lawrence Kosinski, MD, AGAF, AGA Governing Board Councilor for Development and Growth. “The fund’s first investment – Virgo – exemplifies our pursuit of improved clinical care.”
He said the fund gives physicians a chance to work closely with AGA to invest in difference-making ventures.
“Through our venture fund, gastroenterologists can join AGA to invest in fast-growing, early-stage companies that are transforming care for patients with digestive disease,” Dr. Kosinski said.
Virgo CEO Matthew Z. Schwartz said the company’s product is intended to fill an important need.
“We recognized that it was really difficult for doctors to capture endoscopy procedures video in high-definition at scale,” he said. “Generally, they were just taking still images. And the images were often not of great quality.”
Virgo offers a small device that connects to existing endoscopy equipment, plugging into the back of a video processor, securely compressing and encrypting video and sending it to Virgo’s HIPAA-compliant cloud storage Web portal. Once it’s plugged in, Mr. Schwartz said, it’s “set it and forget it.”
“We try to make it as easy as possible for doctors to record their video – which means we don’t want them to have to do anything different about their normal clinical workflow in order to generate these videos,” Mr. Schwartz said. Physicians don’t even have to press a start or stop button – Virgo’s machine-learning algorithm detects when to start and stop video recording by discerning when the scope is inserted and removed.
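The hands-free start/stop behavior Mr. Schwartz describes can be pictured as a per-frame classifier combined with hysteresis. The sketch below is purely illustrative, not Virgo's proprietary algorithm: `scope_probability` is a hypothetical model that scores each video frame for scope presence, and the thresholds are invented for the example.

```python
def recording_segments(frames, scope_probability, on=0.8, off=0.2, min_len=30):
    """Yield (start, end) frame indices where the scope appears inserted.

    Separate on/off thresholds (hysteresis) prevent flicker around the
    moments of insertion and withdrawal; `min_len` drops spurious blips
    too short to be a real procedure.
    """
    recording = False
    start = 0
    for i, frame in enumerate(frames):
        p = scope_probability(frame)
        if not recording and p >= on:
            recording, start = True, i
        elif recording and p <= off:
            recording = False
            if i - start >= min_len:
                yield (start, i)
    # Close out a segment still open at end of stream.
    if recording and len(frames) - start >= min_len:
        yield (start, len(frames))
```

With a stream of pre-scored frames, `recording_segments([0.1]*5 + [0.9]*40 + [0.1]*5, lambda p: p)` yields a single segment spanning the high-probability run.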
“A goal of ours is to change the paradigm for endoscopy to help make sure that every procedure is captured in HD to the cloud,” he said.
The service also includes an “auto-highlight” feature that detects important moments in the procedure video. It automatically marks points in the video when the physician takes a still image and moments when an instrument, such as a snare or forceps, is present in the field of view. This, Mr. Schwartz said, makes it “easy in playback to focus on important aspects of the procedure.”
There is also a clinical trial screening feature, called “auto IBD,” that involves an algorithm that assesses videos to identify patients most likely to be eligible candidates for clinical trials. Mr. Schwartz said that procedures and patients who might go unconsidered – if they are performed at an affiliated community hospital or at an endoscopy center, for instance – can now be brought to the attention of trial investigators, without the need to comb through hundreds or thousands of candidates.
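Conceptually, a pre-screen like "auto IBD" reduces to ranking procedures by a model-assigned score and surfacing only the top candidates. The sketch below is a guess at the shape of that step; the `ibd_score` field, threshold, and list size are all illustrative assumptions, not Virgo's actual interface.

```python
def shortlist_candidates(procedures, min_score=0.7, top_n=25):
    """Return the highest-scoring procedures as likely trial candidates.

    `procedures` is a list of dicts carrying a hypothetical model-assigned
    "ibd_score". Only scores at or above `min_score` are kept, so trial
    investigators review a short ranked list rather than every video.
    """
    eligible = [p for p in procedures if p["ibd_score"] >= min_score]
    return sorted(eligible, key=lambda p: p["ibd_score"], reverse=True)[:top_n]
```

The point of the design is the ordering: a community-hospital procedure with a high score rises to the top of the same list as one recorded at the main campus.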
“We believe there are many more patients with these diseases that are eligible for IBD clinical trials than are currently being exposed to research opportunities within large health systems,” he said.
The proceeds from the AGA’s Opportunity Fund will be used, in part, to expand Virgo’s reach, he added. Virgo’s connection with the AGA began with its participation in the AGA Tech Summit Shark Tank competition in 2018.
“For us, the name of the game is getting Virgo in the hands of as many physicians and health systems as possible,” Mr. Schwartz said. “So we’ll be using these proceeds to build up the team and work on global distribution.” The company is also “looking to refine machine-learning algorithms and build out new features and tools.”
Ziad Gellad, MD, MPH, associate professor of medicine in gastroenterology at Duke University, Durham, N.C., was one of the Opportunity Fund’s earliest member investors.
“I was looking for ways to diversify my portfolio and this was an attractive way to get into an area of investment that is not easily accessible, and so I was excited about that,” said Dr. Gellad, who himself is cofounder of a health start-up that develops software for patient navigation and outcomes collection but is not associated with the fund.
“As a start-up cofounder myself, I understand the needs of founders of companies, especially those in the GI space and appreciate the struggles they face,” Dr. Gellad added. “The opportunity to contribute to that was appealing.”
“I also believe that specialty societies like the AGA need to diversify their funding strategy and I think this is a really innovative way to do that,” he said.
How to talk with patients in ways that help them feel heard and understood
How do we become those professionals and make sure that we are doing a good job connecting and communicating with our patients?
Here are a few suggestions on how to do this.
Practice intent listening
When a patient shares their symptoms with you, show genuine curiosity and concern. Ask clarifying questions. Ask how the symptom or problem is affecting their day-to-day life. Avoid quick, rapid-fire questions back at the patient. Do not accept a patient self-diagnosis.
When a patient with a first-time headache says they are having a migraine headache, for example, ask many clarifying questions to make sure you can make a diagnosis of headache type, then use all the information you have gathered to educate the patient on what you believe they have.
It is easy to jump to treatment, but we always want to make sure we have the diagnosis correct first. By intently listening, it also makes it much easier to tell a patient you do not know what is causing their symptoms, but that you and the patient will be vigilant for any future clues that may lead to a diagnosis.
Use terminology that patients understand
Rachael Gotlieb, MD, and colleagues published an excellent study with eye-opening results on common phrases we use as health care providers and how often patients do not understand them.
Only 9% of patients understood what was meant when they were asked if they have been febrile. Only 2% understood what was meant by “I am concerned the patient has an occult infection.” Only 21% understood that “your X-ray findings were quite impressive” was bad news.
It is easy to avoid these medical language traps; we just have to check our doctor speak. Ask, “Do you have a fever?” Say, “I am concerned you may have an infection that is hard to find.”
Several other terms we use all the time in explaining things to patients, which I have found most patients do not understand, are bilateral, systemic, and significant. Think carefully as you explain things to patients, and check back by having them repeat to you what they think you said.
Be comfortable saying you don’t know
Many symptoms in medicine end up not being diagnosable. When a patient shares symptoms that do not fit a pattern of a disease, it is important to share with them why you think it is okay to wait and watch, even if you do not have a diagnosis.
Patients find it comforting that you are so honest with them. Doing this also has the benefit of gaining patients’ trust when you are sure about something, because it tells them you don’t have an answer for everything.
Ask your patients what they think is causing their symptoms
This way, you know what their big fear is. You can address what they are worried about, even if it isn’t something you are considering.
Patients are often fearful of a disease a close friend or relative has, so when they get new symptoms, they fear diseases that we might not think of. By knowing what they are fearful of, you can reassure when appropriate.
Dr. Paauw is professor of medicine in the division of general internal medicine at the University of Washington, Seattle, and he serves as third-year medical student clerkship director at the University of Washington. Contact Dr. Paauw at [email protected].
Adverse events reported in one-quarter of inpatient admissions
as indicated by data from 2,809 admissions at 11 hospitals.
The 1991 Harvard Medical Practice Study, which focused on medical injury and litigation, documented an adverse event rate of 3.7 events per 100 admissions; 28% of those events were attributed to negligence, write David W. Bates, MD, of Brigham and Women’s Hospital, Boston, and colleagues.
Although patient safety has changed significantly since 1991, documenting improvements has been challenging, the researchers say. Several reports have shown a decrease in health care–associated infections. However, other aspects of safety – notably, adverse drug events, defined as injuries resulting from drugs taken – are not easily measured and tracked, the researchers say.
“We have not had good estimates of how much harm is being caused by care in hospitals in an ongoing way that looked across all types of adverse events,” and the current review is therefore important, Dr. Bates said in an interview.
In a study recently published in the New England Journal of Medicine, the researchers analyzed a random sample of 2,809 hospital admissions from 11 hospitals in Massachusetts during the 2018 calendar year. The hospitals ranged in size from fewer than 100 beds to more than 700 beds; all patients were aged 18 years and older. A panel of nine nurses reviewed the admissions records to identify potential adverse events, and eight physicians reviewed the adverse event summaries and either agreed or disagreed with the adverse event type. The severity of each event was ranked using a general severity scale into categories of significant, serious, life-threatening, or fatal.
Overall, at least one adverse event was identified in 23.6% of the hospital admissions. A total of 978 adverse events were deemed to have occurred during the index admission, and 222 of these (22.7%) were deemed preventable. Among the preventable adverse events, 19.7% were classified as serious, 3.3% as life-threatening, and 0.5% as fatal.
A total of 523 admissions (18.6%) involved at least one significant adverse event, defined as an event that caused unnecessary harm but from which recovery was rapid. A total of 211 admissions involved a serious adverse event, defined as harm resulting in substantial intervention or prolonged recovery; 34 included at least one life-threatening event; and seven admissions involved a fatal adverse event.
A total of 191 admissions involved at least one adverse event deemed preventable. Of those, 29 involved at least one preventable adverse event that was serious, life-threatening, or fatal, the researchers write. Of the seven deaths in the study population, one was deemed preventable.
The most common adverse events were adverse drug events, which accounted for 39.0% of the adverse events; surgical or other procedural events accounted for 30.4%; patient care events (including falls and pressure ulcers) accounted for 15.0%; and health care–associated infections accounted for 11.9%.
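The reported percentages can be checked directly against the raw counts given above:

```python
# Recomputing the headline figures from the counts reported in the study.
admissions = 2809             # sampled admissions at 11 Massachusetts hospitals
adverse_events = 978          # adverse events during the index admissions
preventable_events = 222      # of which deemed preventable
significant_admissions = 523  # admissions with at least one significant event

preventable_share = f"{preventable_events / adverse_events:.1%}"  # "22.7%"
significant_share = f"{significant_admissions / admissions:.1%}"  # "18.6%"
```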
Overcoming barriers to better safety
“The overall level of harm, with nearly 1 in 4 patients suffering an adverse event, was higher than I expected it might be,” Dr. Bates told this news organization. However, techniques for identifying adverse events have improved, and “it is easier to find them in electronic records than in paper records,” he noted.
“Hospitals have many issues they are currently dealing with since COVID, and one issue is simply prioritization,” Dr. Bates said. “But it is now possible to measure harm for all patients using electronic tools, and if hospitals know how much harm they are having in specific areas, they can make choices about which ones to focus on.”
“We now have effective prevention strategies for most of the main kinds of harm,” he said. Generally, rates of harm are high because these strategies are not being used effectively, he said. “In addition, there are new tools that can be used – for example, to identify patients who are decompensating earlier,” he noted.
As for additional research, some specific types of harm that have been resistant to interventions, such as pressure ulcers, deserve more attention, said Dr. Bates. “In addition, diagnostic errors appear to cause a great deal of harm, but we don’t yet have good strategies for preventing these,” he said.
The study findings were limited by several factors, including the use of data from hospitals that might not represent hospitals at large and by the inclusion mainly of patients with private insurance, the researchers write. Other limitations include the likelihood that some adverse events were missed and the fact that agreement between adjudicators on adverse events was only fair.
However, the findings serve as a reminder to health care professionals of the need for continued attention to improving patient safety, and measuring adverse events remains a critical part of guiding these improvements, the researchers conclude.
Timely reassessment and opportunities to improve
In the decades since the publication of the report, “To Err Is Human,” by the National Academies in 2000, significant attention has been paid to improving patient safety during hospitalizations, and health care systems have increased in both system and disease complexity, Said Suman Pal, MBBS, a specialist in hospital medicine at the University of New Mexico, Albuquerque, said in an interview. “Therefore, this study is important in reassessing the safety of inpatient care at the current time,” he said.
“The findings of this study, showing preventable adverse events in approximately 7% of all admissions, while concerning, are not surprising, as they are consistent with other studies over time, as the authors have also noted in their discussion,” said Dr. Pal. The current findings “underscore the importance of continuous quality improvement efforts to increase the safety of patient care for hospitalized patients,” he noted.
“The increasing complexity of medical care, fragmentation of health care, structural inequities of health systems, and more recent widespread public health challenges such as the COVID-19 pandemic have been, in my opinion, barriers to improving patient safety,” Dr. Pal said. “The use of innovation and an interdisciplinary approach to patient safety and quality improvement in hospital-based care, such as the use of machine learning to monitor trends and predict the individualized risk of harm, could be a potential way out” to help reduce barriers and improve safety, he said.
“Additional research is needed to understand the key drivers of preventable harm for hospitalized patients in the United States,” said Dr. Pal. “When planning for change, keen attention must be paid to understanding how these [drivers] may differ for patients who have been historically marginalized or are otherwise underserved so as to not exacerbate health care inequities,” he added.
The study was funded by the Controlled Risk Insurance Company and the Risk Management Foundation of the Harvard Medical Institutions. Dr. Bates owns stock options with AESOP, Clew, FeelBetter, Guided Clinical Solutions, MDClone, and ValeraHealth and has grants/contracts from IBM Watson and EarlySense. He has also served as a consultant for CDI Negev. Dr. Pal has disclosed no relevant financial relationships.
A version of this article first appeared on Medscape.com.
However, the findings serve as a reminder to health care professionals of the need for continued attention to improving patient safety, and measuring adverse events remains a critical part of guiding these improvements, the researchers conclude.
Timely reassessment and opportunities to improve
In the decades since the publication of the report, “To Err Is Human,” by the National Academies in 2000, significant attention has been paid to improving patient safety during hospitalizations, and health care systems have increased in both system and disease complexity, Said Suman Pal, MBBS, a specialist in hospital medicine at the University of New Mexico, Albuquerque, said in an interview. “Therefore, this study is important in reassessing the safety of inpatient care at the current time,” he said.
“The findings of this study showing preventable adverse events in approximately 7% of all admissions; while concerning, is not surprising, as it is consistent with other studies over time, as the authors have also noted in their discussion,” said Dr. Pal. The current findings “underscore the importance of continuous quality improvement efforts to increase the safety of patient care for hospitalized patients,” he noted.
“The increasing complexity of medical care, fragmentation of health care, structural inequities of health systems, and more recent widespread public health challenges such as the COVID-19 pandemic have been, in my opinion, barriers to improving patient safety,” Dr. Pal said. “The use of innovation and an interdisciplinary approach to patient safety and quality improvement in hospital-based care, such as the use of machine learning to monitor trends and predict the individualized risk of harm, could be a potential way out” to help reduce barriers and improve safety, he said.
“Additional research is needed to understand the key drivers of preventable harm for hospitalized patients in the United States,” said Dr. Pal. “When planning for change, keen attention must be paid to understanding how these [drivers] may differ for patients who have been historically marginalized or are otherwise underserved so as to not exacerbate health care inequities,” he added.
The study was funded by the Controlled Risk Insurance Company and the Risk Management Foundation of the Harvard Medical Institutions. Dr. Bates owns stock options with AESOP, Clew, FeelBetter, Guided Clinical Solutions, MDClone, and ValeraHealth and has grants/contracts from IBM Watson and EarlySense. He has also served as a consultant for CDI Negev. Dr. Pal has disclosed no relevant financial relationships.
A version of this article first appeared on Medscape.com.
Nearly one in four hospital admissions included at least one adverse event, as indicated by data from 2,809 admissions at 11 hospitals.
The 1991 Harvard Medical Practice Study, which focused on medical injury and litigation, documented an adverse event rate of 3.7 events per 100 admissions; 28% of those events were attributed to negligence, write David W. Bates, MD, of Brigham and Women’s Hospital, Boston, and colleagues.
Although patient safety has changed significantly since 1991, documenting improvements has been challenging, the researchers say. Several reports have shown a decrease in health care–associated infections. However, other aspects of safety – notably, adverse drug events, defined as injuries caused by medications taken – are not easily measured and tracked, the researchers say.
“We have not had good estimates of how much harm is being caused by care in hospitals in an ongoing way that looked across all types of adverse events,” and the current review is therefore important, Dr. Bates said in an interview.
In a study recently published in the New England Journal of Medicine, the researchers analyzed a random sample of 2,809 hospital admissions from 11 hospitals in Massachusetts during the 2018 calendar year. The hospitals ranged in size from fewer than 100 beds to more than 700 beds; all patients were aged 18 years and older. A panel of nine nurses reviewed the admissions records to identify potential adverse events, and eight physicians reviewed the adverse event summaries and either agreed or disagreed with the adverse event type. The severity of each event was ranked using a general severity scale into categories of significant, serious, life-threatening, or fatal.
Overall, at least one adverse event was identified in 23.6% of the hospital admissions. A total of 978 adverse events were deemed to have occurred during the index admission, and 222 of these (22.7%) were deemed preventable. Among the preventable adverse events, 19.7% were classified as serious, 3.3% as life-threatening, and 0.5% as fatal.
A total of 523 admissions (18.6%) involved at least one significant adverse event, defined as an event that caused unnecessary harm but from which recovery was rapid. A total of 211 admissions involved a serious adverse event, defined as harm resulting in substantial intervention or prolonged recovery; 34 included at least one life-threatening event; and seven admissions involved a fatal adverse event.
A total of 191 admissions involved at least one adverse event deemed preventable. Of those, 29 involved at least one preventable adverse event that was serious, life-threatening, or fatal, the researchers write. Of the seven deaths in the study population, one was deemed preventable.
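For readers checking the arithmetic, the reported proportions follow directly from the raw counts quoted above (a minimal sketch; the counts are those given in the study summary, not independent data):

```python
# Sanity-check the proportions reported in the NEJM study summary.
total_admissions = 2809     # random sample of admissions across 11 hospitals
total_events = 978          # adverse events during the index admission
preventable_events = 222    # events judged preventable
preventable_admissions = 191  # admissions with >= 1 preventable event

# Share of adverse events judged preventable (reported as 22.7%)
print(f"{preventable_events / total_events:.1%}")

# Share of all admissions with a preventable event (the "approximately 7%"
# figure cited later in the article)
print(f"{preventable_admissions / total_admissions:.1%}")
```

Running this reproduces the 22.7% figure for preventable events and shows that 191 of 2,809 admissions works out to roughly 6.8%, i.e., the "approximately 7%" cited in the commentary below.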
The most common adverse events were adverse drug events, which accounted for 39.0% of the adverse events; surgical or other procedural events accounted for 30.4%; patient care events (including falls and pressure ulcers) accounted for 15.0%; and health care–associated infections accounted for 11.9%.
Overcoming barriers to better safety
“The overall level of harm, with nearly 1 in 4 patients suffering an adverse event, was higher than I expected it might be,” Dr. Bates told this news organization. However, techniques for identifying adverse events have improved, and “it is easier to find them in electronic records than in paper records,” he noted.
“Hospitals have many issues they are currently dealing with since COVID, and one issue is simply prioritization,” Dr. Bates said. “But it is now possible to measure harm for all patients using electronic tools, and if hospitals know how much harm they are having in specific areas, they can make choices about which ones to focus on.”
“We now have effective prevention strategies for most of the main kinds of harm,” he said. Generally, rates of harm are high because these strategies are not being used effectively, he said. “In addition, there are new tools that can be used – for example, to identify patients who are decompensating earlier,” he noted.
As for additional research, some specific types of harm that have been resistant to interventions, such as pressure ulcers, deserve more attention, said Dr. Bates. “In addition, diagnostic errors appear to cause a great deal of harm, but we don’t yet have good strategies for preventing these,” he said.
The study findings were limited by several factors, including the use of data from hospitals that might not represent hospitals at large and the inclusion mainly of patients with private insurance, the researchers write. Other limitations include the likelihood that some adverse events were missed and the fact that the level of agreement on adverse events between adjudicators was only fair.
However, the findings serve as a reminder to health care professionals of the need for continued attention to improving patient safety, and measuring adverse events remains a critical part of guiding these improvements, the researchers conclude.
Timely reassessment and opportunities to improve
In the decades since the publication of the report "To Err Is Human" by the National Academies in 2000, significant attention has been paid to improving patient safety during hospitalizations, while health care systems have increased in both system and disease complexity, said Suman Pal, MBBS, a specialist in hospital medicine at the University of New Mexico, Albuquerque, in an interview. "Therefore, this study is important in reassessing the safety of inpatient care at the current time," he said.
"The findings of this study, showing preventable adverse events in approximately 7% of all admissions, while concerning, are not surprising, as they are consistent with other studies over time, as the authors have also noted in their discussion," said Dr. Pal. The current findings "underscore the importance of continuous quality improvement efforts to increase the safety of patient care for hospitalized patients," he noted.
“The increasing complexity of medical care, fragmentation of health care, structural inequities of health systems, and more recent widespread public health challenges such as the COVID-19 pandemic have been, in my opinion, barriers to improving patient safety,” Dr. Pal said. “The use of innovation and an interdisciplinary approach to patient safety and quality improvement in hospital-based care, such as the use of machine learning to monitor trends and predict the individualized risk of harm, could be a potential way out” to help reduce barriers and improve safety, he said.
“Additional research is needed to understand the key drivers of preventable harm for hospitalized patients in the United States,” said Dr. Pal. “When planning for change, keen attention must be paid to understanding how these [drivers] may differ for patients who have been historically marginalized or are otherwise underserved so as to not exacerbate health care inequities,” he added.
The study was funded by the Controlled Risk Insurance Company and the Risk Management Foundation of the Harvard Medical Institutions. Dr. Bates owns stock options with AESOP, Clew, FeelBetter, Guided Clinical Solutions, MDClone, and ValeraHealth and has grants/contracts from IBM Watson and EarlySense. He has also served as a consultant for CDI Negev. Dr. Pal has disclosed no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM THE NEW ENGLAND JOURNAL OF MEDICINE