Albuminuria


To the Editor: Stephen et al1 have written a nice review of the implications of albuminuria. However, they are clearly incorrect when they state, “Most of the protein in the urine is albumin filtered from the plasma.”1 First, as they later point out in the article, the normal upper limit of protein excretion is about 150 mg/day, and only about 20 mg/day of that is normally albumin. Therefore, most of the protein normally found in urine is not albumin but rather a variety of globulins; Tamm-Horsfall mucoprotein (uromodulin) is usually the protein found in highest concentration in normal urine.

References
  1. Stephen R, Jolly SE, Nally JV, Navaneethan SD. Albuminuria: when urine predicts kidney and cardiovascular disease. Cleve Clin J Med 2014; 81:41–50.
Author and Disclosure Information

Michael Emmett, MD, MACP
Baylor University Medical Center, Dallas, TX

Issue
Cleveland Clinic Journal of Medicine - 81(6)
Page Number
345


Letter to the Editor

In response to “Patient‐Centered Blood Management”

Dr. Horwitz appropriately notes that anemia of chronic disease may be better described as anemia of inflammation.[1, 2] We used the more traditional nomenclature for the sake of clarity for readers who may not be familiar with the newer terminology. We agree that there is often a role for hematology evaluation, but believe it should be based on the needs of the individual case and of the organization. To be effective, any algorithm must preserve some flexibility for clinical judgment and must also meet the needs of local stakeholders. We recommend that, before a preoperative anemia algorithm is implemented, it be reviewed by the appropriate parties, typically including, but not limited to, clinical leadership in anesthesia, surgery, medicine, and hematology.

References
  1. Hohmuth B, Ozawa S, Ashton M, Melseth RL. Patient‐centered blood management. J Hosp Med. 2014;9:60–65.
  2. Weiss G, Goodnough LT. Anemia of chronic disease. N Engl J Med. 2005;352:1011–1023.
Issue
Journal of Hospital Medicine - 9(7)
Page Number
479
Article Source
© 2014 Society of Hospital Medicine

Letter to the Editor

In reference to “Patient‐Centered Blood Management”

There is one thing I would like to add to the flowsheet for management of preoperative anemia that Hohmuth et al.[1] incorporated into their recent review, “Patient‐Centered Blood Management.” Anemia of chronic disease, now called anemia of chronic inflammation,[2] is still a diagnosis of exclusion, and I would recommend a hematology evaluation before the final step of starting a patient on erythropoietic agents.

References
  1. Hohmuth B, Ozawa S, Ashton M, Melseth RL. Patient‐centered blood management. J Hosp Med. 2014;9(1):60–65.
  2. Adamson J. The anemia of chronic inflammation. In: Balducci L, Ershler WB, Bennett JM, eds. Anemia in the Elderly. New York, NY: Springer; 2007:51–59.
Issue
Journal of Hospital Medicine - 9(7)
Page Number
478

Letter to the Editor

What did we do before mHealth?

We agree with Drs. Arora and Mahmud that emerging mobile health (mHealth) approaches to improving patient engagement will need to demonstrate their value to advance health and healthcare. The potential for mHealth to do this has been often described[1, 2] but, so far, rarely measured or demonstrated.

The technology costs of our tablet‐based intervention[3] were low: 2 iPads at $400 each. The real expense was for personnel: research assistants needed to teach patients how to use the technology effectively. In the future, we hope to shift device and software orientation to patient‐care assistants, nurses, or even “digital assistants,” nonmedical personnel who have technical expertise with the health‐related devices and software needed to engage with the electronic health record and educational materials. Thus, at least part of the challenge of cost‐effectiveness, aside from improved outcomes, will be demonstrating eventual time savings for providers who no longer need to hand-deliver or explain paper pamphlets or printouts, or shepherd patients through their digitally assisted education.

One day we may muse, “What did we do before mHealth?” much as we might now when using mobile technologies for nonhealth‐related tasks like getting directions or making a call. Indeed, who can remember the last time they routinely used a paper map or phone book for these daily tasks? Our prescription for tablets is a step in that direction, but we will also need to reimagine patient education and related daily tasks at the hospital and system level to realize the potential for lower costs and higher-quality care that mHealth can achieve.[4]

References
  1. Steinhubl SR, Muse ED, Topol EJ. Can mobile health technologies transform health care? JAMA. 2013;310(22):2395–2396.
  2. Free C, Phillips G, Watson L, et al. The effectiveness of mobile‐health technologies to improve health care service delivery processes: a systematic review and meta‐analysis. PLoS Med. 2013;10(1):e1001363.
  3. Greysen SR, Khanna RR, Jacolbia R, Lee HM, Auerbach AD. Tablet computers for hospitalized patients: a pilot study to improve inpatient engagement [published online ahead of print February 13, 2013]. J Hosp Med. doi: 10.1002/jhm.2169.
  4. Prey JE, Woollen J, Wilcox L, et al. Patient engagement in the inpatient setting: a systematic review [published online ahead of print November 22, 2013]. J Am Med Inform Assoc. doi: 10.1136/amiajnl‐2013‐002141.
Issue
Journal of Hospital Medicine - 9(8)
Page Number
551

Letter to the Editor

Tablets: The new prescription?

We are pleased to see positive results from the use of tablet computers (tablets) in engaging patients, as presented by Greysen and colleagues.[1] Patient engagement is correlated with better patient‐reported health outcomes.[2] But how do we justify any additional costs in the current climate?

The answer lies in the value delivered.[3] Achieving high‐value care means delivering the best outcomes at the lowest cost. Indeed, a growing number of studies are demonstrating improved outcomes with mobile technology. In Cleveland, tablet‐based self‐reporting in cancer patients improved communication of symptoms to physicians.[4] In Australia, chronic obstructive pulmonary disease patients engaged in tablet‐facilitated physical rehabilitation reported improved symptoms and exercise tolerance.[5] In Haiti, tablet‐delivered education sustainably improved knowledge of human immunodeficiency virus prevention and behavior among internally displaced women.[6]

What the extant literature lacks, however, is studies demonstrating the cost‐effectiveness of mobile interventions. Digital platforms are unlikely to gain traction without these data. Some exceptions exist, but they are in the minority.[7] It is clear that engaged patients demonstrate better outcomes; future studies exploring the use of digital platforms would be well advised to include measures of cost‐effectiveness to build a true value‐based rationale for their integration into daily practice.

References
  1. Greysen SR, Khanna RR, Jacolbia R, Lee HM, Auerbach AD. Tablet computers for hospitalized patients: a pilot study to improve inpatient engagement [published online ahead of print February 13, 2014]. J Hosp Med. doi: 10.1002/jhm.2169.
  2. Simmons LA, Wolever RQ, Bechard EM, Snyderman R. Patient engagement as a risk factor in personalized health care: a systematic review of the literature on chronic disease. Genome Med. 2014;6(2):16.
  3. Porter ME, Lee TH. The strategy that will fix health care. Harvard Business Review. 2013;91(10):50–70.
  4. Aktas A, Hullihen B, Shrotriya S, Thomas S, Walsh D, Estfan B. Connected health: cancer symptom and quality‐of‐life assessment using a tablet computer: a pilot study [published online ahead of print November 7, 2013]. Am J Hosp Palliat Care. doi: 10.1177/1049909113510963.
  5. Holland AE, Hill CJ, Rochford P, Fiore J, Berlowitz DJ, McDonald CF. Telerehabilitation for people with chronic obstructive pulmonary disease: feasibility of a simple, real time model of supervised exercise training. J Telemed Telecare. 2013;19(4):222–226.
  6. Logie CH, Daniel C, Newman PA, Weaver J, Loutfy MR. A psycho‐educational HIV/STI prevention intervention for internally displaced women in Leogane, Haiti: results from a non‐randomized cohort pilot study. PLoS One. 2014;9(2):e89836.
  7. Marcano Belisario JS, Huckvale K, Greenfield G, Car J, Gunn LH. Smartphone and tablet self management apps for asthma. Cochrane Database Syst Rev. 2013;11:CD010013.
Issue
Journal of Hospital Medicine - 9(8)
Page Number
552

Letter to the Editor

In response to “(Re)turning the pages of residency: The impact of localizing resident physicians to hospital units on paging frequency”

Fanucchi et al. provide compelling evidence that geographic localization decreases paging frequency in a dose‐dependent fashion.[1] However, the study's inability to capture the burden of face‐to‐face interruptions for localized teams undermines the conclusion that reduced paging will decrease resident workload and increase physician efficiency.

Although in‐person communications are less prone to error, psychological research suggests it is the actual interruption (and not just the modality) that disrupts cognitive processes and thus impedes problem solving, decision making, patient care efficiency, and safety.[2]

One study based in a teaching hospital emergency room (an effectively completely geographically localized care setting) found that attending physicians were interrupted once every 13.8 minutes on average. Only 1.86% of intrusions were from pages; 85.7% were face‐to‐face interruptions by nurses or medical staff.[3] Anecdotal evidence after restructuring our hospital's housestaff medicine teams to a geographic model was analogous. Such frequent disruptions would contradict Fanucchi et al.'s claim that “direct communication[s] lead to fewer overall interruptions,”[1] and would nullify the benefit of decreased paging.

Geographic localization offers potential advantages. However, rigorous scrutiny measuring the combined burden of pager and in‐person interruptions is needed to know whether these advantages translate into tangible workflow benefits.

References
  1. Fanucchi L, Unterbrink M, Logio LS. (Re)turning the pages of residency: the impact of localizing resident physicians to hospital units on paging frequency. J Hosp Med. 2014;9(2):120–122.
  2. Li SYW, Magrabi F, Coiera E. A systematic review of the psychological literature on interruption and its patient safety implications. J Am Med Inform Assoc. 2012;19(1):6–12.
  3. Friedman SM, Elinson R, Arenovich T. A study of emergency physician work and communication: a human factors approach. Isr J Em Med. 2005;5(3):35–42.
Issue
Journal of Hospital Medicine - 9(7)
Page Number
476
Sections
Article PDF
Article PDF

Fanucchi et al. provide compelling evidence that geographic localization decreases pager frequency in a dose‐dependent fashion.[1] However, the study's inability to capture the burden of face‐to‐face interruptions for localized teams undermines their conclusion that reduced paging will decrease resident workload and increase physician efficiency.

Although in‐person communications are less prone to error, psychological research suggests it is the actual interruption (and not just the modality) that disrupts cognitive processes and thus impedes problem solving, decision making, patient care efficiency, and safety.[2]

One study based in a teaching hospital emergency room (an effectively completely geographically localized care setting) found that attending physicians were interrupted once every 13.8 minutes on average. Only 1.86% of intrusions were from pages; 85.7% were face‐to‐face interruptions by nurses or medical staff.[3] Anecdotal evidence after restructuring our hospital's housestaff medicine teams to a geographic model was analogous. Such frequent disruptions would contradict Fanucchi et al.'s claim that direct communication[s]lead to fewer overall interruptions,[1] and would nullify the benefit of decreased paging.

Geographic localization offers potential advantages. However, rigorous scrutiny measuring amalgamate pager and in‐person interruptions is needed to know whether these translate into tangible workflow benefits.

Fanucchi et al. provide compelling evidence that geographic localization decreases pager frequency in a dose‐dependent fashion.[1] However, the study's inability to capture the burden of face‐to‐face interruptions for localized teams undermines their conclusion that reduced paging will decrease resident workload and increase physician efficiency.

Although in‐person communications are less prone to error, psychological research suggests it is the actual interruption (and not just the modality) that disrupts cognitive processes and thus impedes problem solving, decision making, patient care efficiency, and safety.[2]

One study based in a teaching hospital emergency room (an effectively completely geographically localized care setting) found that attending physicians were interrupted once every 13.8 minutes on average. Only 1.86% of intrusions were from pages; 85.7% were face‐to‐face interruptions by nurses or medical staff.[3] Anecdotal evidence after restructuring our hospital's housestaff medicine teams to a geographic model was analogous. Such frequent disruptions would contradict Fanucchi et al.'s claim that direct communication[s]lead to fewer overall interruptions,[1] and would nullify the benefit of decreased paging.

Geographic localization offers potential advantages. However, rigorous study measuring the aggregate of pager and in‐person interruptions is needed to know whether these advantages translate into tangible workflow benefits.

References
  1. Fanucchi L, Unterbrink M, Logio LS. (Re)turning the pages of residency: the impact of localizing resident physicians to hospital units on paging frequency. J Hosp Med. 2014;9(2):120–122.
  2. Li SYW, Magrabi F, Coiera E. A systematic review of the psychological literature on interruption and its patient safety implications. J Am Med Inform Assoc. 2012;19(1):6–12.
  3. Friedman SM, Elinson R, Arenovich T. A study of emergency physician work and communication: a human factors approach. Isr J Em Med. 2005;5(3):35–42.
Issue
Journal of Hospital Medicine - 9(7)
Page Number
476-476
Display Headline
In response to “(Re)turning the pages of residency: The impact of localizing resident physicians to hospital units on paging frequency”
Article Source
© 2014 Society of Hospital Medicine

Letter to the Editor

Display Headline
Authors' reply: “(Re)turning the pages of residency: The impact of localizing resident physicians to hospital units on paging frequency”

We acknowledge that our inability to measure in‐person interruptions is a limitation of our study. We maintain that while in‐person interruptions may increase in geographically localized patient care units, this form of direct face‐to‐face communication is more effective and efficient, and it decreases the latent errors inherent in alphanumeric paging.

Dr. Gandiga cites a study conducted in an emergency department where the vast majority of interruptions to attending physicians were in person, from nurses or medical staff. We feel that this study cannot be extrapolated to medical floors, as the workflow and patient flow in an emergency department differ substantially from those on a medical floor. The continuous throughput of patients in an emergency department requires ongoing and frequent communication among the members of the care team. In addition, the physicians in that study received an average of 1 page in 12 hours, compared with greater than 25 in 12 hours for our interns on a localized service, which illustrates the problem with comparing the emergency department to a localized medical floor.[1, 2]

We believe that the benefits of geographically localized care models, which include dramatic decreases in paging, improved efficiency, and greater agreement on the plan of care, outweigh the probable increases in in‐person interruptions. Additional study is indeed warranted to further clarify this discussion.

References
  1. Friedman SM, Elinson R, Arenovich T. A study of emergency physician work and communication: a human factors approach. Isr J Em Med. 2005;5(3):35–42.
  2. Fanucchi L, Unterbrink M, Logio LS. (Re)turning the pages of residency: the impact of localizing resident physicians to hospital units on paging frequency. J Hosp Med. 2014;9(2):120–122.
Issue
Journal of Hospital Medicine - 9(7)
Page Number
477-477
Article Source
© 2014 Society of Hospital Medicine

Letters to the Editor

Display Headline
In response to “It's safety, not the score, that needs improvement”

After reading the letter to the editor from Neil Goldfarb, we are concerned that the focus of our study[1] was misinterpreted. Upon reviewing the methodology for Leapfrog's Hospital Safety Score in May 2013, we were surprised to find that Leapfrog uses 2 separate scoring methodologies, depending on whether the hospital participates in the Leapfrog Hospital Survey. Survey participants are scored from 26 measures, whereas nonparticipants are scored from only 18 measures (3 of which are imputed from other data sources), with recalibrated weightings for each measure. Measuring and publicly disclosing hospital information are paramount to improving safety and quality, and we applaud Leapfrog for taking a leading role in this. However, our report demonstrated that Leapfrog's Hospital Safety Score, because it is derived through 2 separate methodologies, may result in unintended inconsistency or misinterpretation.

We believe Mr. Goldfarb misunderstood our notion of statistical significance. In the report, we acknowledged that the mean score differences between participating and nonparticipating hospitals in our sample were not statistically significant, possibly due to small sample size. However, this was not the focus of our report. Utilizing a mean imputation approach, we rescored the nonparticipating hospitals in our sample as if they had participated in the Leapfrog Hospital Survey. The differences between the original nonparticipant scores and their respective participant estimations were not statistically significant. However, due to the cutoff points Leapfrog uses to assign letter grades, these differences resulted in a letter grade change for many of the nonparticipating hospitals in our sample.
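The mechanism described above can be sketched in a few lines of Python. Everything here is invented for illustration: the measure values, the population mean, and the grade cutoffs are hypothetical stand-ins, not Leapfrog's actual measures, weightings, or thresholds.

```python
# Hypothetical sketch: a small, statistically unremarkable score shift
# produced by rescoring with imputed measures can still cross a
# letter-grade cutoff. All numbers and cutoffs are invented.

ALL_MEASURES = ["m1", "m2", "m3", "m4"]
POPULATION_MEAN = 2.8  # hypothetical mean of unreported measures across hospitals

def letter_grade(score, cutoffs=((2.6, "A"), (2.0, "B"))):
    """Assign a grade from fixed cutoff points; below all cutoffs -> C."""
    for threshold, grade in cutoffs:
        if score >= threshold:
            return grade
    return "C"

# A nonparticipating hospital reports only two of the four measures.
reported = {"m1": 2.4, "m2": 2.6}

# Scored as a nonparticipant: average over the reported measures only.
nonparticipant_score = sum(reported.values()) / len(reported)  # 2.5

# Rescored as if participating: fill the missing measures with the
# population mean, then average over all four.
full = {k: reported.get(k, POPULATION_MEAN) for k in ALL_MEASURES}
participant_score = sum(full.values()) / len(full)  # 2.65

# The 0.15-point difference is small, but it crosses the A/B cutoff.
print(letter_grade(nonparticipant_score), letter_grade(participant_score))
```

The point is not the arithmetic but the discontinuity: a fixed cutoff converts a continuous, possibly insignificant score difference into a categorical grade change, which is the inconsistency the report describes.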

We wish to clarify that a hospital's choice to participate or not to participate in the Leapfrog Hospital Survey is not a reflection of its willingness to promote patient safety. Hospitals voluntarily report data to numerous private organizations and are required to report hundreds of quality and safety measures to government agencies. The 26 (or 18) measures included in Leapfrog's Hospital Safety Score are merely a fraction of the measures hospitals already report.

Finally, we regret that our brief report has been mischaracterized by Neil Goldfarb as being clearly biased against the work of the Leapfrog Group. This is far from our intent. Throughout the manuscript, we repeatedly acknowledge Leapfrog's contributions to patient safety improvement; our work does not intend to discredit Leapfrog's hard‐earned reputation. We recommend that Leapfrog produce 2 separate reports for participating and nonparticipating hospitals to maintain clarity. Our research followed academic protocol, underwent a stringent peer‐review process, and included full disclosure of any potential conflicts of interest. We hope our analysis will contribute to the continuing improvement of Leapfrog's hospital patient safety reporting.

References
  1. Hwang W, Derk J, Laclair M, Paz H. Hospital patient safety grades may misrepresent hospital performance. J Hosp Med. 2014;9(2):111–115.
Issue
Journal of Hospital Medicine - 9(4)
Page Number
275-275
Article Source
© 2014 Society of Hospital Medicine

Letters to the Editor

Display Headline
It's safety, not the score, that needs improvement

As the Executive Director of a purchaser coalition that has been promoting hospital participation in the Leapfrog Hospital Survey in our region, I found the brief report from Hwang and colleagues, “Hospital Patient Safety Grades May Misrepresent Hospital Performance,”[1] troubling. Putting aside the methodological vagaries and the lack of statistical significance of the findings, the authors have a clear bias against the work of the Leapfrog Group. As acknowledged in the disclosures, their institution does not participate in the Leapfrog Hospital Survey. What is not acknowledged is that their institution has not performed particularly well on the Hospital Safety Score.

The authors note in their introduction that according to Leapfrog, 4 to 6 days are required for a hospital to compile the necessary survey data with an additional 90‐minute time commitment to enter the data, and state that this is a significant time commitment for many hospitals. Although it undoubtedly is a significant time commitment, apparently more than 1400 hospitals have found the time and made a commitment to measuring and publicly disclosing information that will help consumers, purchasers, and health plans identify and select safer, higher‐quality care providers. In addition, many studies have shown that public reporting helps to drive providers to improve. In the 12 years since To Err Is Human[2] was published, nothing suggests that the number of deaths associated with medical errors has diminished; in fact, a recent study suggested that over 400,000 deaths may occur annually due to errors.[3] In light of these ongoing safety concerns, is a commitment of 4 to 6 days really too large an investment?

It is time that America's hospitals stopped whining about the burden of public reporting and recognized that their customers have a right to, and are starting to demand, better data on quality, safety, and costs of care. If the Hospital Safety Score is indeed biased against nonreporting hospitals (and I remain unconvinced from this poorly designed study that it is), the main message of the article should have been that hospitals need to start reporting their data, not that the Leapfrog Group needs to change its methodology.

References
  1. Hwang W, Derk J, LaClair M, et al. Hospital patient safety grades may misrepresent hospital performance. J Hosp Med. 2014;9(2):111–115.
  2. Kohn LT, Corrigan JM, Donaldson MS, eds; Institute of Medicine. To Err Is Human: Building a Safer Health System. Washington, DC: National Academy Press; 2000.
  3. James JT. A new, evidence‐based estimate of patient harms associated with hospital care. J Patient Saf. 2013;9:122–128.
Issue
Journal of Hospital Medicine - 9(4)
Page Number
274-274
Article Source
© 2014 Society of Hospital Medicine

Problems with myocardial infarction definitions


To the Editor: In the December 2013 Cleveland Clinic Journal of Medicine, Tehrani and Seto provide a review of the updated definitions of myocardial infarction (MI).1 A key concept incorporated into the structured definitions is that cardiac biomarkers must be interpreted in a clinical context.2 This in turn helps better align the laboratory and clinical findings with the pathophysiologic processes.

However, there is another dimension to the definitions that is sometimes overlooked and requires careful attention: translation of the definitions into codes and comparable databases. Accurate and consistent coding according to the International Statistical Classification of Diseases, ninth revision (ICD-9), and the ICD-10 is critical to the appropriate analysis of data, research, quality measurement, and reimbursement of services related to MI. Unfortunately, there is no straightforward translation of the definitions into ICD-9 codes, and the challenge is further compounded with ICD-10, which will be implemented in October 2014.

The ICD-10-CM Index to Diseases does not yet recognize this nomenclature. ST-elevation MI is the default for the unspecified term “acute MI.” Non-ST-elevation MI requires more explicit documentation and is classified based on whether it occurs during or after a variety of procedures. Type 2 MI is particularly challenging because of the several possible ways to code the condition—for example, as acute subendocardial MI (I21.4), demand ischemia (I24.8), or acute MI, unspecified (I21.9). Coding guidelines are assumed to standardize the approach to coding these conditions, but there is no guarantee that the comparability of the data will survive biases in code assignment. Although extreme precision in disease capture by coding may not exist, other clinical conditions correlate better with their coding classifications, such as the stages of chronic kidney disease, which range from stage 1 through end-stage renal disease (N18.1 through N18.6). Furthermore, ICD-10 codes are insufficient to clearly distinguish the type of acute MI.3
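The comparability problem can be made concrete with a minimal sketch. The three candidate ICD-10-CM codes are the ones named above; the two hospitals, their caseloads, and their coding preferences are hypothetical.

```python
# The same clinical condition (type 2 MI) can legitimately be coded
# several ways, so registry counts depend on local coding habits.
# Codes are from the letter; hospitals and cases are hypothetical.

TYPE2_MI_CODES = {"I21.4", "I24.8", "I21.9"}

# Two hypothetical hospitals, each coding five type 2 MI cases,
# but with different local preferences among the candidate codes.
hospital_a = ["I21.4", "I21.4", "I21.4", "I21.4", "I24.8"]
hospital_b = ["I24.8", "I24.8", "I21.9", "I24.8", "I24.8"]

def count_exact(cases, code):
    """Naive registry query: count cases carrying one specific code."""
    return sum(1 for c in cases if c == code)

def count_any(cases, codes):
    """Count cases carrying any code in a candidate set."""
    return sum(1 for c in cases if c in codes)

# A registry that equates type 2 MI with I21.4 alone sees very different
# rates at the two hospitals despite identical clinical caseloads.
print(count_exact(hospital_a, "I21.4"))  # 4
print(count_exact(hospital_b, "I21.4"))  # 0

# Querying the whole candidate set restores comparability here.
print(count_any(hospital_a, TYPE2_MI_CODES))  # 5
print(count_any(hospital_b, TYPE2_MI_CODES))  # 5
```

Even the broadened query is only a patch: I24.8 and I21.9 also capture conditions other than type 2 MI, which is exactly why the codes cannot cleanly distinguish MI type.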

While the concept of acute MI applies when the stated date of onset is less than 8 weeks in ICD-9,4 the window changes to 4 weeks in ICD-10. “Acute” can reference an initial or a subsequent MI in ICD-10, but it does not define the time frame of the MI.5 This differs from ICD-9, where “subsequent” refers to a “subsequent episode of care.”

On the surface, these variations may not seem significant. However, the discriminatory efforts to better define a patient’s clinical condition using the new definitions may be diluted by the challenges of the coding process. The implications for the comparability of quality metrics and reporting are not to be underestimated and need to be assessed at a national level.

References
  1. Tehrani DM, Seto AH. Third universal definition of myocardial infarction: update, caveats, differential diagnoses. Cleve Clin J Med 2013; 80:777–786.
  2. Thygesen K, Alpert JS, Jaffe AS, et al. Third universal definition of myocardial infarction. J Am Coll Cardiol 2012; 60:1581–1598.
  3. Alexandrescu R, Bottle A, Jarman B, Aylin P. Current ICD-10 codes are insufficient to clearly distinguish acute myocardial infarction type: a descriptive study. BMC Health Serv Res 2013; 13:468.
  4. ICD-9-CM Addenda, Conversion Table, and Guidelines. www.cdc.gov
  5. WEDI Strategic National Implementation Process (SNIP). Acute Myocardial Infarction Issue Brief. www.wedi.org. Accessed February 3, 2014.
Author and Disclosure Information

Samer Antonios, MD
Via Christi Health, Kansas; Assistant Professor, Department of Internal Medicine, University of Kansas—Wichita

Issue
Cleveland Clinic Journal of Medicine - 81(3)
Page Number
139, 144

To the Editor: In the December 2013 Cleveland Clinic Journal of Medicine, Tehrani and Seto provide a review of the updated definitions of myocardial infarction (MI).1 A key concept incorporated into the structured definitions is that cardiac biomarkers must be interpreted in a clinical context.2 This in turn helps better align the laboratory and clinical findings with the pathophysiologic processes.

However, there is another dimension to the definitions that is sometimes overlooked and requires careful attention: translation of the definitions into codes and comparable databases. Accurate and consistent coding according to the International Statistical Classification of Diseases, Ninth Revision (ICD-9), and the Tenth Revision (ICD-10) is vital to the appropriate analysis of data, research, quality measurement, and reimbursement of services related to MI. Unfortunately, there is no straightforward translation of the definitions into ICD-9 codes, and the challenge is compounded when it comes to ICD-10, which will be implemented in October 2014.

The ICD-10-CM Index to Diseases does not yet recognize this nomenclature. ST-elevation MI is the default for the unspecified term "acute MI." Non-ST-elevation MI requires more explicit documentation and is classified according to whether it occurs during or after a variety of procedures. Type 2 MI is particularly challenging because there are several possible ways to code the condition: for example, as acute subendocardial MI (I21.4), demand ischemia (I24.8), or acute MI, unspecified (I21.9). Coding guidelines are meant to standardize the approach to coding these conditions, but there is no guarantee that the comparability of the data will withstand biases in code assignment. Although perfect precision in capturing disease by coding may be unattainable, other clinical conditions correlate better with their coding classifications; the stages of chronic kidney disease, for example, map from stage 1 through end-stage renal disease (N18.1 through N18.6). Furthermore, ICD-10 codes are insufficient to clearly distinguish the type of acute MI.3

While the concept of acute MI applies in ICD-9 when the stated date of onset is less than 8 weeks earlier,4 the time frame changes to 4 weeks in ICD-10. In ICD-10, "acute" can reference an initial or a subsequent MI, but it does not define the time frame of the MI.5 This differs from ICD-9, in which the concept of "subsequent" refers to a "subsequent episode of care."

On the surface, these variations may not seem significant. However, the finer discrimination of a patient's clinical condition that the new definitions afford may be diluted by the challenges of the coding process. The implications for the comparability of quality metrics and reporting should not be underestimated and need to be assessed at the national level.

References
  1. Tehrani DM, Seto AH. Third universal definition of myocardial infarction: update, caveats, differential diagnoses. Cleve Clin J Med 2013; 80:777–786.
  2. Thygesen K, Alpert JS, Jaffe AS, et al. Third universal definition of myocardial infarction. J Am Coll Cardiol 2012; 60:1581–1598.
  3. Alexandrescu R, Bottle A, Jarman B, Aylin P. Current ICD10 codes are insufficient to clearly distinguish acute myocardial infarction type: a descriptive study. BMC Health Serv Res 2013; 13:468.
  4. ICD-9-CM Addenda, Conversion Table, and Guidelines. www.cdc.gov
  5. WEDI Strategic National Implementation Process (SNIP). Acute Myocardial Infarction Issue Brief. www.wedi.org. Accessed February 3, 2014.
Display Headline
Problems with myocardial infarction definitions