Smartphones feasible modality for collecting data in bipolar disorders
Smartphone surveys of mood and social stress might be useful in identifying mental changes in bipolar disorder patients, according to a pilot feasibility study by Stefani Schwartz of the department of psychiatry at Pennsylvania State University, Hershey, and her associates.
Ten bipolar disorder patients and 10 healthy controls recruited for the study were given smartphones and asked to complete surveys of mood and social stress twice a day, at random times, for 14 days. The surveys included a visual analog scale to record ratings of mood, energy, speed of thoughts, and impulsivity, in which participants could choose any point along a scale of 0-100 by moving a sliding marker; and a Likert scale to measure social stress. For this part, participants reported whether they were with others and whether they would rather be alone.
Completion rates were similar between the groups: a median of 95% in the bipolar disorder group and 88% in the healthy control group (P = 0.68). Median 14-day mean scores for mood and energy were significantly lower in the bipolar disorder group, while speed of thoughts, impulsivity, and social stress did not differ significantly between the groups. Median 14-day range scores for mood, speed of thoughts, and impulsivity did differ significantly from those of the healthy controls, while energy and social stress did not.
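As a rough illustration of how twice-daily ratings reduce to the 14-day mean and 14-day range statistics compared above, here is a minimal sketch in Python. The record layout and the example ratings are hypothetical; the study's actual analysis pipeline is not described in this summary.

```python
from statistics import mean, median

def summarize_participant(ratings):
    """Collapse one participant's twice-daily VAS ratings (0-100, 28 prompts
    over 14 days) into the two per-measure statistics reported in the study:
    the 14-day mean and the 14-day range (max minus min)."""
    return {"mean": mean(ratings), "range": max(ratings) - min(ratings)}

def group_medians(participants):
    """Median of the per-participant summaries; these medians are what the
    article compares between the bipolar disorder and control groups."""
    summaries = [summarize_participant(r) for r in participants]
    return {
        "median_of_means": median(s["mean"] for s in summaries),
        "median_of_ranges": median(s["range"] for s in summaries),
    }

# Hypothetical example: two participants, 28 mood ratings each.
example_group = [
    [55, 60, 58, 62] * 7,
    [40, 70, 45, 65] * 7,
]
print(group_medians(example_group))
```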
Prolonged monitoring might be required to detect prodromal symptoms of an impending major episode among patients with bipolar disorder, the authors wrote. The findings also are preliminary because of several factors, including the small sample size. Nevertheless, the techniques used in this study “could be investigated in subjects in different treatment settings to explore the sensitivity of detection of changes in symptoms,” the investigators wrote.
Read the article in the Journal of Affective Disorders (http://dx.doi.org/10.1016/j.jad.2015.11.013).
Great day, new tanning bed restrictions proposed
It’s a great day for our patients! The Food and Drug Administration has proposed new tanning bed restrictions that would bar those under 18 years old from using tanning beds and would require adults to sign a written acknowledgment certifying that they have been warned of the risks of tanning beds. These are proposed rules, so it is important for you to write and support them (or ask for even more). The proposed rule is available for review, and comments are being accepted through March 21, 2016, at www.regulations.gov.
The American Academy of Dermatology and numerous state dermatology societies have been advocating for such restrictions for many years.
The real question is why it has taken so long. At a meeting almost 6 years ago, an FDA advisory committee agreed that these devices were hazardous and recommended greater restrictions. The answer is multifactorial and long, but it is an important one for dermatologists, who have been patient advocates on this issue for many years.
First, we had to establish a scientific basis for our efforts. Many contributed to this, from establishing the basic science of ultraviolet injury, to demonstrating ultraviolet carcinogenesis in an animal model (a fish, no less!), to publishing compelling numbers confirming the epidemic of skin cancer (for which I will take some credit). Once the literature was in place, we had the good fortune to have a dermatologist (a former resident of mine and good friend), Dr. Boris Lushniak, serving as acting surgeon general; he brought the Centers for Disease Control and Prevention on board and declared ultraviolet radiation exposure a national health crisis.
During this time, the tanning industry was not idle. It was not highly organized at first and was caught off guard by the imposition of a federal tanning tax several years ago. Since then, the industry has become more organized, hired lobbyists (reportedly the same ones who represented big tobacco), and petitioned Congress for relief as small businesses. Tanning salons claim they provide something that is healthful (vitamin D is good, right?), are small businesses, and sell something that people can get by walking outside, but in a timelier manner. Never mind that they are clustered around high school and college campuses and sell their carcinogenic wares to unsuspecting teenage girls. I remember arguing with a tanning lobbyist at a state hearing who claimed that dermatologists were trying to line their pockets because we sell ultraviolet radiation in our offices.
There are powerful social drivers behind tanning bed use. A tan in our office cubicle-based workforce implies wealth and success, since no one really has time to sit around a pool and cultivate one. The young are healthy and indestructible and haven’t seen, or won’t believe, the carnage wrought by skin cancer. Some do buy the pictures of resultant wrinkling. The good news is that tanning is nowhere near as addictive as tobacco and should be easier to vanquish. As the data continue to roll in, and as more movie stars go under the knife for skin cancer, the momentum builds. We are making progress, and we will continue the campaign because prevention efforts will save more anguish and lives than an army of dermatologists.
It has taken 50 years to turn the tide on cigarette smoking, and with a similarly long cancer latency, turning the tide on tanning will take persistent effort. The problem, however, has been identified, the solution is obvious, and an ultimate victory for our patients is inevitable. Dermatologists everywhere should take great pride in this victory at the FDA. Remember, tanning is the new tobacco!
Dr. Coldiron is a past president of the American Academy of Dermatology. He is currently in private practice, but maintains a clinical assistant professorship at the University of Cincinnati. He cares for patients, teaches medical students and residents, and has several active clinical research projects. Dr. Coldiron is the author of more than 80 scientific letters, papers, and several book chapters, and he speaks frequently on a variety of topics.
Hospital Medicine 2016 Returns to San Diego
A great thing about San Diego is that the weather there is to a meteorologist as the common cold is to a doctor: not too challenging. If you guess mild and sunny, you won’t be far off.
Read more about the new tracks and speakers at HM16.
The HM16 program, on the other hand, might be a challenge. There’s a lot to choose from. The latest in clinical care, technology, practice management, building better relationships with patients—it all will be covered and then some. TH
A New Schedule Could Be Better for Your Hospitalist Group
Present “hospitalist” in a word association exercise to a wide range of healthcare personnel in clinical and administrative roles, and many would instantly respond with “seven-on/seven-off schedule.”
Some numbers from SHM’s 2014 State of Hospital Medicine report:
- 53.8%: Portion of hospitalist groups using a seven-on/seven-off schedule.
- 182: Median number of shifts worked annually by a full-time hospitalist (standard contract hours, does not include “extra” shifts).
- 65%: Portion of groups having day shifts that are 12.0–13.9 hours in length.
These numbers suggest to me that, at least outside of academia, the standard hospitalist is working 12-hour shifts on a seven-on/seven-off schedule. And that mirrors my experience working on-site with hundreds of hospitalist groups across the country.
In other words, the hospitalist marketplace has spoken unambiguously regarding its favored work schedule. In the same way that devotion to social media is a defining characteristic of Millennials and cramped seats are a defining feature of air travel, this work schedule has become a defining feature of hospitalist practice.
Schedule Benefits? Many …
There is a reason for its popularity: It is simple to understand and operationalize (in many cases, it might take a clerk or administrator only a few hours to create a group’s work schedule for a whole year), it provides for good hospitalist-patient continuity, and having every other week off is often cited as a principal reason for becoming a hospitalist. Many hospitalist groups have followed this schedule for a decade or longer, and while they might have periodically discussed moving to an entirely different model, most have stuck with what they know.
I’m convinced this schedule will be around for many years to come.
Not Ideal in All Respects
Despite this schedule’s popularity, I regularly talk with hospitalists who say it has become very stressful and monotonous. They say they would really like to change to something else but feel stuck by the complexity of alternative models and the difficulty achieving consensus within the group regarding what model offers enough advantages—and acceptable costs—to be worth it.
They cite as shortcomings of the seven-on/seven-off schedule:
- It can be a Herculean task to alter the schedule to arrange a day or two off during the regularly scheduled week. They often give up on the effort, and over time, this can lead to some resentment toward their work.
- There is a tendency to adopt a systole-diastole lifestyle, with no activities other than work during the week on (e.g., no trips to the gym, dinners out with family, etc.) and an effort to move all of these into the week off. They’ll say, “What other profession requires one to shut down their personal life for seven days every other week?”
- It can be difficult to reliably use the seven days off productively. Sometimes it might be better to return to work after only two to four days off in exchange for the ability, at other times, to arrange more than seven consecutive days off.
- The “switch day” can be difficult for the hospital. Such schedules nearly always are arranged so that all the doctors conclude seven days of work on the same day and are replaced by others the following day. Every hospitalist patient (typically more than half of all patients in the hospital) gets a new doctor on the same day, and the whole hospital runs less efficiently as a result.
Change Your Schedule?
Who am I kidding? Few groups, probably none to be precise, are likely to change their schedule as a result of reading this column. But I’m among what seems to be a small contingent who believe alternative schedules can work. Whether your group decides to pursue a different model should be entirely up to its members, but it is worthwhile to periodically discuss the costs and benefits of your current schedule as well as what other options might be practical. In most cases the discussion will conclude without any significant change, but discussing it periodically might turn up worthwhile small adjustments.
But if your group is ready to make a meaningful change away from a rigid seven-on/seven-off schedule, the first step could be to vary the number of days off. No longer would all in the group switch on the same day; only one doctor would switch at a time (unless there are more than seven day shifts), and that could occur on any day of the week.
To illustrate, let’s say you’re in a group with four day shifts. For this week, Dr. Plant might start Monday after four days off, Dr. Bonham has had 11 days off and starts Tuesday, Dr. Page starts Friday after nine days off, and Dr. Jones starts Saturday after six days off. Each will work seven consecutive day shifts, and the number of off days will vary depending on their own wishes and the needs of the group. This is much more complicated to schedule, but varying the switch day and number of days off between weeks can be good for work-life balance.
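For readers who want to experiment with this kind of staggered coverage, here is a minimal Python sketch of the idea. The doctor names and start dates mirror the hypothetical example above; the block length and the simple calendar expansion are assumptions for illustration, not a description of any particular group's scheduling software.

```python
from datetime import date, timedelta
from collections import defaultdict

BLOCK_LENGTH = 7  # each doctor still works seven consecutive day shifts

def expand_blocks(block_starts):
    """block_starts: list of (doctor, first_day) tuples, one per block.
    Returns a calendar mapping each date to the doctors on duty that day."""
    calendar = defaultdict(list)
    for doctor, first_day in block_starts:
        for offset in range(BLOCK_LENGTH):
            calendar[first_day + timedelta(days=offset)].append(doctor)
    return calendar

# The staggered starts from the example above (Feb 1, 2016, was a Monday).
# In a full schedule, blocks begun the prior week would cover the early days,
# keeping daily staffing at four.
week = [
    ("Dr. Plant", date(2016, 2, 1)),   # Monday, after four days off
    ("Dr. Bonham", date(2016, 2, 2)),  # Tuesday, after 11 days off
    ("Dr. Page", date(2016, 2, 5)),    # Friday, after nine days off
    ("Dr. Jones", date(2016, 2, 6)),   # Saturday, after six days off
]
for day, doctors in sorted(expand_blocks(week).items()):
    print(day, sorted(doctors))
```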
Some will quickly identify difficulties, such as how to get the kids’ nanny to match a varying work schedule like this. I know many hospitalists who have done this successfully and are glad they did, but I’m sure there are also many for whom changing to a schedule like this might require moving from their current terrific childcare arrangements to a new one, something that they (justifiably) are unwilling to do.
And if your group successfully moves to a seven-on/X-off schedule (i.e., varied number of days off), you could next think about varying the number of consecutive days worked. Maybe it could range from no fewer than five or six (to preserve reasonable continuity) to as many as 10 or 11 as long as you have the stamina.
I don’t have research proving this would be a better schedule. But my own career, and the experiences of a number of others I’ve spoken with, is enough to convince me it’s worth considering. TH
Fewer doses of malaria drug just as effective
A trial in African children suggests that 3 doses of artesunate can be just as effective as 5 doses for treating severe malaria.
A 3-dose intramuscular (IM) artesunate regimen proved noninferior to 5 doses of IM artesunate.
However, a 3-dose intravenous (IV) artesunate regimen was not as effective.
Peter Kremsner, MD, of Eberhard Karls Universität Tübingen in Germany, and his colleagues reported these results in PLOS Medicine.
The World Health Organization recommends that patients with severe malaria be given a 5-dose regimen of IV or IM artesunate at the time of admission (0 hours) and at 12, 24, 48, and 72 hours. However, in resource-limited settings, administering 5 doses on schedule can be challenging.
So Dr Kremsner and his colleagues wanted to determine if 3 doses of artesunate would be just as effective. They conducted an open-label, randomized, controlled trial investigating the efficacy of 3-dose IV or IM artesunate at 0, 24, and 48 hours.
The researchers enrolled 1047 children (0.5 to 10 years of age) who had severe malaria and were treated at 7 sites in 5 African countries. The children were randomized to receive a total artesunate dose of 12 mg/kg, given either as the control regimen of 5 IM injections (n=348) or as 3 injections of 4 mg/kg administered IM (n=348) or IV (n=351).
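The dose arithmetic works out the same either way. The Python sketch below illustrates the two schedules; the equal 2.4 mg/kg split for the 5-dose arm is an assumption for illustration, since the article reports only the 12 mg/kg total for that regimen.

```python
# Hypothetical dose-schedule arithmetic for the two regimens described above.
# The 2.4 mg/kg per dose for the control arm is an assumed equal split of the
# reported 12 mg/kg total; the 3-dose arms use the stated 4 mg/kg per dose.
REGIMENS = {
    "5-dose IM (control)": {"hours": [0, 12, 24, 48, 72], "mg_per_kg": 12 / 5},
    "3-dose IM or IV":     {"hours": [0, 24, 48],         "mg_per_kg": 4.0},
}

def dose_plan(weight_kg, regimen_name):
    """Return (hour, dose in mg) pairs for a given body weight."""
    regimen = REGIMENS[regimen_name]
    return [(h, round(regimen["mg_per_kg"] * weight_kg, 1)) for h in regimen["hours"]]

# Example: a 15 kg child receives 180 mg in total under either regimen.
for name in REGIMENS:
    print(name, dose_plan(15, name))
```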
Of these children, 1002 received treatment per-protocol—331 in the 5-dose group, 338 in the 3-dose IM group, and 333 in the 3-dose IV group.
Seventy-eight percent of patients in the 3-dose IM group had about 99% parasite clearance at 24 hours, as did 79% of patients in the 5-dose IM group, a result that met a preset criterion for noninferiority (P=0.02).
However, the 3-dose IV regimen did not meet the noninferiority criterion. Seventy-four percent of these children had about 99% parasite clearance at 24 hours (P=0.24).
Twenty-two percent of the entire study population developed delayed anemia, but there was no difference in the incidence of this adverse event between the treatment arms.
The researchers said further studies are needed to clarify whether treatment with artesunate or the malaria infection itself was responsible for the delayed anemia, and that patients receiving the drug should be monitored for this complication.
The team also said their findings suggest a 3-dose IM artesunate regimen can be effective against severe malaria in children, but this study did have limitations. For instance, due to practical constraints, the primary endpoint was parasite clearance at 24 hours rather than survival.
Drug gets priority review as CLL treatment
Despite previous safety concerns, the US Food and Drug Administration (FDA) has granted priority review for the BCL-2 inhibitor venetoclax.
The FDA is reviewing the drug as a potential treatment for patients with chronic lymphocytic leukemia (CLL), including those with 17p deletion, who have received at least 1 prior therapy.
A priority review designation is granted to drugs thought to have the potential to provide significant improvements in the treatment, prevention, or diagnosis of a disease.
The designation means the FDA’s goal is to take action on a drug application within 6 months, compared to 10 months under standard review.
Venetoclax has proven active against CLL and other hematologic malignancies, but it is known to induce tumor lysis syndrome (TLS). In fact, TLS-related deaths temporarily halted enrollment in trials of venetoclax. But researchers discovered ways to reduce the risk of TLS, and the trials continued.
Venetoclax received breakthrough therapy designation from the FDA last year for the treatment of patients with relapsed or refractory CLL and 17p deletion. This designation is designed to expedite the development and review of medicines intended to treat serious or life-threatening diseases.
The new drug application for venetoclax is based, in part, on data from the phase 2 M13-982 study, which were just presented at the 2015 ASH Annual Meeting.
Phase 2 trial
M13-982 is an open-label, single-arm, multicenter study in which researchers are evaluating the efficacy and safety of venetoclax in patients with relapsed, refractory, or previously untreated CLL with 17p deletion.
The study included 107 patients with relapsed or refractory disease, and all but 1 had 17p deletion. An additional 50 patients with relapsed, refractory, or previously untreated disease have been enrolled in the safety expansion cohort.
The primary endpoint of the study is overall response rate as determined by an independent review committee, and secondary endpoints include complete response, partial response, duration of response, progression-free survival, and overall survival. The level of minimal residual disease (MRD) in peripheral blood and/or bone marrow was assessed in a subset of patients.
The study met its primary endpoint, with an overall response rate of 79.4% among the 107 patients with relapsed or refractory disease. In addition, 7.5% of patients achieved a complete response, with or without complete recovery of blood counts in the bone marrow.
Forty-five patients had an assessment for MRD in the blood. Of these, 18 patients achieved MRD-negativity. Ten of these 18 patients also had bone marrow assessments, and 6 were MRD-negative.
At 1 year, 84.7% of all responses and 94.4% of MRD-negative responses were maintained. The 1-year progression-free survival and overall survival rates were 72% and 86.7%, respectively.
The most common serious adverse events were pyrexia (7%), autoimmune hemolytic anemia (7%), pneumonia (6%), and febrile neutropenia (5%). The most common grade 3-4 adverse events were neutropenia (40%), infection (20%), anemia (18%), and thrombocytopenia (15%).
Laboratory TLS was reported in 5 patients, but none of these events had clinical consequences.
Venetoclax is under development by AbbVie and Genentech/Roche.
PBSCs used to treat BMF despite drawbacks
Although studies have suggested that peripheral blood stem cells (PBSCs) are not the ideal graft source for patients with bone marrow failure (BMF), new research suggests transplant centers worldwide are still using PBSCs in these patients.
The study included data on more than 3000 hematopoietic stem cell transplants (HSCTs) performed in patients with BMF.
The numbers revealed that PBSCs were used most often in the Asia-Pacific region, Africa, and the Eastern Mediterranean region, but they were also used in Europe and the Americas.
Ayami Yoshimi, MD, PhD, of the University of Freiburg, Germany, and colleagues disclosed these findings in a letter to JAMA.
The researchers noted that bone marrow is the recommended graft source for HSCT in patients with BMF, as studies have shown that PBSCs are associated with higher rates of graft-vs-host disease and lower rates of survival.
With this in mind, the team examined the graft sources used in patients with BMF who underwent HSCTs from 2009 through 2010. The researchers looked at 194 World Health Organization member states and found that 74 had reported at least 1 HSCT during that time period.
Of the 114,217 transplants performed, there were 3282 allogeneic HSCTs in patients with BMF. Overall, the most-used graft source was bone marrow (54%), followed by PBSCs (41%), and then cord blood (5%).
Bone marrow was used most commonly in the Americas (75%) and in Europe (60%) but not in the Eastern Mediterranean region and Africa (46%) or in the Asia-Pacific region (41%; excluding Japan, 19%).
The researchers also looked at graft source according to donor type, both overall and by region, but they excluded the 180 cord blood transplants from this analysis.
The team found that, among patients who had a related donor, 57% received bone marrow and 43% received PBSCs.
For related HSCTs in the Americas, 75% of patients received bone marrow and 25% received PBSCs. In Europe, 63% received bone marrow and 37% received PBSCs. In the Eastern Mediterranean and Africa, 47% received bone marrow and 53% received PBSCs. And in the Asia-Pacific region, 37% received bone marrow and 63% received PBSCs.
Among patients who had unrelated donors, 57% received bone marrow and 43% received PBSCs.
For unrelated HSCTs in the Americas, 74% of patients received bone marrow and 26% received PBSCs. In Europe, 56% received bone marrow and 44% received PBSCs. In the Asia-Pacific region, 47% received bone marrow and 53% received PBSCs. And in the Eastern Mediterranean and Africa, 100% received PBSCs.
The use of bone marrow increased from 20% in countries with low and low-middle incomes to 50% in countries with high-middle incomes and 64% in countries with high incomes (P<0.001). There was a significant association between gross national income per capita and stem cell source (P=0.002).
The researchers speculated that PBSCs are still used in BMF patients, despite the disadvantages, because transplant centers routinely obtain PBSCs for other indications, cell separators are available at any transplant center, and PBSC transplants can be performed at a lower cost than bone marrow transplants.
The team said the association between graft source and income supports the idea that short-term financial considerations are important.
Therefore, the researchers said, transplant organizations and authorities should help foster regionally accredited bone marrow harvest centers for patients with nonmalignant disorders, and unrelated donor registries should provide information on the necessity of bone marrow donation for patients with BMF.
Team identifies new mechanism of megakaryocyte differentiation
Investigators have discovered a new mechanism of megakaryocyte differentiation, according to a paper published in eLife.
They found that overexpression of the methyltransferase enzyme PRMT1 in acute megakaryocytic leukemia blocks megakaryocyte differentiation by downregulating levels of the RNA-binding protein RBM15.
The team therefore believes that targeting PRMT1 could restore megakaryocyte differentiation in this malignancy.
They also think their findings could lead to new approaches for researching and treating other hematologic malignancies and solid tumors.
Xinyang Zhao, PhD, of the University of Alabama at Birmingham, and his colleagues began this study looking at PRMT1, which attaches a methyl group onto specific arginine amino acid residues of target proteins.
The investigators screened for proteins that were tagged with methyl groups by PRMT1 and selected one of them—RBM15—for further study. RBM15 was of interest because a mutant fusion of RBM15 and MKL1 proteins is associated with acute megakaryoblastic leukemia.
The team discovered that when a cell’s PRMT1 levels are high, a greater proportion of RBM15 is tagged with methyl groups on certain arginine residues. This tagging prompts an E3 ubiquitin ligase called CNOT4 to add a second tag, ubiquitin, which targets the protein for destruction by the cell’s protein-disposal machinery.
The methyl-tagged RBM15 proteins rapidly disappear, even though the amount of RBM15 messenger RNA does not change. Thus, the expression levels of PRMT1 inversely affect the amount of RBM15.
When the concentration of RBM15 is low, megakaryocytic progenitor cells cannot move forward to differentiation. But when the concentration of RBM15 is high enough, the progenitor cells differentiate into mature megakaryocytes.
The investigators also found that RBM15 binds to intron regions of the pre-messenger RNA for genes known to be important in megakaryocyte differentiation, including 3 transcription factors—RUNX1, GATA1, and TAL1—that are important for normal and abnormal hematopoiesis.
And RBM15 appears to recruit the splicing factor SF3B1 to correctly splice exons. When RBM15 is low, one or more exons are not correctly spliced.
The team said this is a new mechanism for cell differentiation, initiated by methylation of RNA-binding proteins.
“The regulation of alternative splicing by RBM15 through SF3B1 is an exciting and novel pathway that clearly participates in the decision of a megakaryocyte to grow or differentiate,” said John Crispino, PhD, of the Northwestern University Feinberg School of Medicine in Chicago, Illinois, who was not involved in this study.
“These findings suggest that modulation of RBM15 activity by suppressing PRMT1 activity may change the splicing pattern of megakaryocytic tumor cells and facilitate their differentiation.”
The investigators also believe RBM15 may have broader functions in cells. They found that RBM15 binds directly to the pre-messenger RNA of 1257 genes. Among them are genes involved in metabolic regulation.
Consistent with this finding, the team discovered that overexpression of PRMT1 or reduced expression of RBM15 enhances the production of mitochondria.
The investigators have further identified metabolic pathways regulated by PRMT1 in leukemia cells. They said these data, which are part of a manuscript in preparation, will further link tumorigenesis to metabolic pathways.
The team also noted that SF3B1 contains mutations in more than 70% of myelodysplastic syndrome patients and 20% of chronic lymphocytic leukemia patients, and mutated SF3B1 appears in other hematologic malignancies as well.
So the investigators believe that understanding the PRMT1-RBM15 axis can shed new light on SF3B1-mutated hematologic malignancies and may lead to targeting PRMT1 as a novel therapy for myelodysplastic syndromes. The team is already testing PRMT1 inhibitors.
Impact of Pneumonia Guidelines
Overutilization of resources is a significant, yet underappreciated, problem in medicine. Many interventions target underutilization (eg, immunizations) or misuse (eg, antibiotic prescribing for viral pharyngitis), yet overutilization remains a significant contributor to healthcare waste.[1] In an effort to reduce waste, the Choosing Wisely campaign created a work group to highlight areas of overutilization, specifically noting both diagnostic tests and therapies for common pediatric conditions with no proven benefit and possible harm to the patient.[2] Respiratory illnesses have been a target of many quality-improvement efforts, and pneumonia represents a common diagnosis in pediatrics.[3] The use of diagnostic testing for pneumonia is an area where care can be optimized and aligned with evidence.
Laboratory testing and diagnostic imaging are routinely used for the management of children with community‐acquired pneumonia (CAP). Several studies have documented substantial variability in the use of these resources for pneumonia management, with higher resource use associated with a higher chance of hospitalization after emergency department (ED) evaluation and a longer length of stay among those requiring hospitalization.[4, 5] This variation in diagnostic resource utilization has been attributed, at least in part, to a lack of consensus on the management of pneumonia. There is wide variability in diagnostic testing, and due to potential consequences for patients presenting with pneumonia, efforts to standardize care offer an opportunity to improve healthcare value.
In August 2011, the first national, evidence‐based consensus guidelines for the management of childhood CAP were published jointly by the Pediatric Infectious Diseases Society (PIDS) and the Infectious Diseases Society of America (IDSA).[6] A primary focus of these guidelines was the recommendation for the use of narrow spectrum antibiotics for the management of uncomplicated pneumonia. Previous studies have assessed the impact of the publication of the PIDS/IDSA guidelines on empiric antibiotic selection for the management of pneumonia.[7, 8] In addition, the guidelines provided recommendations regarding diagnostic test utilization, in particular discouraging blood tests (eg, complete blood counts) and radiologic studies for nontoxic, fully immunized children treated as outpatients, as well as repeat testing for children hospitalized with CAP who are improving.
Although single centers have demonstrated changes in utilization patterns based on clinical practice guidelines,[9, 10, 11, 12] whether these guidelines have impacted diagnostic test utilization among US children with CAP on a larger scale remains unknown. Therefore, we sought to determine the impact of the PIDS/IDSA guidelines on the use of diagnostic testing among children with CAP using a national sample of US children's hospitals. Because the guidelines discourage repeat diagnostic testing in patients who are improving, we also evaluated the association between repeat diagnostic studies and severity of illness.
METHODS
This retrospective cohort study used data from the Pediatric Health Information System (PHIS) (Children's Hospital Association, Overland Park, KS). The PHIS database contains deidentified administrative data, detailing demographic, diagnostic, procedure, and billing data from 47 freestanding, tertiary care children's hospitals. This database accounts for approximately 20% of all annual pediatric hospitalizations in the United States. Data quality is ensured through a joint effort between the Children's Hospital Association and participating hospitals.
Patient Population
Data from 32 (of the 47) hospitals included in PHIS with complete inpatient and ED data were used to evaluate hospital-level resource utilization for children 1 to 18 years of age discharged January 1, 2008 to June 30, 2014 with a diagnosis of pneumonia (International Classification of Diseases, 9th Revision [ICD-9] codes 480.x-486.x, 487.0).[13] Our goal was to identify previously healthy children with uncomplicated pneumonia, so we excluded patients with complex chronic conditions,[14] billing charges for intensive care management and/or a pleural drainage procedure (ICD-9 codes 510.0, 510.9, 511.0, 511.1, 511.8, 511.9, 513.x) on the day of admission or the next day, or a prior pneumonia admission in the last 30 days. We studied 2 mutually exclusive populations: children with pneumonia treated in the ED (ie, patients who were evaluated in the ED and discharged to home), and children hospitalized with pneumonia, including those admitted through the ED.
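To make the selection logic concrete, the following is a minimal sketch of how these inclusion and exclusion rules could be applied to encounter-level data. It is not the authors' code, and the DataFrame and column names (encounters, icd9_dx, age_years, has_complex_chronic_condition, icu_or_drainage_day0_1, pneumonia_admit_prior_30d, encounter_type) are assumptions for illustration.

```python
# Hypothetical sketch of the cohort-selection rules described above.
# All column names are assumptions, not PHIS field names.
import pandas as pd

PNEUMONIA_PREFIXES = tuple(str(code) for code in range(480, 487))  # ICD-9 480.x-486.x

def has_pneumonia_dx(dx: str) -> bool:
    """True for ICD-9 pneumonia codes 480.x-486.x or 487.0."""
    return dx.startswith(PNEUMONIA_PREFIXES) or dx == "487.0"

def build_study_cohorts(encounters: pd.DataFrame):
    eligible = encounters[
        encounters["icd9_dx"].map(has_pneumonia_dx)
        & encounters["age_years"].between(1, 18)
        & ~encounters["has_complex_chronic_condition"]   # exclude complex chronic conditions
        & ~encounters["icu_or_drainage_day0_1"]          # exclude ICU care/pleural drainage on day 0-1
        & ~encounters["pneumonia_admit_prior_30d"]       # exclude pneumonia admission in prior 30 days
    ]
    # Two mutually exclusive populations: ED-only (discharged home) and inpatient
    ed_only = eligible[eligible["encounter_type"] == "ED"]
    inpatient = eligible[eligible["encounter_type"] == "inpatient"]
    return ed_only, inpatient
```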
Guideline Publication and Study Periods
For an exploratory before and after comparison, patients were grouped into 2 cohorts based on a guideline online publication date of August 1, 2011: preguideline (January 1, 2008 to July 31, 2011) and postguideline (August 1, 2011 to June 30, 2014).
Study Outcomes
The measured outcomes were the monthly proportion of pneumonia patients for whom specific diagnostic tests were performed, as determined from billing data. The diagnostic tests evaluated were complete blood count (CBC), blood culture, C‐reactive protein (CRP), and chest radiograph (CXR). Standardized costs were also calculated from PHIS charges as previously described to standardize the cost of the individual tests and remove interhospital cost variation.[3]
Relationship of Repeat Testing and Severity of Illness
Because higher illness severity and clinical deterioration may warrant repeat testing, we also explored the association of repeat diagnostic testing for inpatients with severity of illness by using the following variables as measures of severity: length of stay (LOS), transfer to intensive care unit (ICU), or pleural drainage procedure after admission (>2 calendar days after admission). Repeat diagnostic testing was stratified by number of tests.
Statistical Analysis
The categorical demographic characteristics of the pre- and postguideline populations were summarized using frequencies and percentages, and compared using χ2 tests. Continuous demographics were summarized with medians and interquartile ranges (IQRs) and compared with the Wilcoxon rank sum test. Segmented regression, clustered by hospital, was used to assess trends in monthly resource utilization as well as associated standardized costs before and after guideline publication. To estimate the impact of the guidelines overall, we compared the observed diagnostic resource use at the end of the study period with the expected use projected from trends in the preguideline period (ie, if there were no new guidelines). Individual interrupted time series were also built for each hospital. From these models, we assessed which hospitals had a significant difference between the rate observed at the end of the study and that estimated from their preguideline trajectory. To assess the relationship between the number of positive improvements at a hospital and hospital characteristics, we used Spearman's correlation and Kruskal-Wallis tests. All analyses were performed with SAS version 9.3 (SAS Institute, Inc., Cary, NC), and P values <0.05 were considered statistically significant. In accordance with the policies of the Cincinnati Children's Hospital Medical Center Institutional Review Board, this research, using a deidentified dataset, was not considered human subjects research.
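As a rough illustration of the segmented regression (interrupted time series) approach described above, the monthly testing proportion can be modeled with a preguideline trend, a level change at the guideline date, and a change in trend afterward, with standard errors clustered by hospital. This is a sketch under assumed variable names (month_index, post, months_post, prop_tested, hospital_id), not the authors' SAS code.

```python
# Hedged sketch of an interrupted time series on monthly test-utilization
# proportions (one row per hospital-month), clustered by hospital.
import pandas as pd
import statsmodels.formula.api as smf

def fit_its(monthly: pd.DataFrame):
    # month_index: months since January 2008; post: 1 for months on/after August 2011
    first_post = monthly.loc[monthly["post"] == 1, "month_index"].min()
    monthly = monthly.assign(
        months_post=(monthly["month_index"] - first_post).clip(lower=0)
    )
    model = smf.ols("prop_tested ~ month_index + post + months_post", data=monthly)
    return model.fit(cov_type="cluster", cov_kwds={"groups": monthly["hospital_id"]})

def observed_vs_expected(fit, last_month: int, first_post_month: int):
    """Project the preguideline level/trend to the study end ('expected') and add
    the level and trend changes to get the end-of-study estimate ('observed')."""
    b = fit.params
    expected = b["Intercept"] + b["month_index"] * last_month
    observed = expected + b["post"] + b["months_post"] * (last_month - first_post_month)
    return expected, observed
```

Comparing the observed estimate with the expected projection at the final study month mirrors the "with guideline" versus "without guideline" estimates reported in Table 2.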
RESULTS
There were 275,288 hospital admissions meeting study inclusion criteria of 1 to 18 years of age with a diagnosis of pneumonia from 2008 to 2014. Of these, 54,749 met exclusion criteria (1,874 had a pleural drainage procedure on day 0 or 1, 51,306 had complex chronic conditions, and 1,569 had been hospitalized with pneumonia in the last 30 days). Characteristics of the remaining 220,539 patients in the final sample are shown in Table 1. The median age was 4 years (IQR, 2–7 years); a majority of the children were male (53%) and had public insurance (58%). There were 128,855 patients in the preguideline period (January 1, 2008 to July 31, 2011) and 91,684 in the postguideline period (August 1, 2011 to June 30, 2014).
| | Overall | Preguideline | Postguideline | P |
|---|---|---|---|---|
| No. of discharges | 220,539 | 128,855 | 91,684 | |
| Type of encounter | | | | |
| ED only | 150,215 (68.1) | 88,790 (68.9) | 61,425 (67) | <0.001 |
| Inpatient | 70,324 (31.9) | 40,065 (31.1) | 30,259 (33) | |
| Age | | | | |
| 1–4 years | 129,360 (58.7) | 77,802 (60.4) | 51,558 (56.2) | <0.001 |
| 5–9 years | 58,609 (26.6) | 32,708 (25.4) | 25,901 (28.3) | |
| 10–18 years | 32,570 (14.8) | 18,345 (14.2) | 14,225 (15.5) | |
| Median [IQR] | 4 [2–7] | 3 [2–7] | 4 [2–7] | <0.001 |
| Gender | | | | |
| Male | 116,718 (52.9) | 68,319 (53) | 48,399 (52.8) | 0.285 |
| Female | 103,813 (47.1) | 60,532 (47) | 43,281 (47.2) | |
| Race | | | | |
| Non-Hispanic white | 84,423 (38.3) | 47,327 (36.7) | 37,096 (40.5) | <0.001 |
| Non-Hispanic black | 60,062 (27.2) | 35,870 (27.8) | 24,192 (26.4) | |
| Hispanic | 51,184 (23.2) | 31,167 (24.2) | 20,017 (21.8) | |
| Asian | 6,444 (2.9) | 3,691 (2.9) | 2,753 (3) | |
| Other | 18,426 (8.4) | 10,800 (8.4) | 7,626 (8.3) | |
| Payer | | | | |
| Government | 128,047 (58.1) | 70,742 (54.9) | 57,305 (62.5) | <0.001 |
| Private | 73,338 (33.3) | 44,410 (34.5) | 28,928 (31.6) | |
| Other | 19,154 (8.7) | 13,703 (10.6) | 5,451 (5.9) | |
| Disposition | | | | |
| HHS | 684 (0.3) | 411 (0.3) | 273 (0.3) | <0.001 |
| Home | 209,710 (95.1) | 123,236 (95.6) | 86,474 (94.3) | |
| Other | 9,749 (4.4) | 4,962 (3.9) | 4,787 (5.2) | |
| SNF | 396 (0.2) | 246 (0.2) | 150 (0.2) | |
| Season | | | | |
| Spring | 60,171 (27.3) | 36,709 (28.5) | 23,462 (25.6) | <0.001 |
| Summer | 29,891 (13.6) | 17,748 (13.8) | 12,143 (13.2) | |
| Fall | 52,161 (23.7) | 28,332 (22) | 23,829 (26) | |
| Winter | 78,316 (35.5) | 46,066 (35.8) | 32,250 (35.2) | |
| LOS | | | | |
| 1–3 days | 204,812 (92.9) | 119,497 (92.7) | 85,315 (93.1) | <0.001 |
| 4–6 days | 10,454 (4.7) | 6,148 (4.8) | 4,306 (4.7) | |
| 7+ days | 5,273 (2.4) | 3,210 (2.5) | 2,063 (2.3) | |
| Median [IQR] | 1 [1–1] | 1 [1–1] | 1 [1–1] | 0.144 |
| Admitted patients, median [IQR] | 2 [1–3] | 2 [1–3] | 2 [1–3] | <0.001 |
Discharged From the ED
Throughout the study, utilization of CBC, blood cultures, and CRP was <20%, whereas CXR use was >75%. In segmented regression analysis, CRP utilization was relatively stable before guideline publication. However, by the end of the study period, the projected estimate of CRP utilization without the guidelines (expected) was 2.9%, compared with 4.8% with the guidelines (observed) (P < 0.05) (Figure 1). A similar pattern of higher rates of diagnostic utilization after the guidelines, compared with projected estimates without the guidelines, was also seen in ED utilization of CBC, blood cultures, and CXR (Figure 1); however, these trends did not achieve statistical significance. Table 2 provides specific values. Using a standard cost of $19.52 for CRP testing, annual costs across all hospitals increased by $11,783 for ED evaluation of CAP.
| | Baseline (%) | Preguideline Trend | Level Change at Guideline | Change in Trend After Guideline | Estimate at End of Study, Without Guideline (%) | Estimate at End of Study, With Guideline (%) | P |
|---|---|---|---|---|---|---|---|
| ED-only encounters | | | | | | | |
| Blood culture | 14.6 | 0.1 | 0.8 | 0.1 | 5.5 | 8.6 | NS |
| CBC | 19.2 | 0.1 | 0.4 | 0.1 | 10.7 | 14.0 | NS |
| CRP | 5.4 | 0.0 | 0.6 | 0.1 | 2.9 | 4.8 | <0.05 |
| Chest x-ray | 85.4 | 0.1 | 0.1 | 0.0 | 80.9 | 81.1 | NS |
| Inpatient encounters | | | | | | | |
| Blood culture | 50.6 | 0.0 | 1.7 | 0.2 | 49.2 | 41.4 | <0.05 |
| Repeat blood culture | 6.5 | 0.0 | 1.0 | 0.1 | 8.9 | 5.8 | NS |
| CBC | 65.2 | 0.0 | 3.1 | 0.0 | 65.0 | 62.2 | NS |
| Repeat CBC | 23.4 | 0.0 | 4.2 | 0.0 | 20.8 | 16.0 | NS |
| CRP | 25.7 | 0.0 | 1.1 | 0.0 | 23.8 | 23.5 | NS |
| Repeat CRP | 12.5 | 0.1 | 2.2 | 0.1 | 7.1 | 7.3 | NS |
| Chest x-ray | 89.4 | 0.1 | 0.7 | 0.0 | 85.4 | 83.9 | NS |
| Repeat chest x-ray | 25.5 | 0.0 | 2.0 | 0.1 | 24.1 | 17.7 | <0.05 |
[Figure 1]
Inpatient Encounters
In the segmented regression analysis of children hospitalized with CAP, guideline publication was associated with changes in the monthly use of some diagnostic tests. For example, by the end of the study period, the use of blood culture was 41.4% (observed), whereas the projected estimated use in the absence of the guidelines was 49.2% (expected) (P < 0.05) (Figure 2). Table 2 includes the data for the other tests, CBC, CRP, and CXR, in which similar patterns are noted with lower utilization rates after the guidelines, compared with expected utilization rates without the guidelines; however, these trends did not achieve statistical significance. Evaluating the utilization of repeat testing for inpatients, only repeat CXR achieved statistical significance (P < 0.05), with utilization rates of 17.7% with the guidelines (actual) compared with 24.1% without the guidelines (predicted).
[Figure 2]
To better understand the use of repeat testing, a comparison of severity outcomes (LOS, ICU transfer, and pleural drainage procedures) was performed between patients with no repeat testing (70%) and patients with 1 or more repeat tests (30%). Patients with repeat testing had a longer LOS (no repeat testing, LOS 1 day [IQR, 1–2] vs 1 repeat test, LOS 3 days [IQR, 2–4] vs 2+ repeat tests, LOS 5 days [IQR, 3–8]), a higher rate of ICU transfer (no repeat testing, 4.6% vs 1 repeat test, 14.6% vs 2+ repeat tests, 35.6%), and a higher rate of pleural drainage (no repeat testing, 0% vs 1 repeat test, 0.1% vs 2+ repeat tests, 5.9%) (all P < 0.001).
Using standard costs of $37.57 for blood cultures and $73.28 for CXR, annual costs for children with CAP across all hospitals decreased by $91,512 due to decreased utilization of blood cultures, and by $146,840 due to decreased utilization of CXR.
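The dollar figures above follow directly from the rate differences and the standardized unit costs. A back-of-envelope sketch of that arithmetic is shown below; the unit costs come from the text, but the encounter volume used in the example is a placeholder, not a number reported in the study.

```python
# Back-of-envelope cost arithmetic: (observed rate - expected rate) x annual
# eligible encounters x standardized unit cost. Positive = added cost, negative = savings.
def annual_cost_delta(observed_rate: float, expected_rate: float,
                      annual_encounters: int, unit_cost: float) -> float:
    return (observed_rate - expected_rate) * annual_encounters * unit_cost

# Example with a hypothetical volume: ED CRP use of 4.8% observed vs 2.9% expected,
# at the $19.52 standardized cost per CRP test.
added_crp_cost = annual_cost_delta(0.048, 0.029, annual_encounters=25_000, unit_cost=19.52)
# ~= $9,272 with this assumed volume; the study's figure ($11,783) reflects its actual volumes.
```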
Hospital‐Level Variation in the Impact of the National Guideline
Figure 3 is a visual representation (heat map) of the impact of the guidelines at the hospital level at the end of the study, derived from the individual interrupted time series. The heat map shows wide variability between hospitals in the impact of the guidelines on each test in different settings (ED or inpatient). By diagnostic test, 7 hospitals significantly decreased utilization of blood cultures for inpatients, and 5 hospitals significantly decreased utilization of repeat blood cultures and repeat CXR. Correlations between the number of positive improvements at a hospital and region (P = 0.974), number of CAP cases (P = 0.731), and percentage of public insurance (P = 0.241) were all nonsignificant.
[Figure 3]
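A minimal sketch of the hospital-level association analysis described above is given here, assuming a per-hospital summary table (columns n_improvements, n_cap_cases, pct_public_insurance, region); it is illustrative only, not the authors' code.

```python
# Hypothetical sketch: relate each hospital's count of significant improvements
# to hospital characteristics (Spearman for continuous measures, Kruskal-Wallis for region).
import pandas as pd
from scipy.stats import spearmanr, kruskal

def hospital_level_associations(hospitals: pd.DataFrame) -> dict:
    return {
        "cap_cases": spearmanr(hospitals["n_improvements"], hospitals["n_cap_cases"]),
        "public_insurance": spearmanr(hospitals["n_improvements"],
                                      hospitals["pct_public_insurance"]),
        "region": kruskal(*[grp["n_improvements"].to_numpy()
                            for _, grp in hospitals.groupby("region")]),
    }
```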
DISCUSSION
This study complements previous assessments by evaluating the impact of the 2011 IDSA/PIDS consensus guidelines on the management of children with CAP cared for at US children's hospitals. Prior studies have shown increased use of narrow-spectrum antibiotics for children with CAP after the publication of these guidelines.[7] The current study focused on diagnostic testing for CAP before and after the publication of the 2011 guidelines. In the ED setting, use of some diagnostic tests (blood culture, CBC, CXR, CRP) was declining prior to guideline publication but appeared to plateau and/or increase after 2011. Among children admitted with CAP, use of diagnostic testing was relatively stable prior to 2011, and use of these tests (blood culture, CBC, CXR, CRP) declined after guideline publication. Overall, changes in diagnostic resource utilization 3 years after publication were modest, with few changes achieving statistical significance. There was large variability in the impact of the guidelines on test use between hospitals.
For outpatients, including those managed in the ED, the PIDS/IDSA guidelines recommend limited laboratory testing in nontoxic, fully immunized patients. The guidelines discourage the use of diagnostic testing among outpatients because of their low yield (eg, blood culture), and because test results may not impact management (eg, CBC).[6] In the years prior to guideline publication, there was already a declining trend in testing rates, including blood cultures, CBC, and CRP, for patients in the ED. After guideline publication, the rate of blood cultures, CBC, and CRP increased, but only the increase in CRP utilization achieved statistical significance. We would not expect utilization for common diagnostic tests (eg, CBC for outpatients with CAP) to be at or close to 0% because of the complexity of clinical decision making regarding admission that factors in aspects of patient history, exam findings, and underlying risk.[15] ED utilization of blood cultures was <10%, CBC <15%, and CRP <5% after guideline publication, which may represent the lowest testing limit that could be achieved.
CXRs obtained in the ED did not decrease over the entire study period. The rates of CXR use (close to 80%) seen in our study are similar to prior ED studies.[5, 16] Management of children with CAP in the ED might be different from outpatient primary care management because (1) unlike primary care providers, ED providers do not have an established relationship with their patients and do not have the opportunity for follow-up and serial exams, making them less likely to tolerate diagnostic uncertainty; and (2) ED providers may see sicker patients. However, use of CXR in the ED does represent an opportunity for further study to understand whether decreased utilization is feasible without adversely impacting clinical outcomes.
The CAP guidelines provide a strong recommendation to obtain blood cultures in moderate to severe pneumonia. Despite this, blood culture utilization declined after guideline publication. Less than 10% of children hospitalized with uncomplicated CAP have positive blood cultures, which calls into question the utility of blood cultures for all admitted patients.[17, 18, 19] The recent EPIC (Epidemiology of Pneumonia in the Community) study showed that a majority of children hospitalized with pneumonia do not have growth of bacteria in culture, but there may be a role for blood cultures in patients with a strong suspicion of complicated CAP or in patients with moderate to severe disease.[20] In addition to blood cultures, the guidelines also recommend CBC and CXR in moderately to severely ill children. The observed decline in CBC and CXR testing may be related to individual physician assessments of which patients are moderately to severely ill, as the guidelines do not recommend testing for children with less severe disease. Our exclusion of patients requiring intensive care management or pleural drainage on admission might have selected children with a milder course of illness, although still requiring admission.
The guidelines discourage repeat diagnostic testing among children hospitalized with CAP who are improving. In this study, repeat CXR and CBC occurred in approximately 20% of patients, but repeat blood culture and CRP testing was much less common. As with initial diagnostic testing for inpatients with CAP, the rates of some repeat testing decreased after the guidelines. However, patients with repeat testing had longer LOS and were more likely to require ICU transfer or a pleural drainage procedure than children without repeat testing. This suggests that repeat testing is used more often in children with a severe presentation or a worsening clinical course and is not done routinely on hospitalized patients.
The financial impact of decreased testing is modest because the tests themselves are relatively inexpensive. However, the lack of substantial cost savings should not preclude efforts to continue to improve adherence to the guidelines. Not only is increased testing associated with higher hospitalization rates,[5] potentially yielding higher costs and family stress, but it may also lead to patient discomfort and possibly increased radiation exposure through chest radiography.
Many of the diagnostic testing recommendations in the CAP guidelines are based on weak evidence, which may contribute to the lack of substantial adoption. Nevertheless, adherence to guideline recommendations requires sustained effort on the part of individual physicians that should be encouraged through institutional support.[21] Continuous education and clinical decision support, as well as reminders in the electronic medical record, would make guideline recommendations more visible and may help overcome the inertia of previous practice.[15] The hospital‐level heat map (Figure 3) included in this study demonstrates that the impact of the guidelines was variable across sites. Although a few sites had decreased diagnostic testing in many areas with no increased testing in any category, there were several sites that had no improvement in any diagnostic testing category. In addition, hospital‐level factors like size, geography, and insurance status were not associated with number of improvements. To better understand drivers of change at individual hospitals, future studies should evaluate specific strategies utilized by the rapid guideline adopters.
This study is subject to several limitations. The use of ICD-9 codes to identify patients with CAP may not capture all patients with this diagnosis; however, these codes have been previously validated.[13] Additionally, because patients were identified using ICD-9 coding assigned at the time of discharge, testing performed in the ED setting may not reflect care for a child with known pneumonia, but rather may reflect testing for a child with fever or other signs of infection. PHIS collects data from freestanding children's hospitals, which care for a majority of children with CAP in the US, but our findings may not be generalizable to other hospitals. In addition, we did not examine drivers of trends within individual institutions. We did not have detailed information to examine whether the PHIS hospitals in our study had actively worked to adopt the CAP guidelines. We were also unable to assess physicians' familiarity with the guidelines or the level of disagreement with the recommendations. Furthermore, the PHIS database does not permit detailed correlation of diagnostic testing with clinical parameters. In contrast to the diagnostic testing evaluated in this study, which is primarily discouraged by the IDSA/PIDS guidelines, respiratory viral testing for children with CAP is recommended but could not be evaluated, as data on such testing are not readily available in PHIS.
CONCLUSION
Publication of the IDSA/PIDS evidence‐based guidelines for the management of CAP was associated with modest, variable changes in use of diagnostic testing. Further adoption of the CAP guidelines should reduce variation in care and decrease unnecessary resource utilization in the management of CAP. Our study demonstrates that efforts to promote decreased resource utilization should target specific situations (eg, repeat testing for inpatients who are improving). Adherence to guidelines may be improved by the adoption of local practices that integrate and improve daily workflow, like order sets and clinical decision support tools.
Disclosure: Nothing to report.
REFERENCES
1. Eliminating waste in US health care. JAMA. 2012;307(14):1513–1516.
2. Choosing wisely in pediatric hospital medicine: five opportunities for improved healthcare value. J Hosp Med. 2013;8(9):479–485.
3. Pediatric Research in Inpatient Settings (PRIS) Network. Prioritization of comparative effectiveness research topics in hospital pediatrics. Arch Pediatr Adolesc Med. 2012;166(12):1155–1164.
4. Variability in processes of care and outcomes among children hospitalized with community-acquired pneumonia. Pediatr Infect Dis J. 2012;31(10):1036–1041.
5. Variation in emergency department diagnostic testing and disposition outcomes in pneumonia. Pediatrics. 2013;132(2):237–244.
6. Pediatric Infectious Diseases Society and the Infectious Diseases Society of America. The management of community-acquired pneumonia in infants and children older than 3 months of age: clinical practice guidelines by the Pediatric Infectious Diseases Society and the Infectious Diseases Society of America. Clin Infect Dis. 2011;53(7):e25–e76.
7. Impact of Infectious Diseases Society of America/Pediatric Infectious Diseases Society guidelines on treatment of community-acquired pneumonia in hospitalized children. Clin Infect Dis. 2014;58(6):834–838.
8. Antibiotic choice for children hospitalized with pneumonia and adherence to national guidelines. Pediatrics. 2015;136(1):44–52.
9. Quality improvement methods increase appropriate antibiotic prescribing for childhood pneumonia. Pediatrics. 2013;131(5):e1623–e1631.
10. Improvement methodology increases guideline recommended blood cultures in children with pneumonia. Pediatrics. 2015;135(4):e1052–e1059.
11. Impact of a guideline on management of children hospitalized with community-acquired pneumonia. Pediatrics. 2012;129(3):e597–e604.
12. Effectiveness of antimicrobial guidelines for community-acquired pneumonia in children. Pediatrics. 2012;129(5):e1326–e1333.
13. Identifying pediatric community-acquired pneumonia hospitalizations: accuracy of administrative billing codes. JAMA Pediatr. 2013;167(9):851–858.
14. Pediatric complex chronic conditions classification system version 2: updated for ICD-10 and complex medical technology dependence and transplantation. BMC Pediatr. 2014;14:199.
15. Establishing superior benchmarks of care in clinical practice: a proposal to drive achievable health care value. JAMA Pediatr. 2015;169(4):301–302.
16. Emergency department management of childhood pneumonia in the United States prior to publication of national guidelines. Acad Emerg Med. 2013;20(3):240–246.
17. Prevalence of bacteremia in hospitalized pediatric patients with community-acquired pneumonia. Pediatr Infect Dis J. 2013;32(7):736–740.
18. The prevalence of bacteremia in pediatric patients with community-acquired pneumonia: guidelines to reduce the frequency of obtaining blood cultures. Hosp Pediatr. 2013;3(2):92–96.
19. Do all children hospitalized with community-acquired pneumonia require blood cultures? Hosp Pediatr. 2013;3(2):177–179.
20. CDC EPIC Study Team. Community-acquired pneumonia requiring hospitalization among U.S. children. N Engl J Med. 2015;372(9):835–845.
21. Influence of hospital guidelines on management of children hospitalized with pneumonia. Pediatrics. 2012;130(5):e823–e830.
Overutilization of resources is a significant, yet underappreciated, problem in medicine. Many interventions target underutilization (eg, immunizations) or misuse (eg, antibiotic prescribing for viral pharyngitis), yet overutilization remains as a significant contributor to healthcare waste.[1] In an effort to reduce waste, the Choosing Wisely campaign created a work group to highlight areas of overutilization, specifically noting both diagnostic tests and therapies for common pediatric conditions with no proven benefit and possible harm to the patient.[2] Respiratory illnesses have been a target of many quality‐improvement efforts, and pneumonia represents a common diagnosis in pediatrics.[3] The use of diagnostic testing for pneumonia is an area where care can be optimized and aligned with evidence.
Laboratory testing and diagnostic imaging are routinely used for the management of children with community‐acquired pneumonia (CAP). Several studies have documented substantial variability in the use of these resources for pneumonia management, with higher resource use associated with a higher chance of hospitalization after emergency department (ED) evaluation and a longer length of stay among those requiring hospitalization.[4, 5] This variation in diagnostic resource utilization has been attributed, at least in part, to a lack of consensus on the management of pneumonia. There is wide variability in diagnostic testing, and due to potential consequences for patients presenting with pneumonia, efforts to standardize care offer an opportunity to improve healthcare value.
In August 2011, the first national, evidence‐based consensus guidelines for the management of childhood CAP were published jointly by the Pediatric Infectious Diseases Society (PIDS) and the Infectious Diseases Society of America (IDSA).[6] A primary focus of these guidelines was the recommendation for the use of narrow spectrum antibiotics for the management of uncomplicated pneumonia. Previous studies have assessed the impact of the publication of the PIDS/IDSA guidelines on empiric antibiotic selection for the management of pneumonia.[7, 8] In addition, the guidelines provided recommendations regarding diagnostic test utilization, in particular discouraging blood tests (eg, complete blood counts) and radiologic studies for nontoxic, fully immunized children treated as outpatients, as well as repeat testing for children hospitalized with CAP who are improving.
Although single centers have demonstrated changes in utilization patterns based on clinical practice guidelines,[9, 10, 11, 12] whether these guidelines have impacted diagnostic test utilization among US children with CAP in a larger scale remains unknown. Therefore, we sought to determine the impact of the PIDS/IDSA guidelines on the use of diagnostic testing among children with CAP using a national sample of US children's hospitals. Because the guidelines discourage repeat diagnostic testing in patients who are improving, we also evaluated the association between repeat diagnostic studies and severity of illness.
METHODS
This retrospective cohort study used data from the Pediatric Health Information System (PHIS) (Children's Hospital Association, Overland Park, KS). The PHIS database contains deidentified administrative data, detailing demographic, diagnostic, procedure, and billing data from 47 freestanding, tertiary care children's hospitals. This database accounts for approximately 20% of all annual pediatric hospitalizations in the United States. Data quality is ensured through a joint effort between the Children's Hospital Association and participating hospitals.
Patient Population
Data from 32 (of the 47) hospitals included in PHIS with complete inpatient and ED data were used to evaluate hospital‐level resource utilization for children 1 to 18 years of age discharged January 1, 2008 to June 30, 2014 with a diagnosis of pneumonia (International Classification of Diseases, 9th Revision [ICD‐9] codes 480.x‐486.x, 487.0).[13] Our goal was to identify previously healthy children with uncomplicated pneumonia, so we excluded patients with complex chronic conditions,[14] billing charges for intensive care management and/or pleural drainage procedure (IDC‐9 codes 510.0, 510.9, 511.0, 511.1, 511.8, 511.9, 513.x) on day of admission or the next day, or prior pneumonia admission in the last 30 days. We studied 2 mutually exclusive populations: children with pneumonia treated in the ED (ie, patients who were evaluated in the ED and discharged to home), and children hospitalized with pneumonia, including those admitted through the ED.
Guideline Publication and Study Periods
For an exploratory before and after comparison, patients were grouped into 2 cohorts based on a guideline online publication date of August 1, 2011: preguideline (January 1, 2008 to July 31, 2011) and postguideline (August 1, 2011 to June 30, 2014).
Study Outcomes
The measured outcomes were the monthly proportion of pneumonia patients for whom specific diagnostic tests were performed, as determined from billing data. The diagnostic tests evaluated were complete blood count (CBC), blood culture, C‐reactive protein (CRP), and chest radiograph (CXR). Standardized costs were also calculated from PHIS charges as previously described to standardize the cost of the individual tests and remove interhospital cost variation.[3]
Relationship of Repeat Testing and Severity of Illness
Because higher illness severity and clinical deterioration may warrant repeat testing, we also explored the association of repeat diagnostic testing for inpatients with severity of illness by using the following variables as measures of severity: length of stay (LOS), transfer to intensive care unit (ICU), or pleural drainage procedure after admission (>2 calendar days after admission). Repeat diagnostic testing was stratified by number of tests.
Statistical Analysis
The categorical demographic characteristics of the pre‐ and postguideline populations were summarized using frequencies and percentages, and compared using 2 tests. Continuous demographics were summarized with medians and interquartile ranges (IQRs) and compared with the Wilcoxon rank sum test. Segmented regression, clustered by hospital, was used to assess trends in monthly resource utilization as well as associated standardized costs before and after guidelines publication. To estimate the impact of the guidelines overall, we compared the observed diagnostic resource use at the end of the study period with expected use projected from trends in the preguidelines period (ie, if there were no new guidelines). Individual interrupted time series were also built for each hospital. From these models, we assessed which hospitals had a significant difference between the rate observed at the end of the study and that estimated from their preguideline trajectory. To assess the relationship between the number of positive improvements at a hospital and hospital characteristics, we used Spearman's correlation and Kruskal‐Wallis tests. All analyses were performed with SAS version 9.3 (SAS Institute, Inc., Cary, NC), and P values <0.05 were considered statistically significant. In accordance with the policies of the Cincinnati Children's Hospital Medical Center Institutional Review Board, this research, using a deidentified dataset, was not considered human subjects research.
RESULTS
There were 275,288 hospital admissions meeting study inclusion criteria of 1 to 18 years of age with a diagnosis of pneumonia from 2008 to 2014. Of these, 54,749 met exclusion criteria (1874 had pleural drainage procedure on day 0 or 1, 51,306 had complex chronic conditions, 1569 were hospitalized with pneumonia in the last 30 days). Characteristics of the remaining 220,539 patients in the final sample are shown in Table 1. The median age was 4 years (IQR, 27 years); a majority of the children were male (53%) and had public insurance (58%). There were 128,855 patients in the preguideline period (January 1, 2008 to July 31, 2011) and 91,684 in the post guideline period (August 1, 2011June 30, 2014).
Characteristic | Overall | Preguideline | Postguideline | P Value
---|---|---|---|---
No. of discharges | 220,539 | 128,855 | 91,684 | 
Type of encounter | | | | 
ED only | 150,215 (68.1) | 88,790 (68.9) | 61,425 (67) | <0.001
Inpatient | 70,324 (31.9) | 40,065 (31.1) | 30,259 (33) | 
Age | | | | 
1–4 years | 129,360 (58.7) | 77,802 (60.4) | 51,558 (56.2) | <0.001
5–9 years | 58,609 (26.6) | 32,708 (25.4) | 25,901 (28.3) | 
10–18 years | 32,570 (14.8) | 18,345 (14.2) | 14,225 (15.5) | 
Median [IQR], y | 4 [2–7] | 3 [2–7] | 4 [2–7] | <0.001
Gender | | | | 
Male | 116,718 (52.9) | 68,319 (53) | 48,399 (52.8) | 0.285
Female | 103,813 (47.1) | 60,532 (47) | 43,281 (47.2) | 
Race | | | | 
Non‐Hispanic white | 84,423 (38.3) | 47,327 (36.7) | 37,096 (40.5) | <0.001
Non‐Hispanic black | 60,062 (27.2) | 35,870 (27.8) | 24,192 (26.4) | 
Hispanic | 51,184 (23.2) | 31,167 (24.2) | 20,017 (21.8) | 
Asian | 6,444 (2.9) | 3,691 (2.9) | 2,753 (3) | 
Other | 18,426 (8.4) | 10,800 (8.4) | 7,626 (8.3) | 
Payer | | | | 
Government | 128,047 (58.1) | 70,742 (54.9) | 57,305 (62.5) | <0.001
Private | 73,338 (33.3) | 44,410 (34.5) | 28,928 (31.6) | 
Other | 19,154 (8.7) | 13,703 (10.6) | 5,451 (5.9) | 
Disposition | | | | 
HHS | 684 (0.3) | 411 (0.3) | 273 (0.3) | <0.001
Home | 209,710 (95.1) | 123,236 (95.6) | 86,474 (94.3) | 
Other | 9,749 (4.4) | 4,962 (3.9) | 4,787 (5.2) | 
SNF | 396 (0.2) | 246 (0.2) | 150 (0.2) | 
Season | | | | 
Spring | 60,171 (27.3) | 36,709 (28.5) | 23,462 (25.6) | <0.001
Summer | 29,891 (13.6) | 17,748 (13.8) | 12,143 (13.2) | 
Fall | 52,161 (23.7) | 28,332 (22) | 23,829 (26) | 
Winter | 78,316 (35.5) | 46,066 (35.8) | 32,250 (35.2) | 
LOS | | | | 
1–3 days | 204,812 (92.9) | 119,497 (92.7) | 85,315 (93.1) | <0.001
4–6 days | 10,454 (4.7) | 6,148 (4.8) | 4,306 (4.7) | 
7+ days | 5,273 (2.4) | 3,210 (2.5) | 2,063 (2.3) | 
Median [IQR], d | 1 [1–1] | 1 [1–1] | 1 [1–1] | 0.144
Admitted patients, median [IQR], d | 2 [1–3] | 2 [1–3] | 2 [1–3] | <0.001
Discharged From the ED
Throughout the study, utilization of CBC, blood cultures, and CRP was <20%, whereas CXR use was >75%. In segmented regression analysis, CRP utilization was relatively stable before guideline publication. However, by the end of the study period, the projected estimate of CRP utilization without the guidelines (expected) was 2.9%, compared with 4.8% with the guidelines (observed) (P < 0.05) (Figure 1). A similar pattern of higher diagnostic utilization after the guidelines, compared with projected estimates without the guidelines, was also seen in ED utilization of CBC, blood cultures, and CXR (Figure 1); however, these trends did not achieve statistical significance. Table 2 provides specific values. Using a standard cost of $19.52 for CRP testing, annual costs across all hospitals increased by $11,783 for ED evaluation of CAP.
Test | Baseline (%) | Preguideline Trend | Level Change at Guideline | Change in Trend After Guideline | End‐of‐Study Estimate Without Guideline (%)* | End‐of‐Study Estimate With Guideline (%)* | P
---|---|---|---|---|---|---|---
ED‐only encounters | |||||||
Blood culture | 14.6 | 0.1 | 0.8 | 0.1 | 5.5 | 8.6 | NS |
CBC | 19.2 | 0.1 | 0.4 | 0.1 | 10.7 | 14.0 | NS |
CRP | 5.4 | 0.0 | 0.6 | 0.1 | 2.9 | 4.8 | <0.05 |
Chest x‐ray | 85.4 | 0.1 | 0.1 | 0.0 | 80.9 | 81.1 | NS |
Inpatient encounters | |||||||
Blood culture | 50.6 | 0.0 | 1.7 | 0.2 | 49.2 | 41.4 | <0.05 |
Repeat blood culture | 6.5 | 0.0 | 1.0 | 0.1 | 8.9 | 5.8 | NS |
CBC | 65.2 | 0.0 | 3.1 | 0.0 | 65.0 | 62.2 | NS |
Repeat CBC | 23.4 | 0.0 | 4.2 | 0.0 | 20.8 | 16.0 | NS |
CRP | 25.7 | 0.0 | 1.1 | 0.0 | 23.8 | 23.5 | NS |
Repeat CRP | 12.5 | 0.1 | 2.2 | 0.1 | 7.1 | 7.3 | NS |
Chest x‐ray | 89.4 | 0.1 | 0.7 | 0.0 | 85.4 | 83.9 | NS |
Repeat chest x‐ray | 25.5 | 0.0 | 2.0 | 0.1 | 24.1 | 17.7 | <0.05 |

Inpatient Encounters
In the segmented regression analysis of children hospitalized with CAP, guideline publication was associated with changes in the monthly use of some diagnostic tests. For example, by the end of the study period, the use of blood culture was 41.4% (observed), whereas the projected estimate of use in the absence of the guidelines was 49.2% (expected) (P < 0.05) (Figure 2). Table 2 includes the data for the other tests, CBC, CRP, and CXR, in which similar patterns are noted, with lower utilization rates after the guidelines compared with expected utilization rates without the guidelines; however, these trends did not achieve statistical significance. Among repeat tests for inpatients, only repeat CXR achieved statistical significance (P < 0.05), with a utilization rate of 17.7% with the guidelines (observed) compared with 24.1% without the guidelines (expected).

To better understand the use of repeat testing, a comparison of severity outcomes (LOS, ICU transfer, and pleural drainage procedures) was performed between patients with no repeat testing (70%) and patients with 1 or more repeat tests (30%). Patients with repeat testing had a longer LOS (no repeat testing, LOS 1 [IQR, 1–2] vs 1 repeat test, LOS 3 [IQR, 2–4] vs 2+ repeat tests, LOS 5 [IQR, 3–8]), a higher rate of ICU transfer (no repeat testing 4.6% vs 1 repeat test 14.6% vs 2+ repeat tests 35.6%), and a higher rate of pleural drainage (no repeat testing 0% vs 1 repeat test 0.1% vs 2+ repeat tests 5.9%) (all P < 0.001).
Using standard costs of $37.57 for blood cultures and $73.28 for CXR, annual costs for children with CAP across all hospitals decreased by $91,512 due to decreased utilization of blood cultures, and by $146,840 due to decreased utilization of CXR.
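The standardized‐cost results above can be read as the gap between expected and observed testing rates applied to an annual encounter volume at the standard unit cost. The sketch below shows only that structure; the encounter volume is a placeholder, and the calculation is not intended to reproduce the published dollar amounts, which come from the study's segmented regression of standardized costs.

```python
# Illustrative structure of the standardized-cost comparison: the difference
# between observed and expected (no-guideline) testing rates, applied to an
# annual encounter volume at a standard unit cost. The encounter volume below
# is a placeholder, not a figure from the study.
def annual_cost_delta(expected_rate: float, observed_rate: float,
                      annual_encounters: int, unit_cost: float) -> float:
    """Negative values indicate savings relative to the no-guideline projection."""
    extra_tests = (observed_rate - expected_rate) * annual_encounters
    return extra_tests * unit_cost

# Example using the end-of-study inpatient blood culture rates reported above
# (expected 49.2%, observed 41.4%) and the $37.57 standard cost; the figure of
# 10,000 encounters per year is purely hypothetical.
print(annual_cost_delta(0.492, 0.414, 10_000, 37.57))
```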
Hospital‐Level Variation in the Impact of the National Guideline
Figure 3 is a visual representation (heat map) of the impact of the guidelines at the hospital level at the end of the study, derived from the individual interrupted time series. Based on this heat map (Figure 3), there was wide variability between hospitals in the impact of the guidelines on each test in different settings (ED or inpatient). By diagnostic test, 7 hospitals significantly decreased utilization of blood cultures for inpatients, and 5 hospitals significantly decreased utilization of repeat blood cultures and repeat CXR. Correlations between the number of positive improvements at a hospital and region (P = 0.974), number of CAP cases (P = 0.731), or percentage of public insurance (P = 0.241) were all nonsignificant.
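As an illustration of the hospital‐level association tests named in the Methods, the sketch below applies Spearman's correlation and the Kruskal‐Wallis test to simulated hospital‐level data; the counts and characteristics are invented for illustration.

```python
# Sketch of the hospital-level association tests: Spearman correlation between
# each hospital's number of significant improvements and a continuous
# characteristic (eg, CAP volume), and Kruskal-Wallis for a categorical one
# (eg, region). All data below are simulated.
import numpy as np
from scipy.stats import spearmanr, kruskal

rng = np.random.default_rng(1)
n_hospitals = 32
improvements = rng.integers(0, 8, n_hospitals)   # significant decreases per hospital
cap_volume = rng.integers(500, 5000, n_hospitals)

rho, p_rho = spearmanr(improvements, cap_volume)
print(f"Spearman rho={rho:.2f}, P={p_rho:.3f}")

region = rng.integers(0, 4, n_hospitals)          # 4 hypothetical regions
groups = [improvements[region == r] for r in range(4) if np.any(region == r)]
stat, p_kw = kruskal(*groups)
print(f"Kruskal-Wallis H={stat:.2f}, P={p_kw:.3f}")
```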

DISCUSSION
This study complements previous assessments by evaluating the impact of the 2011 IDSA/PIDS consensus guidelines on the management of children with CAP cared for at US children's hospitals. Prior studies have shown increased use of narrow‐spectrum antibiotics for children with CAP after the publication of these guidelines.[7] The current study focused on diagnostic testing for CAP before and after the publication of the 2011 guidelines. In the ED setting, use of some diagnostic tests (blood culture, CBC, CXR, CRP) was declining prior to guideline publication, but appeared to plateau and/or increase after 2011. Among children admitted with CAP, use of diagnostic testing was relatively stable prior to 2011, and use of these tests (blood culture, CBC, CXR, CRP) declined after guideline publication. Overall, changes in diagnostic resource utilization 3 years after publication were modest, with few changes achieving statistical significance. There was large variability between hospitals in the impact of the guidelines on test use.
For outpatients, including those managed in the ED, the PIDS/IDSA guidelines recommend limited laboratory testing in nontoxic, fully immunized patients. The guidelines discourage the use of diagnostic testing among outpatients because of their low yield (eg, blood culture), and because test results may not impact management (eg, CBC).[6] In the years prior to guideline publication, there was already a declining trend in testing rates, including blood cultures, CBC, and CRP, for patients in the ED. After guideline publication, the rate of blood cultures, CBC, and CRP increased, but only the increase in CRP utilization achieved statistical significance. We would not expect utilization for common diagnostic tests (eg, CBC for outpatients with CAP) to be at or close to 0% because of the complexity of clinical decision making regarding admission that factors in aspects of patient history, exam findings, and underlying risk.[15] ED utilization of blood cultures was <10%, CBC <15%, and CRP <5% after guideline publication, which may represent the lowest testing limit that could be achieved.
CXRs obtained in the ED did not decrease over the entire study period. The rates of CXR use (close to 80%) seen in our study are similar to prior ED studies.[5, 16] Management of children with CAP in the ED might be different than outpatient primary care management because (1) unlike primary care providers, ED providers do not have an established relationship with their patients and do not have the opportunity for follow‐up and serial exams, making them less likely to tolerate diagnostic uncertainty; and (2) ED providers may see sicker patients. However, use of CXR in the ED does represent an opportunity for further study to understand if decreased utilization is feasible without adversely impacting clinical outcomes.
The CAP guidelines provide a strong recommendation to obtain blood culture in moderate to severe pneumonia. Despite this, blood culture utilization declined after guideline publication. Less than 10% of children hospitalized with uncomplicated CAP have positive blood cultures, which calls into question the utility of blood cultures for all admitted patients.[17, 18, 19] The recent EPIC (Etiology of Pneumonia in the Community) study showed that a majority of children hospitalized with pneumonia do not have growth of bacteria in culture, but there may be a role for blood cultures in patients with a strong suspicion of complicated CAP or in the patient with moderate to severe disease.[20] In addition to blood cultures, the guidelines also recommend CBC and CXR in moderately to severely ill children. The observed decline in CBC and CXR testing may be related to individual physician assessments of which patients are moderately to severely ill, as the guidelines do not recommend testing for children with less severe disease. Our exclusion of patients requiring intensive care management or pleural drainage on admission might have selected children with a milder course of illness, although still requiring admission.
The guidelines discourage repeat diagnostic testing among children hospitalized with CAP who are improving. In this study, repeat CXR and CBC occurred in approximately 20% of patients, but rates of repeat blood culture and CRP testing were much lower. As with initial diagnostic testing for inpatients with CAP, the rates of some repeat testing decreased with the guidelines. However, patients with repeat testing had a longer LOS and were more likely to require ICU transfer or a pleural drainage procedure compared with children without repeat testing. This suggests that repeat testing is used more often in children with a severe presentation or a worsening clinical course, and is not performed routinely on hospitalized patients.
The financial impact of decreased testing is modest, because the tests themselves are relatively inexpensive. However, the lack of substantial cost savings should not preclude efforts to continue to improve adherence to the guidelines. Not only is increased testing associated with higher hospitalization rates,[5] potentially yielding higher costs and family stress, but it may also lead to patient discomfort and possibly increased radiation exposure through chest radiography.
Many of the diagnostic testing recommendations in the CAP guidelines are based on weak evidence, which may contribute to the lack of substantial adoption. Nevertheless, adherence to guideline recommendations requires sustained effort on the part of individual physicians that should be encouraged through institutional support.[21] Continuous education and clinical decision support, as well as reminders in the electronic medical record, would make guideline recommendations more visible and may help overcome the inertia of previous practice.[15] The hospital‐level heat map (Figure 3) included in this study demonstrates that the impact of the guidelines was variable across sites. Although a few sites had decreased diagnostic testing in many areas with no increased testing in any category, there were several sites that had no improvement in any diagnostic testing category. In addition, hospital‐level factors like size, geography, and insurance status were not associated with number of improvements. To better understand drivers of change at individual hospitals, future studies should evaluate specific strategies utilized by the rapid guideline adopters.
This study is subject to several limitations. The use of ICD‐9 codes to identify patients with CAP may not capture all patients with this diagnosis; however, these codes have been previously validated.[13] Additionally, because patients were identified using ICD‐9 coding assigned at the time of discharge, testing performed in the ED setting may not reflect care for a child with known pneumonia, but rather may reflect testing for a child with fever or other signs of infection. PHIS collects data from freestanding children's hospitals, which care for a majority of children with CAP in the US, but our findings may not be generalizable to other hospitals. In addition, we did not examine drivers of trends within individual institutions. We did not have detailed information to examine whether the PHIS hospitals in our study had actively worked to adopt the CAP guidelines. We were also unable to assess physicians' familiarity with the guidelines or their level of disagreement with the recommendations. Furthermore, the PHIS database does not permit detailed correlation of diagnostic testing with clinical parameters. In contrast to the diagnostic testing evaluated in this study, which is primarily discouraged by the IDSA/PIDS guidelines, respiratory viral testing for children with CAP is recommended but could not be evaluated, as data on such testing are not readily available in PHIS.
CONCLUSION
Publication of the IDSA/PIDS evidence‐based guidelines for the management of CAP was associated with modest, variable changes in use of diagnostic testing. Further adoption of the CAP guidelines should reduce variation in care and decrease unnecessary resource utilization in the management of CAP. Our study demonstrates that efforts to promote decreased resource utilization should target specific situations (eg, repeat testing for inpatients who are improving). Adherence to guidelines may be improved by the adoption of local practices that integrate and improve daily workflow, like order sets and clinical decision support tools.
Disclosure: Nothing to report.
- Eliminating waste in US health care. JAMA. 2012;307(14):1513–1516.
- Choosing wisely in pediatric hospital medicine: five opportunities for improved healthcare value. J Hosp Med. 2013;8(9):479–485.
- Pediatric Research in Inpatient Settings (PRIS) Network. Prioritization of comparative effectiveness research topics in hospital pediatrics. Arch Pediatr Adolesc Med. 2012;166(12):1155–1164.
- Variability in processes of care and outcomes among children hospitalized with community‐acquired pneumonia. Pediatr Infect Dis J. 2012;31(10):1036–1041.
- Variation in emergency department diagnostic testing and disposition outcomes in pneumonia. Pediatrics. 2013;132(2):237–244.
- Pediatric Infectious Diseases Society and the Infectious Diseases Society of America. The management of community‐acquired pneumonia in infants and children older than 3 months of age: clinical practice guidelines by the Pediatric Infectious Diseases Society and the Infectious Diseases Society of America. Clin Infect Dis. 2011;53(7):e25–e76.
- Impact of Infectious Diseases Society of America/Pediatric Infectious Diseases Society guidelines on treatment of community‐acquired pneumonia in hospitalized children. Clin Infect Dis. 2014;58(6):834–838.
- Antibiotic choice for children hospitalized with pneumonia and adherence to national guidelines. Pediatrics. 2015;136(1):44–52.
- Quality improvement methods increase appropriate antibiotic prescribing for childhood pneumonia. Pediatrics. 2013;131(5):e1623–e1631.
- Improvement methodology increases guideline recommended blood cultures in children with pneumonia. Pediatrics. 2015;135(4):e1052–e1059.
- Impact of a guideline on management of children hospitalized with community‐acquired pneumonia. Pediatrics. 2012;129(3):e597–e604.
- Effectiveness of antimicrobial guidelines for community‐acquired pneumonia in children. Pediatrics. 2012;129(5):e1326–e1333.
- Identifying pediatric community‐acquired pneumonia hospitalizations: accuracy of administrative billing codes. JAMA Pediatr. 2013;167(9):851–858.
- Pediatric complex chronic conditions classification system version 2: updated for ICD‐10 and complex medical technology dependence and transplantation. BMC Pediatr. 2014;14:199.
- Establishing superior benchmarks of care in clinical practice: a proposal to drive achievable health care value. JAMA Pediatr. 2015;169(4):301–302.
- Emergency department management of childhood pneumonia in the United States prior to publication of national guidelines. Acad Emerg Med. 2013;20(3):240–246.
- Prevalence of bacteremia in hospitalized pediatric patients with community‐acquired pneumonia. Pediatr Infect Dis J. 2013;32(7):736–740.
- The prevalence of bacteremia in pediatric patients with community‐acquired pneumonia: guidelines to reduce the frequency of obtaining blood cultures. Hosp Pediatr. 2013;3(2):92–96.
- Do all children hospitalized with community‐acquired pneumonia require blood cultures? Hosp Pediatr. 2013;3(2):177–179.
- CDC EPIC Study Team. Community‐acquired pneumonia requiring hospitalization among U.S. children. N Engl J Med. 2015;372(9):835–845.
- Influence of hospital guidelines on management of children hospitalized with pneumonia. Pediatrics. 2012;130(5):e823–e830.
© 2015 Society of Hospital Medicine
Hyperkalemia Treatment and Hypoglycemia
Hyperkalemia occurs in as many as 10% of all hospitalized patients,[1] and it can lead to potentially fatal arrhythmias or cardiac arrest resulting from ionic imbalance in the resting membrane potential of myocardial tissue.[2] Acute instances may be stabilized with insulin to stimulate intracellular uptake of potassium, but this increases the risk of hypoglycemia.[2] Centers for Medicare and Medicaid Services quality measures require hospitals to minimize hypoglycemic events, particularly serious events with blood glucose (BG) <40 mg/dL,[3] due to an association with increased mortality in the hospital setting.[4] Previous research at our tertiary care hospital found that 8.7% of patients suffered a hypoglycemic event following insulin administration for acute hyperkalemia treatment, and that patients with a lower body weight are at increased risk of hypoglycemia, particularly severe hypoglycemia (BG <40 mg/dL).[5] Increasing the total dose of dextrose provided around the time of insulin administration has been suggested to reduce this concern.[5]
Patients at our institution receive 50 g of dextrose in conjunction with intravenous (IV) insulin for hyperkalemia treatment. To further reduce the potential for hypoglycemia, our institution amended the acute hyperkalemia order set to provide prescribers an alternative dosing strategy to the standard 10 U of IV insulin traditionally used for this purpose. Beginning November 10, 2013, our computer prescriber order entry (CPOE) system automatically prepopulated a dose of 0.1 U/kg of body weight for any patients weighing <95 kg (doses rounded to the nearest whole unit) when the acute hyperkalemia order set was utilized. The maximum dose allowed continued to be 10 U. The revised order set also changed nursing orders to require BG monitoring as frequently as every hour following the administration of insulin and dextrose for the treatment of hyperkalemia.
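The revised order‐set logic amounts to a simple dosing rule; a minimal sketch follows (the function name and rounding call are ours, not part of the order set).

```python
# Sketch of the revised order-set dosing rule described above: 0.1 U/kg,
# rounded to the nearest whole unit, for patients weighing <95 kg, with the
# traditional 10-U dose otherwise and 10 U as the maximum in all cases.
# The function name is illustrative.
def hyperkalemia_insulin_dose(weight_kg: float) -> int:
    if weight_kg < 95:
        return min(round(0.1 * weight_kg), 10)
    return 10

# A 70-kg patient would be prepopulated with 7 U; a 100-kg patient with 10 U.
print(hyperkalemia_insulin_dose(70), hyperkalemia_insulin_dose(100))
```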
The purpose of this study was to investigate whether weight‐based insulin dosing (0.1 U/kg) for patients weighing <95 kg, rather than a standard 10‐U insulin dose, resulted in fewer hypoglycemic episodes and fewer patients affected. Secondarily, this study sought to determine the impact of weight‐based insulin dosing on the potassium‐lowering effects of therapy and to detect any risk factors for the development of hypoglycemia among this patient population.
METHODS
This institutional review board–approved, single‐center, retrospective chart review examined patients for whom the physician order entry set for hyperkalemia therapy was utilized, including patients who weighed less than 95 kg and received regular insulin via weight‐based dosing (0.1 U/kg of body weight up to a maximum of 10 U) during the period November 10, 2013 to May 31, 2014, versus those who received fixed insulin dosing (10 U regardless of body weight) during the period May 1, 2013 to November 9, 2013. During each of these periods, the CPOE system autopopulated the recommended insulin dose, with the possibility for physician manual dose entry. Data collection was limited to the first use of insulin for hyperkalemia treatment per patient in each period.
Patients weighing <95 kg were the focus of this study because they received <10 U of insulin under the weight‐based dosing strategy. Patients were excluded from the study if they had a body weight >95 kg or no weight recorded, were not administered insulin as ordered, received greater than the CPOE‐specified insulin dose, or had no BG readings recorded within 24 hours of insulin administration. The first 66 patients within each group meeting all inclusion and exclusion criteria were randomly selected for analysis. This recruitment target was developed to provide enough patients for a meaningful analysis of hypoglycemia events based on previous reports from our institution.[5]
Hypoglycemia was defined as a recorded BG level <70 mg/dL within 24 hours after insulin administration; severe hypoglycemia was defined as a recorded BG <40 mg/dL within 24 hours. Individual episodes of hypoglycemia and severe hypoglycemia were recorded for each instance of such an event separated by at least 1 hour from the time of the first recorded event. In addition, episodes of hypoglycemia or severe hypoglycemia and the number of patients affected were assessed within 6 hours, 6 to 12 hours, and 12 to 24 hours after insulin administration as separate subsets for statistical analysis.
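Under one reading of the definitions above, hypoglycemia events could be counted as in the sketch below; the data structure, the handling of the 1‐hour separation rule, and the window boundaries are illustrative assumptions rather than the study's actual abstraction logic.

```python
# Sketch of the hypoglycemia event definition as read above: BG <70 mg/dL
# (<40 mg/dL for severe) within 24 hours of insulin, with readings counted as
# separate events only when at least 1 hour apart, then binned into the
# 0-6, 6-12, and 12-24 hour windows. Data structures are illustrative.
def count_hypo_events(bg_readings, threshold=70):
    """bg_readings: list of (hours_after_insulin, bg_mg_dl), sorted by time."""
    event_times = []
    last_event = None
    for hours, bg in bg_readings:
        if hours > 24 or bg >= threshold:
            continue
        if last_event is None or hours - last_event >= 1:
            event_times.append(hours)
            last_event = hours
    windows = {"0-6 h": 0, "6-12 h": 0, "12-24 h": 0}
    for t in event_times:
        if t <= 6:
            windows["0-6 h"] += 1
        elif t <= 12:
            windows["6-12 h"] += 1
        else:
            windows["12-24 h"] += 1
    return len(event_times), windows

readings = [(0.5, 120), (2.0, 65), (2.5, 62), (4.0, 58), (13.0, 68)]
print(count_hypo_events(readings))       # hypoglycemia, <70 mg/dL
print(count_hypo_events(readings, 40))   # severe hypoglycemia, <40 mg/dL
```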
For the purpose of assessing the potassium‐lowering efficacy of weight‐based versus traditional dosing of insulin, maximum serum potassium levels were examined in the 12‐hour interval before the hyperkalemia order set was implemented and compared with minimum potassium levels in the 12 hours after insulin was administered. A comparison of the treatment groups assessed differences in the mean decrease in serum potassium from baseline, the mean minimum potassium achieved, the number of patients achieving a minimum potassium below 5.0 mEq/L, and the number of patients who subsequently received repeat treatment for hyperkalemia within 24 hours of treatment with insulin.
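The potassium‐lowering comparison can be sketched as follows, assuming a hypothetical table of timed serum potassium results; column names and values are illustrative only.

```python
# Sketch of the potassium-lowering measures described above: maximum serum
# potassium in the 12 hours before insulin versus minimum in the 12 hours
# after, per patient. The table and its column names are illustrative.
import pandas as pd

k_results = pd.DataFrame({
    "patient_id":         [1,   1,   1,   2,   2,   2],
    "hours_from_insulin": [-6,  4,   10,  -2,  3,   11],
    "potassium_mmol_l":   [6.3, 5.4, 4.8, 6.0, 5.1, 4.9],
})

before = (k_results.hours_from_insulin >= -12) & (k_results.hours_from_insulin < 0)
after = (k_results.hours_from_insulin >= 0) & (k_results.hours_from_insulin <= 12)

baseline_max = k_results[before].groupby("patient_id")["potassium_mmol_l"].max()
post_min = k_results[after].groupby("patient_id")["potassium_mmol_l"].min()

summary = pd.DataFrame({"baseline_max_k": baseline_max, "min_k_after": post_min})
summary["decrease"] = summary["baseline_max_k"] - summary["min_k_after"]
print(summary)
print("Mean decrease:", summary["decrease"].mean())
print("Patients reaching K+ < 5.0 mmol/L:", int((summary["min_k_after"] < 5.0).sum()))
```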
Statistical analysis was conducted utilizing χ2 and Fisher exact tests for nominal data and the Student t test for continuous data to detect statistically significant differences between the groups. Binomial logistic multivariable analysis using a backward stepwise approach was used to determine risk factors for the development of hypoglycemia, analyzed on a per‐patient basis to prevent characteristics from being over‐represented when events occurred multiple times in a single patient. All analyses were completed using SPSS version 18 (SPSS Inc., Chicago, IL).
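A minimal sketch of the per‐patient risk‐factor model follows, fit to simulated data; the backward stepwise selection used in the study is omitted, and the variable names are illustrative.

```python
# Sketch of the per-patient binomial logistic regression described above:
# any hypoglycemia (<70 mg/dL within 24 hours) regressed on candidate
# predictors. Data are simulated and stepwise selection is omitted.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 132  # 66 patients per dosing group
cohort = pd.DataFrame({
    "hypoglycemia":       rng.integers(0, 2, n),
    "female":             rng.integers(0, 2, n),
    "baseline_bg_lt_140": rng.integers(0, 2, n),
    "weight_based_dose":  rng.integers(0, 2, n),
})

fit = smf.logit("hypoglycemia ~ female + baseline_bg_lt_140 + weight_based_dose",
                data=cohort).fit(disp=0)
print(np.exp(fit.params))      # adjusted odds ratios
print(np.exp(fit.conf_int()))  # 95% confidence intervals
```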
RESULTS
In total, 1734 entries were available for the acute hyperkalemia order set with insulin during the 2 periods investigated. After weight‐based exclusions were applied electronically, 464 patients were eligible for manual chart review; the remaining exclusion criteria were then applied using data extracted from patient charts. Patients in the 2 treatment groups were fairly well balanced, with a slightly lower recorded body weight in the 10‐U insulin group (Table 1). Patients in the weight‐based dosing group received between 4 and 9 U of insulin, depending on body weight.
Characteristics | 10 U Insulin, n = 66 | 0.1 U/kg Insulin, n = 66 | P Value (2‐Sided) |
---|---|---|---|
Weight, kg | 69.9 (14.2) | 74.2 (12.6) | 0.07 |
Age, y | 55.7 (15.7) | 61.9 (17.6) | 0.36 |
Male gender | 37 (56.1%) | 41 (62.1%) | 0.60 |
Caucasian race | 40 (60.6%) | 37 (56.1%) | 0.55 |
Serum creatinine, mg/dL | 3.16 (4.38) | 3.04 (4.61) | 0.9 |
Creatinine clearance <30 mL/min | 41 (62.1%) | 41 (62.1%) | 0.6 |
Dialysis | 20 (30.3%) | 16 (24.2%) | 0.56 |
Baseline blood glucose, mg/dL | 166.0 (71.7) | 147.3 (48.0) | 0.08 |
Received other insulin within 24 hours of hyperkalemia treatment | 30 (45.4%) | 25 (37.9%) | 0.48 |
Received K+ supplement within 24 hours of hyperkalemia treatment | 9 (13.6%) | 11 (16.7%) | 0.81 |
Baseline serum K+, mmol/L | 6.1 (0.5) | 6.1 (0.7) | 0.76 |
Baseline serum K+ >6.0 mmol/L | 41 (62.1%) | 33 (50%) | 0.22 |
No. of additional treatments for hyperkalemia in addition to insulin/dextrose | 1.5 (0.8) | 1.4 (0.9) | 0.49 |
A 56% reduction in the number of hypoglycemic episodes within 24 hours was detected in the weight‐based dosing group, from 18 to 8 events (P = 0.05) (Table 2). The number of hypoglycemic events in every time‐interval subset was likewise reduced by at least 50% with weight‐based dosing (from 7 to 3 events within 6 hours, from 5 to 2 events at 6–12 hours, and from 6 to 3 events at 12–24 hours). The number of patients who experienced hypoglycemia within 24 hours after receiving insulin was also reduced in the weight‐based dosing group, by 46% (P = 0.22).
Outcomes | 10 U Insulin, n = 66 | 0.1 U/kg Insulin, n = 66 | P Value (2‐Sided) |
---|---|---|---|
Hypoglycemia, <70 mg/dL | |||
No. of patients | 13 (19.7%) | 7 (10.6%) | 0.22 |
No. of events total | 18 (27.3%) | 8 (12.1%) | 0.05 |
No. of events 0–6 hours | 7 (10.6%) | 3 (4.5%) | 0.32 |
No. of events 6–12 hours | 5 (7.6%) | 2 (3.0%) | 0.44 |
No. of events 12–24 hours | 6 (9.1%) | 3 (4.5%) | 0.49 |
Severe hypoglycemia | |||
No. of patients | 2 (3.0%) | 1 (1.5%) | >0.99 |
No. of events total | 2 (3%) | 1 (1.5%) | >0.99 |
Potassium‐lowering effects | |||
Minimum K+ after therapy, mmol/L (SD) | 4.9 (0.7) | 4.8 (0.7) | 0.84 |
Minimum serum K+ < 5.0 mmol/L (%) | 37 (56.1%) | 35 (53.0%) | 0.32 |
Average K+ decrease, mmol/L (SD) | 1.35 (0.97) | 1.34 (0.94) | 0.94 |
Repeat treatment given (%) | 24 (36.4%) | 24 (36.4%) | >0.99 |
Potassium lowering was comparable across both dosing strategies in every measure assessed (Table 2). Multivariate analysis revealed that baseline BG <140 mg/dL (adjusted odds ratio: 4.3, 95% confidence interval [CI]: 1.4‐13.7, P = 0.01) and female gender (adjusted odds ratio: 3.2, 95% CI: 1.1‐9.1, P = 0.03) were associated with an increased risk of hypoglycemia. Other factors, including administration of insulin beyond that for hyperkalemia treatment and use of additional hypoglycemic agents, were not associated with the development of hypoglycemia, which is consistent with previous reports.[6]
CONCLUSIONS
Our findings indicate that using a weight‐based approach to insulin dosing when treating hyperkalemia may lead to a reduction in hypoglycemia without sacrificing the efficacy of potassium lowering. Females and patients with glucose values <140 mg/dL were at increased risk of hypoglycemia in this cohort. Based on the results of this research, a weight‐based dosing strategy of 0.1 U/kg IV insulin up to a maximum of 10 U should be considered, with further research desirable to validate these results.
This study was strengthened by the inclusion of all patients regardless of baseline glucose, baseline potassium, administration of other insulins, level of renal impairment, or symptomatic display of hypoglycemia or cardiac dysfunction, thus providing a broad representation of patients treated for acute hyperkalemia. This pilot study was limited in its scope by data collection for only 66 randomly selected patients per group rather than the entire patient population. In addition, the study utilized patient information from a single site, with few ethnicities represented. Validation of this research using a larger sample size should include greater variation in the patients served. Our inclusion of a hypoglycemia definition up to 24 hours after treatment may also be criticized. However, this is similar to previous reports and allows for a liberal time period for follow‐up glucose monitoring to be recorded.[7]
Because of its small sample size and the low event rate, this study was unable to draw conclusions about the ability of weight‐based insulin dosing to affect severe hypoglycemic events (<40 mg/dL). A study of more than 400 patients would be necessary to find statistically significant differences in the risk of severe hypoglycemia. Furthermore, because we did not examine the results from all patients in this cohort, we cannot conclusively determine the impact of treatment. The retrospective nature of this study limited our ability to capture hypoglycemic episodes during periods in which BG levels were not recorded. Additionally, changes to the post‐treatment glucose monitoring protocol may have also affected the incidence of hypoglycemia in 2 potential ways. First, early and unrecorded interventions may have occurred in patients with a trend toward hypoglycemia. Second, the longer time to follow‐up in the non–weight‐based group may have led to additional hypoglycemic episodes being missed. A prospective trial design could provide more comprehensive information about patient response to weight‐based versus traditional dosing of IV insulin for hyperkalemia. Further investigations on reducing adverse effects of insulin when treating hyperkalemia should focus on female patients and those with lower baseline BG values. Additionally, as newer agents to treat hyperkalemia are developed and tested, the approach to management should be revisited.[8, 9, 10]
Disclosures: Garry S. Tobin, MD, lectures or is on the speakers bureau for Eli Lilly, Janssen, Boehringer Ingelheim, and Novo Nordisk, and performs data safety monitoring for Novo Nordisk. The authors report no other potential conflicts of interest.
- Hyperkalemia in hospitalized patients: causes, adequacy of treatment, and results of an attempt to improve physician compliance with published therapy guidelines. Arch Intern Med. 1998;158:917–924.
- 2010 American Heart Association guidelines for cardiopulmonary resuscitation and emergency cardiovascular care. Part 12.6: cardiac arrest associated with life‐threatening electrolyte disturbances. Circulation. 2010;122:S829–S861.
- Centers for Medicare 29(2):101–107.
- Incidence of hypoglycemia following insulin‐based acute stabilization of hyperkalemia treatment. J Hosp Med. 2012;7(3):239–242.
- Hypoglycemia in the treatment of hyperkalemia with insulin in patients with end‐stage renal disease. Clin Kidney J. 2014;7(3):248–250.
- Prediction and prevention of treatment‐related inpatient hypoglycemia. J Diabetes Sci Technol. 2012;6(2):302–309.
- Effect of sodium zirconium cyclosilicate on potassium lowering for 28 days among outpatients with hyperkalemia: the HARMONIZE randomized clinical trial. JAMA. 2014;312(21):2223–2233.
- Zirconium cyclosilicate in hyperkalemia. N Engl J Med. 2015;372:222–231.
- Patiromer in patients with kidney disease and hyperkalemia receiving RAAS inhibitors. N Engl J Med. 2015;372:211–221.
Hyperkalemia occurs in as many as 10% of all hospitalized patients[1] and can lead to potentially fatal arrhythmias or cardiac arrest resulting from disruption of the resting membrane potential of myocardial tissue.[2] Acute episodes may be stabilized with insulin, which stimulates intracellular uptake of potassium but increases the risk of hypoglycemia.[2] Centers for Medicare and Medicaid Services quality measures require hospitals to minimize hypoglycemic events, particularly serious events with blood glucose (BG) <40 mg/dL,[3] because of their association with increased mortality in the hospital setting.[4] Previous research at our tertiary care hospital found that 8.7% of patients suffered a hypoglycemic event after insulin administered for acute hyperkalemia treatment, and that patients with lower body weight are at increased risk of hypoglycemia, particularly severe hypoglycemia (BG <40 mg/dL).[5] Increasing the total dose of dextrose given around the time of insulin administration has been suggested as one way to reduce this risk.[5]
Patients at our institution receive 50 g of dextrose in conjunction with intravenous (IV) insulin for hyperkalemia treatment. To further reduce the potential for hypoglycemia, our institution amended the acute hyperkalemia order set to give prescribers an alternative to the standard 10 U of IV insulin traditionally used for this purpose. Beginning November 10, 2013, our computerized prescriber order entry (CPOE) system automatically prepopulated a dose of 0.1 U/kg of body weight (rounded to the nearest whole unit) for patients weighing <95 kg whenever the acute hyperkalemia order set was used. The maximum allowed dose remained 10 U. The revised order set also changed nursing orders to require BG monitoring as frequently as every hour after administration of insulin and dextrose for the treatment of hyperkalemia.
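For concreteness, the dose that the order set prepopulates can be expressed as a short calculation. The sketch below is illustrative only: the function name is hypothetical, and it assumes simple half-up rounding to the nearest whole unit.

```python
def cpoe_hyperkalemia_insulin_dose(weight_kg: float) -> int:
    """Illustrative sketch of the order-set dose logic (hypothetical helper).

    Patients weighing <95 kg are prepopulated with 0.1 U/kg of IV regular
    insulin, rounded to the nearest whole unit; all other patients receive
    the traditional fixed 10 U. The maximum dose is 10 U in either case.
    """
    if weight_kg < 95:
        dose = int(0.1 * weight_kg + 0.5)  # round half up to the nearest unit
        return min(dose, 10)
    return 10

# A 70-kg patient would be prepopulated with 7 U; a 100-kg patient with 10 U.
print(cpoe_hyperkalemia_insulin_dose(70), cpoe_hyperkalemia_insulin_dose(100))
```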
The purpose of this study was to investigate whether weight‐based insulin dosing (0.1 U/kg) for patients weighing <95 kg, rather than a standard 10‐U insulin dose, resulted in fewer hypoglycemic episodes and fewer patients affected. Secondarily, this study sought to determine the impact of weight‐based insulin dosing on the potassium‐lowering effect of therapy and to identify risk factors for the development of hypoglycemia in this patient population.
METHODS
This institutional review board–approved, single‐center, retrospective chart review examined patients for whom the CPOE order set for hyperkalemia therapy was used. It compared patients weighing less than 95 kg who received regular insulin via weight‐based dosing (0.1 U/kg of body weight, up to a maximum of 10 U) during the period November 10, 2013 to May 31, 2014 with those who received fixed insulin dosing (10 U regardless of body weight) during the period May 1, 2013 to November 9, 2013. During each period, the CPOE system autopopulated the recommended insulin dose, with the option for manual physician dose entry. Data collection was limited to the first use of insulin for hyperkalemia treatment per patient in each period.
Patients weighing <95 kg were the focus of this study because they received <10 U of insulin under the weight‐based dosing strategy. Patients were excluded from the study if they had a body weight >95 kg or no weight recorded, were not administered insulin as ordered, received greater than the CPOE‐specified insulin dose, or had no BG readings recorded within 24 hours of insulin administration. The first 66 patients within each group meeting all inclusion and exclusion criteria were randomly selected for analysis. This recruitment target was developed to provide enough patients for a meaningful analysis of hypoglycemia events based on previous reports from our institution.[5]
Hypoglycemia was defined as a recorded BG level <70 mg/dL within 24 hours after insulin administration; severe hypoglycemia was defined as a recorded BG <40 mg/dL within 24 hours. Episodes of hypoglycemia and severe hypoglycemia were counted as distinct events when they were separated by at least 1 hour from the first recorded event. In addition, episodes of hypoglycemia or severe hypoglycemia and the number of patients affected were assessed within 6 hours, at 6 to 12 hours, and at 12 to 24 hours after insulin administration as separate subsets for statistical analysis.
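As a sketch of how this definition translates into event counting, the hypothetical helper below flags BG readings below the threshold within the 24-hour window and treats a low reading as a new episode only when at least 1 hour has elapsed since the previously counted episode (one reading of the separation rule).

```python
from datetime import datetime, timedelta

def count_hypoglycemia_events(insulin_time, bg_readings, threshold=70,
                              window_hours=24, separation_hours=1):
    """Count distinct hypoglycemic episodes after insulin administration.

    bg_readings: iterable of (timestamp, glucose_mg_dL) pairs.
    A reading below `threshold` within `window_hours` of insulin counts as an
    episode; a subsequent low reading is counted as a new episode only if at
    least `separation_hours` have passed since the last counted episode.
    """
    window_end = insulin_time + timedelta(hours=window_hours)
    last_counted = None
    events = 0
    for t, bg in sorted(bg_readings):
        if insulin_time <= t <= window_end and bg < threshold:
            if last_counted is None or t - last_counted >= timedelta(hours=separation_hours):
                events += 1
                last_counted = t
    return events

# Example: two low readings 30 minutes apart count as a single episode.
t0 = datetime(2013, 12, 1, 8, 0)
readings = [(t0 + timedelta(hours=2), 65), (t0 + timedelta(hours=2, minutes=30), 62)]
print(count_hypoglycemia_events(t0, readings))  # 1
```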
To assess the potassium‐lowering efficacy of weight‐based versus traditional insulin dosing, the maximum serum potassium level in the 12 hours before the hyperkalemia order set was initiated was compared with the minimum potassium level in the 12 hours after insulin was administered. The treatment groups were compared on the mean decrease in serum potassium from baseline, the mean minimum potassium achieved, the number of patients achieving a minimum potassium below 5.0 mmol/L, and the number of patients who received repeat treatment for hyperkalemia within 24 hours of insulin administration.
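A minimal sketch of these per-patient efficacy measures is shown below; the function and field names are hypothetical, and the inputs are assumed to be the potassium values recorded in the pre- and post-treatment 12-hour windows.

```python
def potassium_response(pre_k_12h, post_k_12h):
    """Summarize one patient's potassium response (illustrative sketch).

    pre_k_12h: serum K+ values (mmol/L) from the 12 hours before the order
    set was initiated; post_k_12h: values from the 12 hours after insulin.
    """
    baseline = max(pre_k_12h)   # maximum pre-treatment value
    minimum = min(post_k_12h)   # minimum post-treatment value
    return {
        "baseline_k": baseline,
        "min_k": minimum,
        "decrease": baseline - minimum,
        "below_5": minimum < 5.0,
    }

# Example: a baseline of 6.3 mmol/L falling to 4.8 mmol/L is a 1.5 mmol/L decrease.
print(potassium_response([6.1, 6.3], [5.4, 4.8, 5.0]))
```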
Statistical analysis used chi‐square and Fisher exact tests for nominal data and the Student t test for continuous data to detect statistically significant differences between the groups. Binomial logistic multivariable analysis with a backward stepwise approach was used to identify factors associated with the development of hypoglycemia, analyzed on a per‐patient basis to prevent characteristics from being over‐represented when events occurred multiple times in a single patient. All analyses were completed using SPSS version 18 (SPSS Inc., Chicago, IL).
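The sketch below illustrates the kinds of group comparisons described, using SciPy in place of SPSS; the 2×2 counts are taken from Table 2, and the continuous arrays are placeholders for per-patient values.

```python
import numpy as np
from scipy import stats

# Nominal outcome: patients with vs. without hypoglycemia, by dosing group
# (counts from Table 2; Fisher exact test shown, chi-square is analogous).
table = np.array([[13, 66 - 13],   # 10 U group: hypoglycemia yes / no
                  [7, 66 - 7]])    # 0.1 U/kg group
odds_ratio, p_fisher = stats.fisher_exact(table)
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)

# Continuous outcome: e.g., per-patient decrease in serum K+ in each group
# (arrays below are placeholders, not study data).
k_drop_fixed = np.array([1.2, 1.5, 1.4])
k_drop_weight = np.array([1.3, 1.4, 1.3])
t_stat, p_ttest = stats.ttest_ind(k_drop_fixed, k_drop_weight)

print(p_fisher, p_chi2, p_ttest)
```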
RESULTS
In total, 1734 order-set entries with insulin for acute hyperkalemia were available during the 2 periods investigated. After weight-based exclusions were applied using the electronic database, 464 patients were eligible for manual chart review; the remaining exclusion criteria were then applied using data extracted from patient charts. The treatment groups were fairly well balanced, although recorded body weight was slightly lower in the 10‐U insulin group (Table 1). Patients in the weight‐based dosing group received between 4 and 9 U of insulin, depending on body weight.
Table 1. Baseline Characteristics
Characteristics | 10 U Insulin, n = 66 | 0.1 U/kg Insulin, n = 66 | P Value (2‐Sided) |
---|---|---|---|
Weight, kg, mean (SD) | 69.9 (14.2) | 74.2 (12.6) | 0.07 |
Age, y, mean (SD) | 55.7 (15.7) | 61.9 (17.6) | 0.36 |
Male gender, n (%) | 37 (56.1%) | 41 (62.1%) | 0.60 |
Caucasian race, n (%) | 40 (60.6%) | 37 (56.1%) | 0.55 |
Serum creatinine, mg/dL, mean (SD) | 3.16 (4.38) | 3.04 (4.61) | 0.90 |
Creatinine clearance <30 mL/min, n (%) | 41 (62.1%) | 41 (62.1%) | 0.60 |
Dialysis, n (%) | 20 (30.3%) | 16 (24.2%) | 0.56 |
Baseline blood glucose, mg/dL, mean (SD) | 166.0 (71.7) | 147.3 (48.0) | 0.08 |
Received other insulin within 24 hours of hyperkalemia treatment, n (%) | 30 (45.4%) | 25 (37.9%) | 0.48 |
Received K+ supplement within 24 hours of hyperkalemia treatment, n (%) | 9 (13.6%) | 11 (16.7%) | 0.81 |
Baseline serum K+, mmol/L, mean (SD) | 6.1 (0.5) | 6.1 (0.7) | 0.76 |
Baseline serum K+ >6.0 mmol/L, n (%) | 41 (62.1%) | 33 (50.0%) | 0.22 |
No. of additional hyperkalemia treatments besides insulin/dextrose, mean (SD) | 1.5 (0.8) | 1.4 (0.9) | 0.49 |
Within 24 hours, the number of hypoglycemic episodes was reduced by 56% in the weight‐based dosing group, from 18 to 8 events (P = 0.05) (Table 2). The number of hypoglycemic events in every time interval was likewise reduced by at least 50% with weight‐based dosing (from 7 to 3 events within 6 hours, from 5 to 2 events at 6–12 hours, and from 6 to 3 events at 12–24 hours). The number of patients who experienced hypoglycemia within 24 hours after receiving insulin was also reduced by 46% in the weight‐based dosing group (P = 0.22).
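The percentage reductions quoted above follow directly from the counts in Table 2; a quick arithmetic check:

```python
# Relative reductions implied by the event and patient counts in Table 2.
events_fixed, events_weight = 18, 8
patients_fixed, patients_weight = 13, 7
print(round(100 * (events_fixed - events_weight) / events_fixed))        # 56
print(round(100 * (patients_fixed - patients_weight) / patients_fixed))  # 46
```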
Table 2. Hypoglycemia and Potassium‐Lowering Outcomes
Outcomes | 10 U Insulin, n = 66 | 0.1 U/kg Insulin, n = 66 | P Value (2‐Sided) |
---|---|---|---|
Hypoglycemia, <70 mg/dL | | | |
No. of patients (%) | 13 (19.7%) | 7 (10.6%) | 0.22 |
No. of events, total (%) | 18 (27.3%) | 8 (12.1%) | 0.05 |
No. of events, 0–6 hours (%) | 7 (10.6%) | 3 (4.5%) | 0.32 |
No. of events, 6–12 hours (%) | 5 (7.6%) | 2 (3.0%) | 0.44 |
No. of events, 12–24 hours (%) | 6 (9.1%) | 3 (4.5%) | 0.49 |
Severe hypoglycemia, <40 mg/dL | | | |
No. of patients (%) | 2 (3.0%) | 1 (1.5%) | >0.99 |
No. of events, total (%) | 2 (3.0%) | 1 (1.5%) | >0.99 |
Potassium‐lowering effects | | | |
Minimum K+ after therapy, mmol/L, mean (SD) | 4.9 (0.7) | 4.8 (0.7) | 0.84 |
Minimum serum K+ <5.0 mmol/L, n (%) | 37 (56.1%) | 35 (53.0%) | 0.32 |
Average K+ decrease, mmol/L, mean (SD) | 1.35 (0.97) | 1.34 (0.94) | 0.94 |
Repeat treatment given, n (%) | 24 (36.4%) | 24 (36.4%) | >0.99 |
Potassium lowering was comparable between the dosing strategies in every measure assessed (Table 2). Multivariable analysis revealed that baseline BG <140 mg/dL (adjusted odds ratio: 4.3, 95% confidence interval [CI]: 1.4‐13.7, P = 0.01) and female gender (adjusted odds ratio: 3.2, 95% CI: 1.1‐9.1, P = 0.03) were associated with an increased risk of hypoglycemia. Other factors, including administration of insulin beyond that given for hyperkalemia treatment and use of additional hypoglycemic agents, were not associated with the development of hypoglycemia, consistent with previous reports.[6]
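As a minimal sketch of how adjusted odds ratios and confidence intervals of this kind can be derived from a binomial logistic model, the example below uses statsmodels rather than SPSS, with randomly generated placeholder data and hypothetical predictor names; it is not the study dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Placeholder per-patient data: binary hypoglycemia outcome and two predictors.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "hypoglycemia": rng.integers(0, 2, 132),
    "baseline_bg_lt_140": rng.integers(0, 2, 132),
    "female": rng.integers(0, 2, 132),
})

X = sm.add_constant(df[["baseline_bg_lt_140", "female"]])
fit = sm.Logit(df["hypoglycemia"], X).fit(disp=False)

# Exponentiating the coefficients yields adjusted odds ratios and 95% CIs.
odds_ratios = np.exp(fit.params)
conf_int = np.exp(fit.conf_int())
print(odds_ratios, conf_int, sep="\n")
```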
CONCLUSIONS
Our findings indicate that a weight‐based approach to insulin dosing when treating hyperkalemia may reduce hypoglycemia without sacrificing potassium‐lowering efficacy. Female patients and those with baseline glucose values <140 mg/dL were at increased risk of hypoglycemia in this cohort. Based on these results, a weight‐based dosing strategy of 0.1 U/kg IV insulin, up to a maximum of 10 U, should be considered, although further research is needed to validate these findings.
This study was strengthened by the inclusion of all patients regardless of baseline glucose, baseline potassium, administration of other insulins, level of renal impairment, or symptoms of hypoglycemia or cardiac dysfunction, thus providing a broad representation of patients treated for acute hyperkalemia. This pilot study was limited in scope by data collection for only 66 randomly selected patients per group rather than the entire eligible population. In addition, the study used patient information from a single site, with few ethnicities represented; validation in a larger sample should include greater variation in the patients served. Our use of a hypoglycemia definition extending up to 24 hours after treatment may also be criticized; however, this approach is similar to previous reports and allows a liberal window for follow‐up glucose monitoring to be captured.[7]
Because of its small sample size and the low event rate, this study was unable to draw conclusions about the ability of weight‐based insulin dosing to affect severe hypoglycemic events (<40 mg/dL); a study of more than 400 patients would be necessary to detect statistically significant differences in the risk of severe hypoglycemia. Furthermore, because we did not examine the results for all patients in this cohort, we cannot conclusively determine the impact of treatment. The retrospective design also limited our ability to capture hypoglycemic episodes during periods in which BG levels were not recorded. Additionally, the change to the post‐treatment glucose monitoring protocol may have affected the observed incidence of hypoglycemia in 2 ways. First, early and unrecorded interventions may have occurred in patients trending toward hypoglycemia. Second, the longer time to follow‐up in the non–weight‐based group may have led to additional hypoglycemic episodes being missed. A prospective trial could provide more comprehensive information about patient response to weight‐based versus traditional dosing of IV insulin for hyperkalemia. Further investigations into reducing the adverse effects of insulin when treating hyperkalemia should focus on female patients and those with lower baseline BG values. Additionally, as newer agents to treat hyperkalemia are developed and tested, the approach to management should be revisited.[8, 9, 10]
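As a rough illustration of how such a sample-size requirement can be estimated, the sketch below applies a standard two-proportion power calculation to the severe hypoglycemia rates observed here (3.0% vs. 1.5%); the resulting figure depends entirely on the assumed rates, power, and significance level, so it should be read only as an order-of-magnitude check.

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Observed severe hypoglycemia rates: 2/66 (3.0%) vs. 1/66 (1.5%).
effect = proportion_effectsize(0.030, 0.015)  # Cohen's h for two proportions
n_per_group = NormalIndPower().solve_power(effect_size=effect,
                                           alpha=0.05, power=0.80, ratio=1.0)
print(round(n_per_group))  # several hundred patients per group under these assumptions
```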
Disclosures: Garry S. Tobin, MD, lectures or is on the speakers bureau for Eli Lilly, Janssen, Boehringer Ingelheim, and Novo Nordisk, and performs data safety monitoring for Novo Nordisk. The authors report no other potential conflicts of interest.
REFERENCES
- Hyperkalemia in hospitalized patients: causes, adequacy of treatment, and results of an attempt to improve physician compliance with published therapy guidelines. Arch Intern Med. 1998;158:917–924.
- 2010 American Heart Association guidelines for cardiopulmonary resuscitation and emergency cardiovascular care. Part 12.6: cardiac arrest associated with life‐threatening electrolyte disturbances. Circulation. 2010;122:S829–S861.
- Centers for Medicare 29(2):101–107.
- Incidence of hypoglycemia following insulin‐based acute stabilization of hyperkalemia treatment. J Hosp Med. 2012;7(3):239–242.
- Hypoglycemia in the treatment of hyperkalemia with insulin in patients with end‐stage renal disease. Clin Kidney J. 2014;7(3):248–250.
- Prediction and prevention of treatment‐related inpatient hypoglycemia. J Diabetes Sci Technol. 2012;6(2):302–309.
- Effect of sodium zirconium cyclosilicate on potassium lowering for 28 days among outpatients with hyperkalemia: the HARMONIZE randomized clinical trial. JAMA. 2014;312(21):2223–2233.
- Zirconium cyclosilicate in hyperkalemia. N Engl J Med. 2015;372:222–231.
- Patiromer in patients with kidney disease and hyperkalemia receiving RAAS inhibitors. N Engl J Med. 2015;372:211–221.