
Steroids underused in bacterial meningitis despite low risk


– Physicians often forgo steroids when treating bacterial meningitis, even though the benefits clearly outweigh the risks, Cinthia Gallegos, MD, reported during an oral presentation at an annual meeting on infectious diseases.

In a recent multicenter retrospective cohort study, only 40% of adults with bacterial meningitis received steroids within 4 hours of hospital admission, as recommended by the European Society of Clinical Microbiology and Infectious Diseases (ESCMID), and only 14% received steroids concomitantly with or 10-20 minutes prior to antibiotic initiation, as recommended by the Infectious Diseases Society of America (IDSA), said Dr. Gallegos, an infectious disease fellow at the University of Texas, Houston.

“Steroids are being underutilized in our patient population,” she said. “And when steroids are used, they are being used later than is recommended.”

To evaluate the prevalence of guideline-concordant steroid use, Dr. Gallegos and her associates analyzed the medical records of 120 adults with culture-confirmed, community-acquired bacterial meningitis treated at 10 Houston-area hospitals between 2008 and 2016.

The median duration of steroid therapy was 4 days, which is consistent with IDSA guidelines, she noted.

Five of the 120 patients (4%) developed delayed cerebral thrombosis, a complication that has been linked to adjunctive steroid therapy. Three had Streptococcus pneumoniae meningitis, one had methicillin-resistant Staphylococcus aureus meningitis, and one had Listeria meningitis. All had received either dexamethasone monotherapy or dexamethasone plus methylprednisolone within 4 hours of antibiotic initiation. Their clinical course initially improved, and CT and MRI findings were normal, but their condition deteriorated 5-12 days later. “Repeat imaging showed thrombosis of different areas of the brain,” Dr. Gallegos said. Two patients died, two developed moderate or severe disability, and one fully recovered. The patients ranged in age from 26 to 69 years; three were male, and two were female.

The 4% rate closely resembles what is seen in the Netherlands, said Diederik van de Beek, MD, PhD, of the Academic Medical Center in Amsterdam, who comoderated the session at the combined annual meetings of the Infectious Diseases Society of America, the Society for Healthcare Epidemiology of America, the HIV Medicine Association, and the Pediatric Infectious Diseases Society. “We have some recent data where we did autopsies of cases and we saw a huge amount of bacterial fragments around the blood vessels,” he said. “We have seen this in previous autopsy studies, but here it was a massive amount of bacterial fragments.”

Researchers have suggested that delayed cerebral thrombosis in bacterial meningitis results from increases in C5a and C5b-9 levels in the cerebrospinal fluid and from an increase in the tissue factor VII pathway, Dr. Gallegos said.

Researchers think that these patients historically developed vasculitis, but that this complication “has disappeared somewhat in the dexamethasone era,” said Dr. van de Beek, lead author of the 2016 ESCMID guidelines on bacterial meningitis. “It appears that some patients are ‘pro-inflammatory’ and still react 7-9 days after treatment,” he said. “The difficult question is whether we give 4 days of steroids or longer. A clinical trial is not feasible, so we [recommend] 4 days.”

Left untreated, bacterial meningitis is fatal in up to 70% of cases, and about one in five survivors faces limb loss or neurologic disability, according to the Centers for Disease Control and Prevention. The advent of penicillin and other antibiotics dramatically improved survival, but death rates remained around 10% for meningitis associated with Neisseria meningitidis and Haemophilus influenzae infection, and often exceeded 30% for S. pneumoniae meningitis. “That’s important because besides antibiotics, the only treatment that decreases mortality has been shown to be steroids,” Dr. Gallegos said.

High-quality evidence supports their use. In a double-blind, randomized, multicenter trial of 301 adults with bacterial meningitis, adjunctive dexamethasone was associated with an approximately 50% relative reduction in mortality, compared with placebo (N Engl J Med. 2002 Nov 14;347[20]:1549-56). Other data confirm that steroids do not prevent vancomycin from concentrating in cerebrospinal fluid and do not increase the risk of hippocampal apoptosis. But although both IDSA and ESCMID endorse steroids as adjunctive therapy to help control intracranial pressure in patients with bacterial meningitis, studies have shown much higher rates of steroid use in the Netherlands, Sweden, and Denmark than in the United States.

The Grant A. Starr Foundation provided funding. The investigators had no conflicts of interest.

AT IDWEEK 2017

Key clinical point: Steroids were underutilized in patients with bacterial meningitis despite their low risk of causing delayed cerebral thrombosis.

Major finding: Five of 120 patients (4%) developed delayed cerebral thrombosis. Only 40% received steroids within the maximum recommended time frame.

Data source: A retrospective multicenter study of 120 adults with culture-confirmed bacterial meningitis.

Disclosures: The Grant A. Starr Foundation provided funding. The investigators had no conflicts of interest.


Negative nasal swabs reliably predicted no MRSA infection


Only 0.2% of intensive care unit patients developed MRSA infections after testing negative on nasal surveillance swabs, said Darunee Chotiprasitsakul, MD, of Johns Hopkins Medicine in Baltimore.

But physicians often prescribed vancomycin anyway, accumulating nearly 7,400 potentially avoidable treatment days over a 19-month period, she said during an oral presentation at an annual meeting on infectious diseases.

Current guidelines recommend empiric vancomycin to cover MRSA infection when ill patients have a history of MRSA colonization or recent hospitalization or exposure to antibiotics. Patients whose nasal screening swabs are negative for MRSA have been shown to be at low risk of subsequent infection, but guidelines don’t address how to use swab results to guide decisions about empiric vancomycin, Dr. Chotiprasitsakul said.

Therefore, she and her associates studied 11,882 adults without historical MRSA infection or colonization who received nasal swabs for routine surveillance in adult ICUs at Johns Hopkins. A total of 441 patients (4%) had positive swabs, while 96% tested negative.

Among patients with negative swabs, only 25 (0.22%) developed MRSA infection requiring treatment. Thus, the negative predictive value of a nasal swab for MRSA was 99%, making the probability of infection despite a negative swab “exceedingly low,” Dr. Chotiprasitsakul said.
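
As a rough check, that negative predictive value follows directly from the counts reported above. The short sketch below redoes the arithmetic with those published figures; the exact denominators used in the study may differ slightly.

    # Sketch: negative predictive value (NPV) of the MRSA nasal swab,
    # using the approximate counts reported in the article.
    total_patients = 11_882
    positive_swabs = 441
    negative_swabs = total_patients - positive_swabs            # 11,441
    infections_despite_negative_swab = 25
    true_negatives = negative_swabs - infections_despite_negative_swab
    npv = true_negatives / negative_swabs
    print(f"NPV = {npv:.1%}")   # about 99.8%, reported as 99% in the presentation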

But clinicians seemed not to use negative swab results to curtail vancomycin therapy, she found. Rates of empiric vancomycin use were 36% among patients with positive swabs and 39% among those with negative swabs. Over 19 months, ICU patients received 7,371 avoidable days of vancomycin, a median of 3 days per patient.

Matching patients by ICU and days at risk identified no significant predictors of MRSA infection, Dr. Chotiprasitsakul said. Johns Hopkins Medicine has robust infection control practices, high compliance with hand hygiene and contact precautions, and low rates of nosocomial MRSA transmission, she noted. The predictive value of a negative MRSA nasal swab could be lower at institutions where that isn’t the case, she said.

Johns Hopkins is working to curtail unnecessary use of vancomycin, said senior author Sara Cosgrove, MD, professor of medicine in infectious diseases and director of the department of antimicrobial stewardship. The team has added the findings to its guidelines for antibiotic use, which are available in an app for Johns Hopkins providers, she said in an interview.

The stewardship team also highlights the data when discussing starting and stopping vancomycin in patients at very low risk for MRSA infections, she said. “In general, providers have responded favorably to acting upon this new information,” Dr. Cosgrove noted.

Johns Hopkins continues to track median days of vancomycin use per patient and per 1,000 days in its units. “[We] will assess if there is an impact on vancomycin use over the coming year,” said Dr. Cosgrove.

The investigators had no conflicts of interest. The event marked the combined annual meetings of the Infectious Diseases Society of America, the Society for Healthcare Epidemiology of America, the HIV Medicine Association, and the Pediatric Infectious Diseases Society.


AT IDWEEK 2017


Key clinical point: Only 0.2% of ICU patients with negative surveillance nasal swabs developed MRSA infections during the same hospitalization.

Major finding: The negative predictive value of a nasal swab for MRSA was 99%.

Data source: A study of 11,882 adults without historical MRSA infection or colonization who received nasal swabs for routine surveillance.

Disclosures: The investigators had no conflicts of interest.


VA study finds high MRSA infection risk among those colonized with the bacterium


– Patients colonized with MRSA are at high risk of MRSA infection in both the predischarge and postdischarge periods, results from an 8-year Veterans Affairs study showed.

“MRSA colonization is recognized as being a strong predictor of subsequent infection,” Richard E. Nelson, PhD, said at an annual scientific meeting on infectious diseases. “What’s less understood is, are there differences in infection rates among patients who are colonized at different times? And, is there a difference between patients who import colonization with them to a hospital versus those who acquire it during a hospital stay? In addition, infection control efforts mainly focus on the predischarge time period. What about infections that develop post discharge?”

In an effort to investigate these questions, Dr. Nelson of the VA Salt Lake City Healthcare System and his associates evaluated more than 1.3 million acute care inpatient admissions to 125 VA hospitals nationwide from January 2008 through December 2015 in which surveillance testing for MRSA carriage was performed.

The researchers restricted the analysis to admissions of individuals with at least 365 days of VA activity prior to admission and categorized them into three groups: no colonization, defined as those with no positive surveillance tests (n = 1,196,928); importation, defined as those who tested positive for MRSA colonization on admission (n = 95,833); and acquisition, defined as those who did not test positive for MRSA on admission but tested positive on a subsequent surveillance test during the admission (n = 15,146). Next, they captured MRSA infections in these individuals prior to discharge and at 30 and 90 days post discharge. Infections were defined as positive MRSA cultures taken from sterile sites, including blood, catheter site, or bone.

Overall, patients were in their mid-60s, and those who imported MRSA and those who acquired it were more likely to be male, less likely to be married, and more likely to lack health insurance. The acquirers had by far the highest rates of predischarge infections, which peaked in 2010 and declined through 2015, said Dr. Nelson, who also holds a faculty position in the division of epidemiology of the University of Utah’s department of internal medicine. Specifically, the proportions of MRSA infections occurring before discharge vs. within 30 days after discharge were 40.4% vs. 59.6% in the no-colonization group, 63% vs. 37% in the importation group, and 80.8% vs. 19.2% in the acquisition group.

He also reported that the proportions of MRSA infections occurring before discharge vs. within 90 days after discharge were 20.5% vs. 79.5% in the no-colonization group, 47.3% vs. 52.7% in the importation group, and 70.5% vs. 29.5% in the acquisition group. The time from acquisition to infection was a mean of 8.7 days in the 30-day analysis and a mean of 22.4 days in the 90-day analysis.

Multivariate logistic regression revealed that the impact of colonization status on infection was greater in the acquisition group than in the importation group. Specifically, the odds ratio (OR) for developing an MRSA infection in the importation group was 29.22 in the predischarge period, 10.87 at 30 days post discharge, and 7.64 at 90 days post discharge (P less than .001 for all). In the acquisition group, the OR was 85.19 in the predischarge period, 13.01 at 30 days post discharge, and 8.26 at 90 days post discharge (P less than .001 for all).
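
For readers less accustomed to odds ratios, the sketch below shows how an unadjusted OR would be computed from a simple 2x2 table. The counts are invented purely for illustration; the ORs above are adjusted estimates from the regression model, not raw calculations like this one.

    # Illustration only: an unadjusted odds ratio from a 2x2 table,
    # with made-up counts (not the study's data).
    def odds_ratio(exposed_events, exposed_nonevents,
                   unexposed_events, unexposed_nonevents):
        odds_exposed = exposed_events / exposed_nonevents
        odds_unexposed = unexposed_events / unexposed_nonevents
        return odds_exposed / odds_unexposed

    # Hypothetical example: predischarge infections among patients who acquired
    # colonization vs. patients never found to be colonized.
    print(round(odds_ratio(300, 14_846, 800, 1_196_128), 1))   # ~30.2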

Dr. Nelson acknowledged certain limitations of the study, including the fact that it identified only postdischarge infections detected in a VA facility. “This is likely an underestimate of postdischarge infections, because we’re missing the infections that occur in non-VA facilities,” he said at the event, which marked the combined annual meetings of the Infectious Diseases Society of America, the Society for Healthcare Epidemiology of America, the HIV Medicine Association, and the Pediatric Infectious Diseases Society. “Also, patients can be colonized in many different body locations, but the VA protocol is that the surveillance test be done in the nostrils. So we may have misclassified patients who were colonized in a different body location as being uncolonized, when in fact they were colonized.”

The study was funded by a grant from the VA. Dr. Nelson reported having no financial disclosures.


REPORTING FROM ID WEEK 2017


Key clinical point: About half of postdischarge MRSA infections were in patients who acquired the organism before discharge.

Major finding: The proportions of MRSA infections occurring before discharge vs. within 30 days after discharge were 40.4% vs. 59.6% in the no-colonization group, 63% vs. 37% in the importation group, and 80.8% vs. 19.2% in the acquisition group.

Study details: An analysis of more than 1.3 million acute care inpatient admissions to 125 VA hospitals nationwide from January 2008 through December 2015.

Disclosures: The study was funded by a grant from the VA. Dr. Nelson reported having no financial disclosures.


VIDEO: Intermittent furosemide during acute HFpEF favors kidneys


– Patients with heart failure with preserved ejection fraction who were hospitalized for acute decompensation had a significantly smaller rise in serum creatinine when treated with intermittent bolus doses of furosemide than when treated with a continuous furosemide infusion, in a single-center, randomized trial of 90 patients.

Intermittent furosemide also resulted in many fewer episodes of worsening renal function. In the trial, 12% of patients who received bolus furosemide doses developed worsening renal function during hospitalization compared with 36% of patients treated with a continuous furosemide infusion, Kavita Sharma, MD, said at the annual scientific meeting of the Heart Failure Society of America.

While acknowledging that this finding is preliminary because it was made in a relatively small, single-center study, “I’d be cautious about continuous infusion” in acute decompensated patients with heart failure with preserved ejection fraction (HFpEF); “bolus is preferred,” Dr. Sharma said in a video interview.

Results from the prior Diuretic Optimization Strategies Evaluation (DOSE) trial, published in 2011, had shown no significant difference in renal function in hospitalized heart failure patients randomized to receive either bolus or continuous furosemide, but that study largely enrolled patients with heart failure with reduced ejection fraction (HFrEF) (N Engl J Med. 2011 Mar 3;364[9]:797-805).

“When patients with HFpEF are hospitalized with acute heart failure, there is a high rate of kidney injury that often results in slowing diuresis, leading to longer hospital stays. With adjustment for changes in blood pressure and volume of diuresis, we saw a fourfold increase in worsening renal failure [with continuous infusion], so you should think twice before using continuous dosing,” said Dr. Sharma, a heart failure cardiologist at Johns Hopkins Medicine in Baltimore.

She presented results from Diuretics and Dopamine in Heart Failure With Preserved Ejection Fraction (ROPA-DOP), which randomized 90 hospitalized heart failure patients with a left ventricular ejection fraction of at least 50% and an estimated glomerular filtration rate of more than 15 mL/min/1.73 m2. The enrolled patients averaged 66 years old, 61% were women, their average body mass index was 41 kg/m2, and their average estimated glomerular filtration rate was 58 mL/min/1.73 m2.

The study’s primary endpoint was the percent change in creatinine during hospitalization, which rose by an average of 5% in patients who received intermittent bolus furosemide and by an average of 16% in patients who received a continuous infusion, a statistically significant difference. In a regression analysis that controlled for between-group differences in age, sex, race, body mass index, smoking status, changes in systolic blood pressure, heart rate, fluid balance after 72 hours, and other variables, patients treated with continuous furosemide infusion averaged an 11% greater increase in serum creatinine, Dr. Sharma reported. After similar adjustments, the secondary endpoint of worsening renal function was more than four times as likely to occur in patients on continuous infusion as in those who received intermittent bolus treatment, she said.

A second aspect of the ROPA-DOP trial randomized the same patients to receive either low-dose dopamine (3 mcg/kg per minute) or placebo during hospitalization. The results showed that low-dose dopamine had no significant impact on either the change in creatinine levels or the incidence of worsening renal function, compared with placebo, though dopamine treatment was associated with a nonsignificant trend toward somewhat greater diuresis. These results were consistent with prior findings from the Renal Optimization Strategies Evaluation (ROSE) trial (JAMA. 2013 Nov 18;310[23]:2533-43), which enrolled a mixed population of patients with HFpEF or HFrEF but predominantly patients with HFrEF, Dr. Sharma noted.

“It was a neutral finding [for dopamine in ROPA-DOP], and while there was no harm from dopamine there was clearly no benefit,” she said. It is possible that HFpEF patients with right ventricular dysfunction secondary to pulmonary hypertension might benefit from low-dose dopamine, but this needs further study, Dr. Sharma said.

Bolus furosemide became standard following DOSE report

In the Diuretic Optimization Strategies Evaluation (DOSE) trial, we enrolled heart failure patients with a mix of reduced ejection fraction and preserved ejection fraction. The DOSE results showed no relationship between ejection fraction and the response to furosemide treatment by intermittent bolus or by continuous infusion in patients hospitalized with acute decompensated heart failure. The results also showed that continuous infusion was no better than intermittent bolus treatment, and following our report in 2011 (N Engl J Med. 2011 Mar 3;364[9]:797-805), many centers that had previously relied on continuous furosemide switched to use of bolus doses primarily because continuous infusion is much less convenient.

But it is important to keep in mind that trial results focus on averages and populations of patients. Anecdotally, we see some acute heart failure patients who seem to respond better to continuous infusion, and so some clinicians switch patients who do not respond well to bolus treatment to continuous infusion. In DOSE, we only tested the efficacy of the initial strategy; we have no evidence on whether or not changing the dosing strategy helps patients who do not respond adequately to an initial strategy of intermittent bolus doses.

G. Michael Felker, MD, professor of medicine at Duke University, Durham, N.C., made these comments in an interview. He has been a consultant to Amgen, Bristol-Myers Squibb, GlaxoSmithKline, Medtronic, MyoKardia, Novartis, Stealth, and Trevena.

Meeting/Event
Publications
Topics
Sections
Meeting/Event
Meeting/Event
Related Articles
Body

 

In the Diuretic Optimization Strategies Evaluation (DOSE) trial, we enrolled heart failure patients with a mix of reduced ejection fraction and preserved ejection fraction. The DOSE results showed no relationship between ejection fraction and the response to furosemide treatment by intermittent bolus or by continuous infusion in patients hospitalized with acute decompensated heart failure. The results also showed that continuous infusion was no better than intermittent bolus treatment, and following our report in 2011 (N Engl J Med. 2011 Mar 3;364[9]:797-805), many centers that had previously relied on continuous furosemide switched to use of bolus doses primarily because continuous infusion is much less convenient.

Mitchel L. Zoler/Frontline Medical News
Dr. G. Michael Felker

 

But it is important to keep in mind that trial results focus on averages and populations of patients. Anecdotally, we see some acute heart failure patients who seem to respond better to continuous infusion, and so some clinicians switch patients who do not respond well to bolus treatment to continuous infusion. In DOSE, we only tested the efficacy of the initial strategy; we have no evidence on whether or not changing the dosing strategy helps patients who do not respond adequately to an initial strategy of intermittent bolus doses.

G. Michael Felker, MD , professor of medicine at Duke University, Durham, N.C., made these comments in an interview. He has been a consultant to Amgen, Bristol-Myers Squibb, GlaxoSmithKline, Medtronic, MyoKardia, Novartis, Stealth, and Trevena.

Body

 

In the Diuretic Optimization Strategies Evaluation (DOSE) trial, we enrolled heart failure patients with a mix of reduced ejection fraction and preserved ejection fraction. The DOSE results showed no relationship between ejection fraction and the response to furosemide treatment by intermittent bolus or by continuous infusion in patients hospitalized with acute decompensated heart failure. The results also showed that continuous infusion was no better than intermittent bolus treatment, and following our report in 2011 (N Engl J Med. 2011 Mar 3;364[9]:797-805), many centers that had previously relied on continuous furosemide switched to use of bolus doses primarily because continuous infusion is much less convenient.

Mitchel L. Zoler/Frontline Medical News
Dr. G. Michael Felker

 

But it is important to keep in mind that trial results focus on averages and populations of patients. Anecdotally, we see some acute heart failure patients who seem to respond better to continuous infusion, and so some clinicians switch patients who do not respond well to bolus treatment to continuous infusion. In DOSE, we only tested the efficacy of the initial strategy; we have no evidence on whether or not changing the dosing strategy helps patients who do not respond adequately to an initial strategy of intermittent bolus doses.

G. Michael Felker, MD , professor of medicine at Duke University, Durham, N.C., made these comments in an interview. He has been a consultant to Amgen, Bristol-Myers Squibb, GlaxoSmithKline, Medtronic, MyoKardia, Novartis, Stealth, and Trevena.

Title
Bolus furosemide became standard following DOSE report
Bolus furosemide became standard following DOSE report

– Patients with heart failure with preserved ejection fraction who were hospitalized for acute decompensation had a significantly smaller rise in serum creatinine when treated with intermittent, bolus doses of furosemide, compared with patients who received a continuous furosemide infusion in a single-center, randomized trial with 90 patients.

Intermittent furosemide also resulted in many fewer episodes of worsening renal function. In the trial, 12% of patients who received bolus furosemide doses developed worsening renal function during hospitalization compared with 36% of patients treated with a continuous furosemide infusion, Kavita Sharma, MD, said at the annual scientific meeting of the Heart Failure Society of America.

While acknowledging that this finding is preliminary because it was made in a relatively small, single-center study, “I’d be cautious about continuous infusion” in acute decompensated patients with heart failure with preserved ejection fraction (HFpEF); “bolus is preferred,” Dr. Sharma said in a video interview.

Results from the prior Diuretic Optimization Strategies Evaluation (DOSE) trial, published in 2011, had shown no significant difference in renal function in hospitalized heart failure patients randomized to receive either bolus or continuous furosemide, but that study largely enrolled patients with heart failure with reduced ejection fraction (HFrEF) (N Engl J Med. 2011 Mar 3;364[9]:797-805).

“When patients with HFpEF are hospitalized with acute heart failure there is a high rate of kidney injury, that often results in slowing diuresis leading to longer hospital stays. With adjustment for changes in blood pressure and volume of diuresis we saw a fourfold increase in worsening renal failure [with continuous infusion], so you should think twice before using continuous dosing,” said Dr. Sharma, a heart failure cardiologist at Johns Hopkins Medicine in Baltimore.

She presented results from Diuretics and Dopamine in Heart Failure With Preserved Ejection Fraction (ROPA-DOP), which randomized 90 hospitalized heart failure patients with a left ventricular ejection fraction of at least 50% and an estimated glomerular filtration rate of more than 15 mL/min/1.73 m2. The enrolled patients averaged 66 years old, 61% were women, their average body mass index was 41 kg/m2, and their average estimated glomerular filtration rate was 58 mL/min/1.73 m2.

The study’s primary endpoint was percent change in creatinine during hospitalization, which rose by an average 5% in the patients who received intermittent bolus furosemide and by an average 16% in patient who received a continuous infusion, a statistically significant difference. In a regression analysis that controlled for between-group differences in patient’s age, sex, race, body mass index, smoking status, changes in systolic blood pressure, heart rate, fluid balance after 72 hours, and other variables, patients treated with continuous furosemide infusion averaged an 11% greater increase in serum creatinine, Dr. Sharma reported. After similar adjustments, the secondary endpoint rate of worsening renal function was more than four times more likely to occur in the patients on continuous infusion compared with those who received intermittent bolus treatment, she said.

A second aspect of the ROPA-DOP trial randomized the same patients to received either low dose (3 mcg/kg per min) dopamine or placebo during hospitalization. The results showed that low-dose dopamine had no significant impact on either change in creatinine levels or on the incidence of worsening renal function compared with placebo, though dopamine treatment did link with a nonsignificant trend toward somewhat greater diuresis. These results were consistent with prior findings in the Renal Optimization Strategies Evaluation (ROSE) trial (JAMA. 2013 Nov 18;310[23]:2533-43), which used a mixed population of patients with HFpEF or HFrEF but predominantly patients with HFrEF, Dr. Sharma noted.

“It was a neutral finding [for dopamine in ROPA-DOP], and while there was no harm from dopamine there was clearly no benefit,” she said. It is possible that HFpEF patients with right ventricular dysfunction secondary to pulmonary hypertension might benefit from low-dose dopamine, but this needs further study, Dr. Sharma said.

– Patients with heart failure with preserved ejection fraction who were hospitalized for acute decompensation had a significantly smaller rise in serum creatinine when treated with intermittent, bolus doses of furosemide, compared with patients who received a continuous furosemide infusion in a single-center, randomized trial with 90 patients.

Intermittent furosemide also resulted in many fewer episodes of worsening renal function. In the trial, 12% of patients who received bolus furosemide doses developed worsening renal function during hospitalization compared with 36% of patients treated with a continuous furosemide infusion, Kavita Sharma, MD, said at the annual scientific meeting of the Heart Failure Society of America.

While acknowledging that this finding is preliminary because it was made in a relatively small, single-center study, “I’d be cautious about continuous infusion” in acute decompensated patients with heart failure with preserved ejection fraction (HFpEF); “bolus is preferred,” Dr. Sharma said in a video interview.

Results from the prior Diuretic Optimization Strategies Evaluation (DOSE) trial, published in 2011, had shown no significant difference in renal function in hospitalized heart failure patients randomized to receive either bolus or continuous furosemide, but that study largely enrolled patients with heart failure with reduced ejection fraction (HFrEF) (N Engl J Med. 2011 Mar 3;364[9]:797-805).

“When patients with HFpEF are hospitalized with acute heart failure there is a high rate of kidney injury, that often results in slowing diuresis leading to longer hospital stays. With adjustment for changes in blood pressure and volume of diuresis we saw a fourfold increase in worsening renal failure [with continuous infusion], so you should think twice before using continuous dosing,” said Dr. Sharma, a heart failure cardiologist at Johns Hopkins Medicine in Baltimore.

She presented results from Diuretics and Dopamine in Heart Failure With Preserved Ejection Fraction (ROPA-DOP), which randomized 90 hospitalized heart failure patients with a left ventricular ejection fraction of at least 50% and an estimated glomerular filtration rate of more than 15 mL/min/1.73 m2. The enrolled patients averaged 66 years old, 61% were women, their average body mass index was 41 kg/m2, and their average estimated glomerular filtration rate was 58 mL/min/1.73 m2.

The study’s primary endpoint was percent change in creatinine during hospitalization, which rose by an average 5% in the patients who received intermittent bolus furosemide and by an average 16% in patient who received a continuous infusion, a statistically significant difference. In a regression analysis that controlled for between-group differences in patient’s age, sex, race, body mass index, smoking status, changes in systolic blood pressure, heart rate, fluid balance after 72 hours, and other variables, patients treated with continuous furosemide infusion averaged an 11% greater increase in serum creatinine, Dr. Sharma reported. After similar adjustments, the secondary endpoint rate of worsening renal function was more than four times more likely to occur in the patients on continuous infusion compared with those who received intermittent bolus treatment, she said.

A second aspect of the ROPA-DOP trial randomized the same patients to receive either low-dose dopamine (3 mcg/kg per minute) or placebo during hospitalization. The results showed that low-dose dopamine had no significant impact on either change in creatinine levels or the incidence of worsening renal function compared with placebo, although dopamine treatment was associated with a nonsignificant trend toward somewhat greater diuresis. These results were consistent with prior findings in the Renal Optimization Strategies Evaluation (ROSE) trial (JAMA. 2013 Nov 18;310[23]:2533-43), which enrolled a mixed population of patients with HFpEF or HFrEF, predominantly the latter, Dr. Sharma noted.

“It was a neutral finding [for dopamine in ROPA-DOP], and while there was no harm from dopamine there was clearly no benefit,” she said. It is possible that HFpEF patients with right ventricular dysfunction secondary to pulmonary hypertension might benefit from low-dose dopamine, but this needs further study, Dr. Sharma said.

AT THE HFSA ANNUAL SCIENTIFIC MEETING

Key clinical point: Furosemide delivered as intermittent bolus injections resulted in a smaller rise in serum creatinine and less worsening renal function compared with a continuous infusion in patients hospitalized with acute decompensation secondary to heart failure with preserved ejection fraction.

Major finding: Serum creatinine rose by an average 5% with intermittent bolus furosemide and by 16% with continuous infusion.

Data source: ROPA-DOP, a single-center randomized trial with 90 patients.

Disclosures: Dr. Sharma had no disclosures.


Sneak Peek: Journal of Hospital Medicine – Oct. 2017

Article Type
Changed
Fri, 09/14/2018 - 11:57
Sound and light levels are similarly disruptive in ICU and non-ICU wards

 

BACKGROUND: Hospitalized patients frequently report poor sleep, partly due to the inpatient environment. In-hospital sound and light levels are not well described on non–intensive care unit (non-ICU) wards. Although non-ICU wards may have lower average and peak noise levels, sound level changes (SLCs), which are important in disrupting sleep, may still be a substantial problem.

OBJECTIVE: To compare ambient sound and light levels, including SLCs, in ICU and non-ICU environments.

DESIGN: Observational study.

SETTING: Tertiary-care hospital.

MEASUREMENTS: Sound measurements, sampled at 0.5 Hz, were analyzed to provide average hourly sound levels, sound peaks, and SLCs ≥ 17.5 decibels (dB); an illustrative sketch of the SLC tally follows the abstract. For light data, measurements taken at 2-minute intervals provided average and maximum light levels.

RESULTS: The ICU rooms were louder than non-ICU wards; hourly averages ranged from 56.1 ± 1.3 dB to 60.3 ± 1.7 dB in the ICU, 47.3 ± 3.7 dB to 55.1 ± 3.7 dB on the telemetry floor, and 44.6 ± 2.1 dB to 53.7 ± 3.6 dB on the general ward. However, SLCs ≥ 17.5 dB were not statistically different (ICU, 203.9 ± 28.8 times; non-ICU, 270.9 ± 39.5; P = 0.11). In both ICU and non-ICU wards, average daytime light levels were less than 250 lux, and peak light levels occurred in the afternoon and early evening.

CONCLUSIONS: While quieter, non-ICU wards have as many SLCs as ICUs do, which has implications for quality improvement measurements. Efforts to further reduce average noise levels might be counterproductive. Light levels in the hospital (ICU and non-ICU) may not be optimal for maintenance of a normal circadian rhythm for most people.

Read the entire article in the Journal of Hospital Medicine.
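A minimal computational sketch of the sound level change (SLC) tally described above is shown below; the threshold rule (difference between consecutive 0.5-Hz samples), the simple arithmetic averaging of decibel values, and the sample data are illustrative assumptions, not the authors’ analysis code.

```python
# Minimal sketch: tally sound level changes (SLCs) >= 17.5 dB in a 0.5-Hz recording.
# Assumption for illustration: an SLC is a difference of at least 17.5 dB between
# consecutive samples. This is not the authors' published analysis code.

from typing import List

SAMPLE_INTERVAL_S = 2.0   # 0.5 Hz sampling -> one reading every 2 seconds
SLC_THRESHOLD_DB = 17.5

def count_slcs(levels_db: List[float], threshold: float = SLC_THRESHOLD_DB) -> int:
    """Count jumps between consecutive samples that meet or exceed the threshold."""
    return sum(
        1 for prev, cur in zip(levels_db, levels_db[1:])
        if abs(cur - prev) >= threshold
    )

def average_level(levels_db: List[float]) -> float:
    """Simple arithmetic mean of the samples (energy-based averaging not shown)."""
    return sum(levels_db) / len(levels_db)

# Illustrative data: a quiet ward with one loud event (e.g., a dropped tray).
samples = [45.0, 46.2, 45.5, 63.5, 46.0, 45.8]
print("SLCs >= 17.5 dB:", count_slcs(samples))            # 2 (the jump up and back down)
print("Average level:  ", round(average_level(samples), 1), "dB")
```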
 

Also in JHM this month

Associations of physician empathy with patient anxiety and ratings of communication in hospital admission encounters

AUTHORS: Rachel Weiss, MD, Eric Vittinghoff, PhD, MPH, Margaret C. Fang, MD, MPH, Jenica E. W. Cimino, Kristen Adams Chasteen, MD, Robert M. Arnold, MD, Andrew D. Auerbach, MD, Wendy G. Anderson, MD, MS


A concise tool for measuring care coordination from the provider’s perspective in the hospital setting

AUTHORS: Christine M. Weston, PhD, and Sehyo Yune, MD, Eric B. Bass, MD, MPH, Scott A. Berkowitz, MD, MBA, Daniel J. Brotman, MD, Amy Deutschendorf, MS, RN, ACNS-BC, Eric E. Howell, MD, Melissa B. Richardson, MBA, Carol Sylvester, RN, MS, Albert W. Wu, MD, MPH


Post–intensive care unit psychiatric comorbidity and quality of life

AUTHORS: Sophia Wang, MD, and Chris Mosher, MD, Anthony J. Perkins, MS, Sujuan Gao, PhD, Sue Lasiter, RN, PhD, Sikandar Khan, MD, Malaz Boustani, MD, MPH, Babar Khan, MD, MS


An opportunity to improve Medicare’s planned readmissions measure

AUTHORS: Chad Ellimoottil, MD, MS, Roger K. Khouri Jr., MD, Apoorv Dhir, BA, Hechuan Hou, MS, David C. Miller, MD, MPH, James M. Dupree, MD, MPH


Against medical advice discharges

AUTHORS: David Alfandre, MD, MSPH, Jay Brenner, MD, Eberechukwu Onukwugha, MS, PhD


BP accuracy is the ghost in the machine

Article Type
Changed
Fri, 01/18/2019 - 17:04

 

– Amid all the talk about subgroup blood pressure targets and tiny differences in drug regimens at a recent hypertension meeting, there was an elephant in the room that attendees refused to ignore.

Hypertension control – the number one way to prevent cardiovascular death – depends on a skill taught to all medical practitioners but rarely performed correctly: blood pressure measurement. “We do it wrong,” said Dr. Steven Yarows, a primary care physician in Chelsea, Mich., who estimated he’s taken 44,000 blood pressures in his 36 years of practice.

M. Alexander Otto/Frontline Medical News
Dr. Steven Yarows
Inaccurate measurement is such a problem in the United States that someone in his audience half-joked that the American Heart Association should release two hypertension guidelines the next time around, one for when blood pressure is measured correctly, “and one for the rest of us.”

Everyone in medicine is taught that people should rest a bit and not talk while their blood pressure is taken; that the last measurement matters more than the first; and that most Americans need a large-sized cuff. Current guidelines are based on patients sitting for 5-10 minutes alone in a quiet room while an automatic machine averages their last 3-5 blood pressures.

But when Dr. Yarows asked his 300 or so audience members – hypertension physicians who paid to come to the meeting – how many actually followed those rules, four hands went up. It’s not good enough; “if you are going to make a diagnosis that lasts a lifetime, you have to be accurate,” he said at the joint scientific sessions of the American Heart Association Council on Hypertension, AHA Council on Kidney Cardiovascular Disease, and American Society of Hypertension.

There’s resistance. No one has a room set aside for blood pressure; staff don’t want to deal with it; and at a time when primary care doctors are nickel and dimed for everything they do, insurers haven’t stepped up to pay to make accurate blood pressure a priority.

To do it right, you have to ask patients to come in 10 minutes early and have a room set up where they can sit alone while a large oscillometric cuff takes and averages a few blood pressures at rest, Dr. Yarows said. They also need at least one 24-hour monitoring session.

“Most of the time, the patient walks over from the waiting room, they get on the scale which automatically elevates the blood pressure tremendously, and then they sit down and talk about their family while their blood pressure is being taken.” Even in normotensive patients, that alone could raise systolic pressure 20 mm Hg or more, he said. It makes one-time blood pressure pretty much meaningless.

The biggest problem is that blood pressure is hugely variable, so it’s hard to know what matters. In one of Dr. Yarows’ normotensive patients, BP varied 44 mm Hg systolic and 37 mm Hg diastolic over 24 hours. In a hypertensive patient, systolic pressure varied 62 mm Hg and diastolic 48 mm Hg over 24 hours. Another patient was 114/85 mm Hg at noon, and 159/73 mm Hg an hour later. “That’s a huge spread,” he said.

Twenty-four-hour monitoring is the only way to really know whether patients are hypertensive and need treatment. “Any person you suspect of having hypertension, before you place them on medicine, you should have 24-hour blood pressure monitoring. This is the most effective way to determine if they do have high blood pressure,” and how much it needs to be lowered, he said.

Dr. Yarows had no disclosures.
 

EXPERT ANALYSIS FROM JOINT HYPERTENSION 2017

Behavioral approach to appropriate antimicrobial prescribing in hospitals: The DUMAS study

Article Type
Changed
Fri, 09/14/2018 - 11:57

 

Clinical question: How effective is an antimicrobial stewardship approach grounded in behavioral theory and focused on preserving prescriber autonomy and participation in improving appropriateness of antimicrobial prescribing in hospitals?

Background: Antimicrobial stewardship programs aim to improve antimicrobial prescribing, which yields significant benefits, including decreased antimicrobial resistance, improved clinical outcomes (lower morbidity and mortality), and lower health care costs. Stewardship programs, however, rarely focus on the human behavior of the prescribing physicians. Changing antimicrobial prescribing is a complex behavioral process, and there is often persistent friction between prescribers and the stewardship team; in simple terms, this resistance stems from the tension created when prescribers lose autonomy.

Previous studies that used interventions based on behavioral theory found promising results, but none was conducted in a hospital setting; most focused narrowly on respiratory tract infections in outpatient clinics.

Study design: Prospective, stepped-wedge, participatory intervention study.

Setting: Seven clinical departments (two medical, three surgical, and two pediatric) in a tertiary care medical center and a general teaching hospital, both in Amsterdam. The first hospital was a 700-bed tertiary care center with salaried specialists, while the second hospital was a 550-bed general medical center with self-employed specialists.

Synopsis: During a baseline period of 16 months and an intervention period of 12 months, physicians who prescribed systemic antimicrobial drugs for any indication were included in the study. In all, 1,121 patient cases with 700 prescriptions were studied during the baseline period and 882 patient cases with 531 prescriptions were studied during the intervention period. The intervention was as follows: Prescribers were offered a free choice of how to improve their antimicrobial prescribing. They were encouraged to choose interventions with higher potential for success based on a root cause analysis of inappropriate prescribing. The study was inspired by the participatory action research paradigm, which focuses on collaboration with and empowerment of the stakeholders in the change process. In this study, prescribers were given root cause analysis reports on their prior prescribing patterns. They were then invited to choose and codevelop one or more interventions, tailored and individualized to improve their own prescribing. This approach draws on three behavioral principles: 1) respect for the prescriber’s autonomy to avoid feelings of resistance; 2) the inclination people have to value a product more highly and feel more ownership of it if they made it themselves (the IKEA effect); and 3) the tendency of people to follow up on an active and public commitment.

The primary outcome was antimicrobial appropriateness, measured with a validated appropriateness assessment instrument. One of three infectious disease specialists assessed the adult prescriptions, and one of three infectious disease/immunology pediatricians assessed the pediatric prescriptions for appropriateness. Appropriateness criteria were as follows: indication, choice of antimicrobial, dosage, route, and duration. A secondary outcome was antimicrobial consumption, reported as days of therapy per 100 admissions per month. Other outcomes were changes in specific appropriateness categories, intravenous antimicrobial consumption, consumption of specific antimicrobial subgroups, and length of hospital stay.

The mean antimicrobial appropriateness increased from 64.1% at intervention start to 77.4% at 12-month follow-up (+13.3%; relative risk, 1.17; 95% CI, 1.04-1.27), without a change in slope. Antimicrobial consumption remained the same during both study periods. Length of hospital stay did not change relative to the start of the intervention approach.

This is the first hospital antimicrobial stewardship study grounded in behavioral science; its key element was the free choice allowed to the prescribers, who made their own autonomous decisions about how to improve their prescribing. The authors hypothesize that prescribers felt relatively nonthreatened by this approach because they retained the freedom to change their own behavior if they so desired. The prescribers were given a free intervention choice; for example, they could have simply chosen “education” as an easy out. However, the root cause analysis appeared to be an impetus for prescribers to choose interventions that would be more effective. A prior study in a nursing home setting was unsuccessful; aside from other differences, that study used a predetermined set of interventions, thus lacking the autonomy and IKEA effect seen in this study.

Bottom line: Use of a behavioral approach that preserves prescriber autonomy resulted in an increase in antimicrobial appropriateness sustained for at least 12 months. The intervention is effective, inexpensive, and transferable to various health care settings.

Citation: Sikkens JJ, van Agtmael MA, Peters EJG, et al. Behavioral approach to appropriate antimicrobial prescribing in hospitals: the Dutch Unique Method for Antimicrobial Stewardship (DUMAS) participatory intervention study. JAMA Intern Med. Published online May 1, 2017. doi: 10.1001/jamainternmed.2017.0946.

Dr. Ramee is a hospitalist at Ochsner Health System, New Orleans.


Optimal empiric treatment for uncomplicated cellulitis

Article Type
Changed
Fri, 09/14/2018 - 11:57

 

Clinical question: Is empiric MRSA coverage for nonpurulent cellulitis necessary?

Background: Most nonpurulent skin and soft tissue infections are caused by beta-hemolytic streptococci and methicillin-susceptible Staphylococcus aureus. However, there is a growing incidence of community-acquired methicillin-resistant S. aureus infections. The authors of this study attempted to answer whether adding empiric methicillin-resistant S. aureus coverage reduces the risk of treatment failure.

Dr. Emily Ramee
Study design: Multicenter, double-blind, randomized superiority trial.

Setting: Five emergency departments in the United States.

Synopsis: The authors of this study randomized 500 patients with cellulitis without purulent drainage or evidence of abscess, as confirmed by sonography, to receive a 7-day course of either cephalexin plus placebo or cephalexin plus trimethoprim-sulfamethoxazole. In the per-protocol analysis of patients who took most of the prescribed pills (greater than 75% of doses), there was no significant difference in clinical cure rate between the two arms of the study, reaffirming current guidelines that advocate against empiric methicillin-resistant S. aureus coverage for uncomplicated cellulitis.

When the authors reanalyzed their data assuming that patients lost to follow-up had treatment failure, there was a trend favoring the addition of trimethoprim-sulfamethoxazole to cephalexin over cephalexin monotherapy (P = .07). Although the authors concluded that this finding may warrant further investigation, this was essentially a negative study.

Bottom line: Empirically adding coverage for community-acquired methicillin-resistant S. aureus with trimethoprim-sulfamethoxazole did not significantly improve clinical cure of uncomplicated cellulitis, compared with empiric cephalexin monotherapy.

Citation: Moran GJ, Krishnadasan A, Mower WR, et al. Effect of cephalexin plus trimethoprim-sulfamethoxazole vs. cephalexin alone on clinical cure of uncomplicated cellulitis. JAMA. 2017;317(20):2088-96.

Dr. Ramee is a hospitalist at Ochsner Health System, New Orleans.


Oral anticoagulation ‘reasonable’ in advanced kidney disease with A-fib

Article Type
Changed
Fri, 01/18/2019 - 17:04

– Oral anticoagulation had a net overall benefit for patients with atrial fibrillation and advanced chronic kidney disease, based on results of a large observational study reported at the annual congress of the European Society of Cardiology.

The novel direct-acting oral anticoagulants (NOACs) and warfarin were all similarly effective in this study of 39,241 patients who had stage 4 or 5 chronic kidney disease (CKD) and atrial fibrillation and were not on dialysis. Compared with no oral anticoagulation, the drugs cut the risk of stroke or systemic embolism in half, with no increased risk of major bleeding.

“In patients with advanced CKD, it appears that OACs [oral anticoagulants] are reasonable,” concluded Peter A. Noseworthy, MD, of the Mayo Clinic in Rochester, Minn.


This is a potentially practice-changing finding given the “striking underutilization” of OACs in advanced CKD, he noted. Indeed, only one-third of the patients in this study were prescribed an OAC and picked up their prescriptions. And while the study has the limitations inherent to an observational study reliant upon data from a large U.S. administrative database – chiefly, the potential for residual confounding because of factors that couldn’t be adjusted for statistically – these real-world data may be as good as it gets, since patients with advanced CKD were excluded from the pivotal trials of the NOACs.

Apixaban (Eliquis) was the winner in this study: It separated itself from the pack by reducing the major bleeding risk by 57%, compared with warfarin, although it wasn’t significantly more effective than the other drugs in terms of stroke prevention. In contrast, the major bleeding rates for dabigatran (Pradaxa) and rivaroxaban (Xarelto) weren’t significantly different from warfarin in this challenging patient population.

In a related analysis of 10,712 patients with atrial fibrillation and advanced CKD who were on dialysis, use of an OAC was once again a winning strategy: It resulted not only in an impressive 58% reduction in the risk of stroke or systemic embolism, but also a 26% reduction in the risk of major bleeding, compared with no OAC.

Here again, apixaban was arguably the drug of choice. None of the 125 dialysis patients on apixaban experienced a stroke or systemic embolism. In contrast, dabigatran and rivaroxaban were associated with greater than threefold higher stroke rates than warfarin, although these differences didn’t achieve statistical significance because of small numbers (just 36 patients on dabigatran and 56 on rivaroxaban), the cardiologist continued.

For these analyses of the relationship between OAC exposure and stroke and bleeding outcomes, Dr. Noseworthy and his coinvestigators used propensity scores based upon 59 clinical and sociodemographic characteristics.
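For readers unfamiliar with the technique, here is a minimal, hypothetical sketch of how a propensity score is commonly estimated with logistic regression; the covariates, simulated data, and model settings are illustrative assumptions with no connection to the study’s actual 59 characteristics or analysis.

```python
# Minimal sketch of propensity-score estimation with logistic regression.
# Covariates and data are simulated for illustration; this is not the study's code.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# A few hypothetical baseline covariates (the study adjusted for 59 characteristics).
age = rng.normal(75, 8, n)
egfr = rng.normal(20, 6, n)
diabetes = rng.integers(0, 2, n)
X = np.column_stack([age, egfr, diabetes])

# Simulated treatment assignment (1 = received an oral anticoagulant), made to
# depend weakly on the covariates so the propensity scores vary across patients.
logits = 0.02 * (age - 75) - 0.05 * (egfr - 20) + 0.4 * diabetes
treated = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

# Propensity score: each patient's predicted probability of receiving treatment.
model = LogisticRegression(max_iter=1000).fit(X, treated)
propensity = model.predict_proba(X)[:, 1]

print("First five propensity scores:", np.round(propensity[:5], 3))
```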

Asked why rates of utilization of OACs are so low in patients with advanced CKD, Dr. Noseworthy replied that he didn’t find that particularly surprising.

“Even if you look only at patients without renal dysfunction, there is incredible undertreatment of atrial fibrillation with OACs. And adherence is very poor,” he observed.

Moreover, in talking with nephrologists, he finds many of them have legitimate reservations about prescribing OACs for patients with end-stage renal disease on hemodialysis.

“They’re undergoing a lot of procedures. They’re having a ton of lines placed; they’re having fistulas revised; and they have very high rates of GI bleeding. In some studies the annual risk of bleeding is 20%-40% in this population. And they’re a frail population with frequent falls,” Dr. Noseworthy said.

He reported having no financial conflicts of interest regarding his study, which was conducted free of commercial support.

 

 


 

 

Article Source
AT THE ESC CONGRESS 2017

Vitals

Key clinical point: Oral anticoagulation in patients with atrial fibrillation and advanced chronic kidney disease is associated with reduced risk of stroke and no increased risk of major bleeding.

Major finding: The risk of stroke/systemic embolism in patients with advanced chronic kidney disease who were on oral anticoagulation was reduced by 49% among those not on hemodialysis and by 58% in those who were, compared with similar patients not on oral anticoagulation.

Data source: This was an observational study of nearly 50,000 patients with atrial fibrillation and stage 4 or 5 chronic kidney disease in a large U.S. administrative database.

Disclosures: The presenter reported having no financial conflicts of interest regarding his study, which was conducted free of commercial support.
 


Use of BZD and sedative-hypnotics among hospitalized elderly

Article Type
Changed
Fri, 09/14/2018 - 11:57

 

Clinical question: Which hospitalized older patients are inappropriately prescribed benzodiazepines or sedative hypnotics post discharge, and who is prescribing these medications?

Background: During hospitalization, older patients commonly suffer from agitation and insomnia. Unfortunately, benzodiazepines and sedative hypnotics are often used as first-line treatments for these conditions despite significant risks, including cognitive impairment, postural instability, and increased risk of falls and hip fracture, as well as a lack of effectiveness. The purpose of this study was to determine the magnitude of the problem, identify root causes, and determine what type of corrective action is needed.

Study Design: Single-center retrospective observational study.

Setting: Urban academic medical center in Toronto.

Synopsis: Patient- and prescriber-level variables associated with potentially inappropriate new benzodiazepine or sedative-hypnotic prescriptions were examined in medical-surgical inpatients aged 65 years or older (regular users of these medications were excluded). Potentially inappropriate new prescriptions were identified in 208 of the 1,308 patients studied (15.9%); most were written overnight, and the most common indications were insomnia and agitation/anxiety.

These prescriptions were significantly more likely when the patient was admitted to a surgical or specialty service rather than to the general internal medicine service (odds ratio, 6.61; 95% confidence interval, 2.70-16.17). First-year trainees were more likely to prescribe these medications than were attendings or fellows (OR for attendings and fellows relative to first-year trainees, 0.28; 95% CI, 0.08-0.93).
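
For context on how figures like these are derived, the following sketch computes an odds ratio and a Wald-type 95% confidence interval from a generic 2x2 table. The counts in the example are invented for illustration and are not data from this study.

```python
# Illustrative only: odds ratio and Wald 95% CI from a 2x2 table (all counts are invented).
import math

def odds_ratio_with_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """a, b = outcome yes/no in the exposed group; c, d = outcome yes/no in the unexposed group."""
    or_estimate = (a * d) / (b * c)
    # Standard error of log(OR) under the Wald approximation.
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_estimate) - z * se_log_or)
    upper = math.exp(math.log(or_estimate) + z * se_log_or)
    return or_estimate, lower, upper

# Hypothetical counts: 30/170 prescriptions on a surgical service vs. 25/1,075 on medicine.
print(odds_ratio_with_ci(30, 170, 25, 1075))
```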

Study limitations include the single-institution design, lack of blinding, and limited statistical power; the findings therefore may not be generalizable, may be subject to observer bias, and may fail to detect meaningful effects of covariates.

Bottom line: Sleep disruption and poor sleep quality accounted for the majority of potentially inappropriate new benzodiazepine and sedative-hypnotic prescriptions, and first-year trainees were more likely than attendings or fellows to prescribe these medications.

Citation: Pek EA, Ramfry A, Pendrith C, et al. High prevalence of inappropriate benzodiazepine and sedative hypnotic prescriptions among hospitalized older adults. J Hosp Med. 2017 May;12(5):310-6.

Dr. Choe is a hospitalist at Ochsner Health System, New Orleans.
