HM14 Keynote Speakers Encourage Hospitalists to Deliver High-Quality, Low-Cost Patient Care
LAS VEGAS—A record 3,600 hospitalists swarmed the Mandalay Bay Resort and Casino for four days of education and networking that wrapped with the “father of HM,” Bob Wachter, MD, MHM, dressed as Elton John, warbling a hospitalist-centric version of Sir Elton’s chart topper, “Your Song,” to a packed ballroom.
“[HM14] is just intoxicating,” said hospitalist Kevin Gilroy, MD, of Greenville (S.C.) Health System. “And it ends with our daddy getting up there and lighting it up as Elton John. What other conference does that?”
In perhaps the most tweeted line from HM14, keynote speaker Ian Morrison, PhD, compared the addictiveness of crack cocaine with physicians’ dedication to the fee-for-service payment system.
“It’s really hard to get off of it,” the national healthcare expert deadpanned to a packed ballroom at the Mandalay Bay Resort and Casino.
The zinger was one of the highlights of the annual meeting’s three plenary addresses, which alternately gave the record 3,600 hospitalists in attendance doses of sobriety about the difficulty of healthcare reform and comedy bits from Dr. Morrison and HM dean Robert Wachter, MD, MHM.
The keynote titled “Obamacare Is Here: What Does It Mean for You and Your Hospital?” featured a panel discussion among Centers for Medicare & Medicaid Services (CMS) chief medical officer Patrick Conway, MD, MSc, MHM, FAAP; executive director and CEO of the Medical University of South Carolina in Charleston and former SHM president Patrick Cawley, MD, MBA, MHM, FACP, FACHE; veteran healthcare executive Patrick Courneya, MD; and American Enterprise Institute resident fellow Scott Gottlieb, MD. The quartet—dubbed the Patricks and Scott by several emcees—followed their hour-long plenary with a question-and-answer session.
“I think this is ultimately going to hurt the financial standing of the hospital industry,” said Dr. Gottlieb, a newcomer to SHM’s annual meeting. “A lot of these hospitals that are taking on these capitated contracts, taking on risk, consolidating physicians, I think they’re going to get themselves into financial trouble in the next five years. That’s going to put pressure on the hospitalists.”

Dr. Cawley said that just a few years ago, his institution subsidized five medical groups. Now it’s 25. He has a simple message for hospitalists not committed to providing better care at lower costs: “You’re not going to be on my good side.”
Dr. Wachter told medical students and residents that he sees no end in sight to the unrelenting pressure to provide that high-quality, low-cost care, while also making sure patient satisfaction rises. And he’s more than OK with that.
“It’s important to recognize that the goal we’re being asked to achieve—to deliver high-quality, satisfying, evidence-based care without undue variations, where we’re not harming people and doing it at a cost that doesn’t bankrupt society—is unambiguously right,” he said. “It’s such an obviously right goal that what is odd is that this was not our goal until recently. So the fact that our field has taken this on as our mantra is very satisfying and completely appropriate.”
The keynote addresses also highlighted another satisfying result: Immediate past SHM President Eric Howell, MD, SFHM, reached the goal he set at 2013’s annual meeting to double the society’s number of student and housestaff members from 500 to 1,000.
Newly minted SHM President Burke Kealey, MD, SFHM, has a goal that is a bit more abstract: He wants hospitalists to treat improving healthcare affordability, patient health, and the patient experience as a single goal.
“We put the energy and the effort of the moment behind the squeaky wheel,” said Dr. Kealey, medical director of hospital specialties at HealthPartners Medical Group in St. Paul, Minn. “What I would like us to do is all start thinking about all three at the same time, and with equal weight at all times. To me, this is the next evolution of the hospitalist.”
Dr. Kealey’s tack for his one-year term is borrowed from the Institute for Healthcare Improvement, whose “Triple Aim” initiative has the same goals. But Dr. Kealey believes that focusing on any of the three areas while giving short shrift to the others misses the point of bettering the overall healthcare system.
“To improve health, but then people can’t afford that healthcare, is a nonstarter,” he said. “To make things finally affordable, but then people stay away because it’s a bad experience, makes no sense, either. We must do it all together.”

And hospitalists are in the perfect position to do it, said Dr. Morrison, a founding partner of Strategic Health Perspectives, a forecasting service for the healthcare industry that includes joint venture partners Harris Interactive and the Harvard School of Public Health’s department of health policy and management. He sees hospitalist leaders as change agents, as the rigmarole of healthcare reform shakes out over the next few years.
Dr. Morrison, a native of Scotland whose delivery was half stand-up comic, half policy wonk (he introduced himself as Dr. Wachter’s Scottish caddy), said that while politicians and pundits dicker over how a generational shift in policies will be implemented, hospitalists will be the ones balancing that change with patients’ needs.
“This is the work of the future,” he said, “and it is not policy wonk work; it is clinical work. It is about the transformation of the delivery system. That is the central challenge of the future.
“We’ve got to integrate across the continuum of care, using all the innovation that both public and private sectors can deliver. This is not going to be determined by CMS, in my view, but by the kind of innovation that America is always good at.”
Spinal Cord Stimulation May Restore Movement to Patients With Paraplegia
Epidural stimulation may help people with spinal cord injury develop functional connectivity and re-establish voluntary control of previously paralyzed muscles, investigators stated in research published online ahead of print April 8 in Brain. The therapy has the potential to change the prognosis of people with paralysis, even if it is administered years after injury, according to the authors.
In the study, four individuals with chronic complete motor paralysis were able to perform voluntary tasks while receiving epidural stimulation. Three of the participants recovered voluntary movement with epidural stimulation soon after implantation, and two of the three had complete loss of both motor and sensory function at study initiation. The results show “that by neuromodulating the spinal circuitry at subthreshold motor levels with epidural stimulation, chronically complete paralyzed individuals can process conceptual, auditory, and visual input to regain specific voluntary control of paralyzed muscles,” said Claudia A. Angeli, PhD. “We have uncovered a fundamentally new intervention strategy that can dramatically affect recovery of voluntary movement in individuals with complete paralysis even years after injury.”
Investigators Trained Four Patients With Paraplegia
The current investigation continues research that Dr. Angeli, Senior Researcher in the Human Locomotion Research Center at Frazier Rehab Institute in Louisville, and her colleagues began in 2009. In that trial, a young man paralyzed below his chest had a 16-electrode array implanted on his spinal cord. He underwent daily training in standing and walking, during which he was suspended over a treadmill while the array delivered electrical pulses to his spinal cord. The man became able to bear his own weight and stand without assistance for four minutes. At seven months, he regained some voluntary control of his legs. Other functions impaired by the injury, such as blood pressure control, body temperature regulation, bladder control, and sexual function, also began to improve over time in the absence of stimulation.
In the current study, Dr. Angeli and her colleagues implanted an epidural spinal cord stimulation unit and a 16-electrode array over the spinal cord in three additional participants with motor complete spinal cord injury. Participants’ average age at the time of implantation, including the patient who received implantation in 2009, was 27. All four patients were male, and their injuries had occurred at least two years before implantation. All individuals were unable to stand or walk independently or voluntarily move their lower extremities after their injuries.
After implantation, the investigators tested the individuals’ ability to move voluntarily with epidural stimulation. Participants underwent testing again after receiving intense stand training using epidural stimulation and after receiving intense step training in combination with epidural stimulation.
One Patient Performed Voluntary Movements Within a Week
All four individuals became able to move their legs intentionally in response to a verbal command. The first participant had no motor activity when he attempted to move without epidural stimulation following a verbal command, and the other three individuals had no motor activity when they attempted to move without epidural stimulation following a visual cue. All four individuals, however, generated electromyogram activity and movement during ankle dorsiflexion in the presence of epidural stimulation during their first experimental session, either before or after stand training.
At the start of the current study, the second person to receive implantation was unable to move or experience any sensation below the point of injury. This patient was able to perform voluntary movements during the first week of stimulation. The other patients recovered voluntary movement nearly as quickly as the second participant.
The speed at which each subject recovered may constitute evidence of dormant connections in patients with complete motor paralysis. “Rather than there being a complete separation of the upper and lower regions relative to the injury, it’s possible that there is some contact, but that these connections are not functional,” said V. Reggie Edgerton, PhD, Director of the Neuromuscular Research Laboratory at the University of California, Los Angeles. “The spinal stimulation could be reawakening these connections.”
All participants were able to synchronize leg, ankle, and toe movements in unison with the rise and fall of a wave displayed on a computer screen. Three individuals were able to change the force at which they flexed their leg, depending on the intensity of auditory cues. “The fact that the brain is able to take advantage of the few connections that may be remaining and then process this complicated visual, auditory, and perceptual information is pretty amazing,” said Dr. Edgerton. “It tells us that the information from the brain is getting to the right place in the spinal cord, so that the person can control, with fairly impressive accuracy, the nature of the movement.”
At the end of the training, some of the participants were able to execute voluntary movements with greater force and with less stimulation than before, and others had improved movement accuracy. It is unclear whether the improvement resulted from the training or from the cumulative effects of stimulation over time, said the authors, who plan to address this question in their next study.
—Erik Greb
Suggested Reading
Angeli CA, Edgerton VR, Gerasimenko YP, Harkema SJ. Altering spinal cord excitability enables voluntary movements after chronic complete paralysis in humans. Brain. 2014 Apr 8 [Epub ahead of print].
Harkema S, Gerasimenko Y, Hodes J, et al. Effect of epidural stimulation of the lumbosacral spinal cord on voluntary movement, standing, and assisted stepping after motor complete paraplegia: a case study. Lancet. 2011;377(9781):1938-1947.
Sayenko DG, Angeli C, Harkema SJ, et al. Neuromodulation of evoked muscle potentials induced by epidural spinal-cord stimulation in paralyzed individuals. J Neurophysiol. 2014;111(5):1088-1099.
Epidural stimulation may help people with spinal cord injury develop functional connectivity and re-establish voluntary control of previously paralyzed muscles, investigators stated in research published online ahead of print April 8 in Brain. The therapy has the potential to change the prognosis of people with paralysis, even if it is administered years after injury, according to the authors.
In the study, four individuals with chronic complete motor paralysis were able to perform voluntary tasks while receiving epidural stimulation. Three of the participants recovered voluntary movement with epidural stimulation soon after implantation, and two of the three had complete loss of both motor and sensory function at study initiation. The results show “that by neuromodulating the spinal circuitry at subthreshold motor levels with epidural stimulation, chronically complete paralyzed individuals can process conceptual, auditory, and visual input to regain specific voluntary control of paralyzed muscles,” said Claudia A. Angeli, PhD. “We have uncovered a fundamentally new intervention strategy that can dramatically affect recovery of voluntary movement in individuals with complete paralysis even years after injury.”
Investigators Trained Four Patients With Paraplegia
The current investigation continues research that Dr. Angeli, Senior Researcher in the Human Locomotion Research Center at Frazier Rehab Institute in Louisville, and her colleagues began in 2009. In that trial, a young man paralyzed below his chest had a 16-electrode array implanted on his spinal cord. He underwent daily training in standing and walking, during which he was suspended over a treadmill while the array delivered electrical pulses to his spinal cord. The man became able to bear his own weight and stand without assistance for four minutes. At seven months, the man regained some voluntary control of his legs. Other impairments caused by the injury, such as of blood pressure control, body temperature regulation, bladder control, and sexual function, also began to improve over time in the absence of stimulation.
In the current study, Dr. Angeli and her colleagues implanted an epidural spinal cord stimulation unit and a 16-electrode array over the spinal cord in three additional participants with motor complete spinal cord injury. Participants’ average age at the time of implantation, including the patient who received implantation in 2009, was 27. All four patients were male, and their injuries had occurred at least two years before implantation. All individuals were unable to stand or walk independently or voluntarily move their lower extremities after their injuries.
After implantation, the investigators tested the individuals’ ability to move voluntarily with epidural stimulation. Participants underwent testing again after receiving intense stand training using epidural stimulation and after receiving intense step training in combination with epidural stimulation.
One Patient Performed Voluntary Movements Within a Week
All four individuals became able to move their legs intentionally in response to a verbal command. The first participant had no motor activity when he attempted to move without epidural stimulation following a verbal command, and the other three individuals had no motor activity when they attempted to move without epidural stimulation following a visual cue. All four individuals, however, generated electromyogram activity and movement during ankle dorsiflexion in the presence of epidural stimulation during their first experimental session, either before or after stand training.
At the start of the current study, the second person to receive implantation was unable to move or experience any sensation below the point of injury. This patient was able to perform voluntary movements during the first week of stimulation. The other patients recovered voluntary movement nearly as quickly as the second participant.
The speed at which each subject recovered may constitute evidence of dormant connections in patients with complete motor paralysis. “Rather than there being a complete separation of the upper and lower regions relative to the injury, it’s possible that there is some contact, but that these connections are not functional,” said V. Reggie Edgerton, PhD, Director of the Neuromuscular Research Laboratory at the University of California, Los Angeles. “The spinal stimulation could be reawakening these connections.”
All participants were able to synchronize leg, ankle, and toe movements in unison with the rise and fall of a wave displayed on a computer screen. Three individuals were able to change the force at which they flexed their leg, depending on the intensity of auditory cues. “The fact that the brain is able to take advantage of the few connections that may be remaining and then process this complicated visual, auditory, and perceptual information is pretty amazing,” said Dr. Edgerton. “It tells us that the information from the brain is getting to the right place in the spinal cord, so that the person can control, with fairly impressive accuracy, the nature of the movement.”
At the end of the training, some of the participants were able to execute voluntary movements with greater force and with less stimulation than before, and others had improved movement accuracy. It is unclear whether the improvement resulted from the training or from the cumulative effects of stimulation over time, said the authors, who plan to address this question in their next study.
—Erik Greb
Epidural stimulation may help people with spinal cord injury develop functional connectivity and re-establish voluntary control of previously paralyzed muscles, investigators stated in research published online ahead of print April 8 in Brain. The therapy has the potential to change the prognosis of people with paralysis, even if it is administered years after injury, according to the authors.
In the study, four individuals with chronic complete motor paralysis were able to perform voluntary tasks while receiving epidural stimulation. Three of the participants recovered voluntary movement with epidural stimulation soon after implantation, and two of the three had complete loss of both motor and sensory function at study initiation. The results show “that by neuromodulating the spinal circuitry at subthreshold motor levels with epidural stimulation, chronically complete paralyzed individuals can process conceptual, auditory, and visual input to regain specific voluntary control of paralyzed muscles,” said Claudia A. Angeli, PhD. “We have uncovered a fundamentally new intervention strategy that can dramatically affect recovery of voluntary movement in individuals with complete paralysis even years after injury.”
Investigators Trained Four Patients With Paraplegia
The current investigation continues research that Dr. Angeli, Senior Researcher in the Human Locomotion Research Center at Frazier Rehab Institute in Louisville, and her colleagues began in 2009. In that trial, a young man paralyzed below his chest had a 16-electrode array implanted on his spinal cord. He underwent daily training in standing and walking, during which he was suspended over a treadmill while the array delivered electrical pulses to his spinal cord. The man became able to bear his own weight and stand without assistance for four minutes. At seven months, the man regained some voluntary control of his legs. Other impairments caused by the injury, such as of blood pressure control, body temperature regulation, bladder control, and sexual function, also began to improve over time in the absence of stimulation.
In the current study, Dr. Angeli and her colleagues implanted an epidural spinal cord stimulation unit and a 16-electrode array over the spinal cord in three additional participants with motor complete spinal cord injury. Participants’ average age at the time of implantation, including the patient who received implantation in 2009, was 27. All four patients were male, and their injuries had occurred at least two years before implantation. All individuals were unable to stand or walk independently or voluntarily move their lower extremities after their injuries.
After implantation, the investigators tested the individuals’ ability to move voluntarily with epidural stimulation. Participants underwent testing again after receiving intense stand training using epidural stimulation and after receiving intense step training in combination with epidural stimulation.
One Patient Performed Voluntary Movements Within a Week
All four individuals became able to move their legs intentionally in response to a verbal command. The first participant had no motor activity when he attempted to move without epidural stimulation following a verbal command, and the other three individuals had no motor activity when they attempted to move without epidural stimulation following a visual cue. All four individuals, however, generated electromyogram activity and movement during ankle dorsiflexion in the presence of epidural stimulation during their first experimental session, either before or after stand training.
At the start of the current study, the second person to receive implantation was unable to move or experience any sensation below the point of injury. This patient was able to perform voluntary movements during the first week of stimulation. The other patients recovered voluntary movement nearly as quickly as the second participant.
The speed at which each subject recovered may constitute evidence of dormant connections in patients with complete motor paralysis. “Rather than there being a complete separation of the upper and lower regions relative to the injury, it’s possible that there is some contact, but that these connections are not functional,” said V. Reggie Edgerton, PhD, Director of the Neuromuscular Research Laboratory at the University of California, Los Angeles. “The spinal stimulation could be reawakening these connections.”
All participants were able to synchronize leg, ankle, and toe movements with the rise and fall of a wave displayed on a computer screen. Three individuals were able to change the force at which they flexed their leg, depending on the intensity of auditory cues. “The fact that the brain is able to take advantage of the few connections that may be remaining and then process this complicated visual, auditory, and perceptual information is pretty amazing,” said Dr. Edgerton. “It tells us that the information from the brain is getting to the right place in the spinal cord, so that the person can control, with fairly impressive accuracy, the nature of the movement.”
At the end of the training, some of the participants were able to execute voluntary movements with greater force and with less stimulation than before, and others had improved movement accuracy. It is unclear whether the improvement resulted from the training or from the cumulative effects of stimulation over time, said the authors, who plan to address this question in their next study.
—Erik Greb
Suggested Reading
Angeli CA, Edgerton VR, Gerasimenko YP, Harkema SJ. Altering spinal cord excitability enables voluntary movements after chronic complete paralysis in humans. Brain. 2014 Apr 8 [Epub ahead of print].
Harkema S, Gerasimenko Y, Hodes J, et al. Effect of epidural stimulation of the lumbosacral spinal cord on voluntary movement, standing, and assisted stepping after motor complete paraplegia: a case study. Lancet. 2011;377(9781):1938-1947.
Sayenko DG, Angeli C, Harkema SJ, et al. Neuromodulation of evoked muscle potentials induced by epidural spinal-cord stimulation in paralyzed individuals. J Neurophysiol. 2014;111(5):1088-1099.
Heartland Virus Cases Could Resume in May
The CDC has advised clinicians to be on the lookout beginning in May for cases of Heartland virus disease, which includes symptoms such as fever, leukopenia, and thrombocytopenia. In the March 28 issue of Morbidity and Mortality Weekly Report, the CDC urged health care providers to consider Heartland virus testing in patients with these symptoms, without other likely explanation, who are negative for Ehrlichia and Anaplasma or who have not responded to doxycycline.
Heartland virus disease was first identified in 2009 through two cases in Missouri. Both patients were farmers who were hospitalized with fever, leukopenia, and thrombocytopenia. The virus was presumed then to be tick-borne.
In 2012 and 2013, six confirmed cases of Heartland virus disease were reported; four patients required hospitalization, and one with comorbidities died. Five cases occurred in Missouri, and the remaining case was in Tennessee; all occurred in men ages 50 and older. The cases were reported from May to September, with half reported in May.
The disease is confirmed through laboratory evidence of recent Heartland virus infection, along with a clinically compatible illness, defined as a fever of 100.4°F or greater, a white blood cell count below 4,500 cells/mm³, and a platelet count less than 150,000/mm³. Other reported symptoms have included fatigue, anorexia, headache, nausea, myalgia, and arthralgia.
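The “clinically compatible illness” part of the case definition amounts to a simple threshold rule. As a rough sketch only (not a clinical tool; the function name is invented for illustration, and the thresholds are those quoted above):

```python
def clinically_compatible(temp_f: float, wbc_per_mm3: float,
                          platelets_per_mm3: float) -> bool:
    """Check the clinical-compatibility criteria described above:
    fever >= 100.4 F, WBC < 4,500 cells/mm3, platelets < 150,000/mm3.
    Illustrative only; laboratory confirmation is still required."""
    return (temp_f >= 100.4
            and wbc_per_mm3 < 4500
            and platelets_per_mm3 < 150000)

# A febrile patient with leukopenia and thrombocytopenia meets the criteria
print(clinically_compatible(101.2, 3800, 120000))  # prints True
```

Note that meeting these criteria alone does not confirm the diagnosis; per the CDC, confirmation also requires laboratory evidence of recent infection.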
In 2013, Heartland virus was isolated for the first time from the lone star tick (Amblyomma americanum), which has a wide distribution in the United States and is a known vector of other diseases. A CDC report from 2003 warned that the tick’s importance as a vector of zoonotic pathogens affecting humans would likely grow because of environmental and demographic factors, and that cases of human disease transmitted by A. americanum would increase.
Because no medication exists to prevent or treat Heartland virus disease, insect repellent and long-sleeved clothing are recommended for prevention, along with avoidance of bushy or wooded areas. People exposed to these environments should perform tick checks after spending time outdoors, the CDC said.
Daniel M. Pastula, MD, from the University of California, San Francisco, Medical Center, contributed to the report on Heartland virus. Clinicians with questions may contact state health departments or the CDC Arbovirus Diseases Branch, at 970-221-6400.
—Jennie Smith
Suggested Reading
Goldsmith CS, Ksiazek TG, Rollin PE, et al. Cell culture and electron microscopy for identifying viruses in diseases of unknown cause. Emerg Infect Dis. 2013;19(6):886-891.
Pastula DM, Turabelidze G, Yates KF, et al. Notes from the field: heartland virus disease—United States, 2012-2013. MMWR Morb Mortal Wkly Rep. 2014;63(12):270-271.
Guideline Analyzes Methods for Detecting Nonvalvular Atrial Fibrillation
In patients with recent cryptogenic stroke, cardiac rhythm monitoring probably detects occult nonvalvular atrial fibrillation (NVAF), according to an evidence-based guideline drafted by the American Academy of Neurology (AAN). The guideline, which was published February 25 in Neurology, describes how to identify and treat patients with NVAF to prevent cardioembolic stroke. The document also provides advice about when to conduct cardiac rhythm monitoring and offer anticoagulants, including when to recommend newer agents in place of warfarin.
The guideline may already be outdated, however, because it does not take the results of the recent CRYSTAL-AF study into account. In that study, long-term cardiac rhythm monitoring of patients with previous cryptogenic stroke detected asymptomatic atrial fibrillation at a significantly higher rate than standard monitoring methods did.
The guideline also recommends extending the routine use of anticoagulation to patients with NVAF who are generally undertreated or whose health was considered a possible barrier to anticoagulant use, such as individuals age 75 or older, people with mild dementia, and people at moderate risk of falls.
“Cognizant of the global reach of the AAN, the guideline also examines the evidence base for a treatment alternative to warfarin or its analogues for patients in developing countries who may not have access to the new oral anticoagulants,” said Antonio Culebras, MD, lead author of the guideline.
“The World Health Organization has determined that atrial fibrillation has reached near-epidemic proportions,” said Dr. Culebras, Professor of Neurology at the State University of New York, Syracuse. “Approximately one in 20 individuals with atrial fibrillation will have a stroke unless treated appropriately.”
The risk of stroke among patients with NVAF is highest in people with a history of transient ischemic attack or prior stroke; for this population, the risk is approximately 10% per year. Patients with no risk factors other than NVAF have an annual stroke risk of less than 2%.
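The per-year figures compound over time, which is what makes the gap between the two groups clinically meaningful. A minimal sketch of that arithmetic, assuming the annual risk is constant and independent from year to year (a simplification; the article does not state a multi-year model):

```python
def cumulative_risk(annual_risk: float, years: int) -> float:
    """Probability of at least one stroke over `years`,
    assuming a constant, independent annual risk."""
    return 1 - (1 - annual_risk) ** years

# High-risk NVAF (prior TIA/stroke, ~10% per year) vs low-risk (<2% per year)
print(round(cumulative_risk(0.10, 5), 3))  # prints 0.41
print(round(cumulative_risk(0.02, 5), 3))  # prints 0.096
```

Under this simplified model, roughly four in ten high-risk patients would have a stroke within five years without appropriate treatment, versus fewer than one in ten low-risk patients.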
The AAN issued a practice parameter on NVAF in 1998. At the time, warfarin, adjusted to an international normalized ratio (INR) of 2.0, was the recommended standard treatment for patients at risk of cardioembolic stroke. Aspirin was the only recommended alternative for patients unable to receive the vitamin K antagonist or for those who were deemed to be at low risk of stroke, although clinical trials had not established aspirin’s efficacy in these patients.
Since 1998, several new oral anticoagulants (NOACs) have become available, including the direct thrombin inhibitor dabigatran and two factor Xa inhibitors, rivaroxaban and apixaban, that have been shown to be at least as effective as, if not more effective than, warfarin. Cardiac rhythm monitoring by various methods also has been introduced as a means to detect NVAF in asymptomatic patients.
The aims of the new AAN guideline were to analyze the latest evidence on the detection of atrial fibrillation using new technologies and to examine the efficacy of treatments to reduce the risk of stroke without increasing the risk of hemorrhage, compared with the long-standing standard of therapy, warfarin. Data published from 1998 to March 2013 were considered in the preparation of the guideline.
A Comparison of Cardiac Monitoring Technologies
The authors identified 17 studies that examined the use of cardiac monitoring technologies to detect new cases of NVAF. The most common methods were 24-hour Holter monitoring and serial ECG, but emerging evidence on newer technologies also was included in the analysis. The proportion of patients identified with NVAF ranged from 0% to 23%, and the average detection rate across the included studies was 10.7%.
“The guideline addresses the question of long-term monitoring of patients with NVAF,” said Dr. Culebras. “It recommends that clinicians ‘might’ [ie, with level C evidence] obtain outpatient cardiac rhythm studies in patients with cryptogenic stroke without known NVAF to identify patients with occult NVAF.” The guideline also advises that monitoring might be needed for prolonged periods of one or more weeks, rather than for shorter periods such as 24 hours.
At the time the guideline was being prepared, however, data from the CRYSTAL-AF study were not available, which means that the guideline is already outdated, said Richard A. Bernstein, MD, PhD, Professor of Neurology at Northwestern University in Chicago. Dr. Bernstein was not a coauthor of the guideline.
Dr. Bernstein was a member of the steering committee for the CRYSTAL-AF trial, which found that an insertable cardiac monitor detected NVAF more often at six months than serial ECG or Holter monitoring (8.9% vs 1.4%). Approximately 74% of cases of NVAF that were detected were asymptomatic.
“CRYSTAL-AF represents the state of the art for cardiac monitoring in cryptogenic stroke patients and makes the AAN guidelines obsolete,” said Dr. Bernstein. “[The study] shows that even intermediate-term monitoring (less than one month) will miss the majority of atrial fibrillation in this population, and that most of the atrial fibrillation we find with long-term (greater than one year) monitoring is likely to be clinically significant.”
The AAN guideline includes “no discussion of truly long-term monitoring … which is unfortunate,” he added. Nevertheless, “anything that gets neurologists thinking about long-term cardiac monitoring is likely to be beneficial.”
Guideline Recommends Anticoagulants for Stroke Prevention
The AAN guideline also provides general recommendations on the use of NOACs as alternatives to warfarin. The authors note that in comparison with warfarin, rivaroxaban is probably at least as effective, and dabigatran and apixaban may be more effective. In addition, although apixaban is likely to be more effective than aspirin, it is associated with a similar risk of bleeding. NOACs’ advantages over warfarin include an overall lower risk of intracranial hemorrhage and their elimination of the need for routine anticoagulant monitoring.
Clinicians have the following options available, according to the AAN guideline: warfarin (INR, 2.0 to 3.0), dabigatran (150 mg bid), rivaroxaban (15 to 20 mg/d), apixaban (2.5 to 5 mg bid), and triflusal (600 mg) plus acenocoumarol (INR, 1.25 to 2.0). If a patient is already taking warfarin and is well controlled, he or she should remain on that therapy and not switch to a newer oral anticoagulant, said the authors.
The combination of clopidogrel and aspirin is probably less effective than warfarin, but probably better than aspirin alone, according to the guideline. The risk of hemorrhage, however, is higher with clopidogrel and aspirin.
The combination of triflusal and acenocoumarol is “likely more effective” than acenocoumarol alone, said the authors. Triflusal is available in Europe, Latin America, and Southeast Asia, and acenocoumarol is available in Europe.
The document is not intended to dictate which treatment to use, Dr. Culebras explained. “The guideline leaves room on purpose for clinicians to use their judgment,” he said. “The overall objective of the guideline is to reduce therapeutic uncertainty and not to issue commandments for treatment.”
Although Dr. Bernstein criticized the guidelines for not recommending anticoagulants strongly enough, the recommendations on anticoagulant choice are “reasonable in that they impute potential clinical profiles of patients who might particularly benefit from one NOAC over another, without making a claim that these recommendations are based on solid data,” he said. “This [approach] reflects how doctors make decisions when we don’t have direct comparative studies, and I think that is helpful.”
—Sara Freeman
Suggested Reading
Culebras A, Messé SR, Chaturvedi S, et al. Summary of evidence-based guideline update: prevention of stroke in nonvalvular atrial fibrillation: report of the Guideline Development Subcommittee of the American Academy of Neurology. Neurology. 2014;82(8):716-724.
In patients with recent cryptogenic stroke, cardiac rhythm monitoring probably detects occult nonvalvular atrial fibrillation (NVAF), according to an evidence-based guideline drafted by the American Academy of Neurology (AAN). The guideline, which was published February 25 in Neurology, describes how to identify and treat patients with NVAF to prevent cardioembolic stroke. The document also provides advice about when to conduct cardiac rhythm monitoring and offer anticoagulants, including when to recommend newer agents in place of warfarin.
The guideline may already be outdated, however, because it does not take the results of the recent CRYSTAL-AF study into account. In that study, long-term cardiac rhythm monitoring of patients with previous cryptogenic stroke detected asymptomatic atrial fibrillation at a significantly higher rate than standard monitoring methods did.
The guideline also recommends extending the routine use of anticoagulation to patients with NVAF who are generally undertreated or whose health was considered a possible barrier to their use, such as individuals age 75 or older, people with mild dementia, and people at moderate risk of falls.
“Cognizant of the global reach of the AAN, the guideline also examines the evidence base for a treatment alternative to warfarin or its analogues for patients in developing countries who may not have access to the new oral anticoagulants,” said Antonio Culebras, MD, lead author of the guideline.
“The World Health Organization has determined that atrial fibrillation has reached near-epidemic proportions,” said Dr. Culebras, Professor of Neurology at the State University of New York, Syracuse. “Approximately one in 20 individuals with atrial fibrillation will have a stroke unless treated appropriately.”
The risk for stroke among patients with NVAF is highest in people with a history of transient ischemic attack or prior stroke. For this population, the risk of stroke is approximately 10% per year. Patients with no risk factors other than NVAF have a less than 2% increased risk of stroke per year.
The AAN issued a practice parameter on NVAF in 1998. At the time, warfarin, adjusted to an international normalized ratio (INR) of 2.0, was the recommended standard treatment for patients at risk of cardioembolic stroke. Aspirin was the only recommended alternative for patients unable to receive the vitamin K antagonist or for those who were deemed to be at low risk of stroke, although clinical trials had not established aspirin’s efficacy in these patients.
Since 1998, several new oral anticoagulants (NOACs) have become available, including the direct thrombin inhibitor dabigatran and two factor Xa inhibitors, rivaroxaban and apixaban, that have been shown to be at least as effective as, if not more effective than, warfarin. Cardiac rhythm monitoring by various methods also has been introduced as a means to detect NVAF in asymptomatic patients.
The aims of the new AAN guideline were to analyze the latest evidence on the detection of atrial fibrillation using new technologies and to examine the efficacy of treatments to reduce the risk of stroke without increasing the risk of hemorrhage, compared with the long-standing standard of therapy, warfarin. Data published from 1998 to March 2013 were considered in the preparation of the guideline.
A Comparison of Cardiac Monitoring Technologies
The authors identified 17 studies that examined the use of cardiac monitoring technologies to detect new cases of NVAF. The most common methods were 24-hour Holter monitoring and serial ECG, but emerging evidence on newer technologies also was included in the analysis. The proportion of patients identified with NVAF ranged from 0% to 23%, and the average detection rate was 10.7% in all of the studies included.
“The guideline addresses the question of long-term monitoring of patients with NVAF,” said Dr. Culebras. “It recommends that clinicians ‘might’ [ie, with level C evidence] obtain outpatient cardiac rhythm studies in patients with cryptogenic stroke without known NVAF to identify patients with occult NVAF.” The guideline also advises that monitoring might be needed for prolonged periods of one or more weeks, rather than for shorter periods such as 24 hours.
At the time the guideline was being prepared, however, data from the CRYSTAL-AF study were not available, which means that the guideline is already outdated, said Richard A. Bernstein, MD, PhD, Professor of Neurology at Northwestern University in Chicago. Dr. Bernstein was not a coauthor of the guideline.
Dr. Bernstein was a member of the steering committee for the CRYSTAL-AF trial, which found that an insertable cardiac monitor detected NVAF more often at six months than serial ECG or Holter monitoring (8.9% vs 1.4%). Approximately 74% of cases of NVAF that were detected were asymptomatic.
“CRYSTAL-AF represents the state of the art for cardiac monitoring in cryptogenic stroke patients and makes the AAN guidelines obsolete,” said Dr. Bernstein. “[The study] shows that even intermediate-term monitoring (less than one month) will miss the majority of atrial fibrillation in this population, and that most of the atrial fibrillation we find with long-term (greater than one year) monitoring is likely to be clinically significant.”
The AAN guideline includes “no discussion of truly long-term monitoring … which is unfortunate,” he added. Nevertheless, “anything that gets neurologists thinking about long-term cardiac monitoring is likely to be beneficial.”
Guideline Recommends Anticoagulants for Stroke Prevention
The AAN guideline also provides general recommendations on the use of NOACs as alternatives to warfarin. The authors note that in comparison with warfarin, rivaroxaban is probably at least as effective, and dabigatran and apixaban may be more effective. In addition, although apixaban is likely to be more effective than aspirin, it is associated with a similar risk of bleeding. NOACs’ advantages over warfarin include an overall lower risk of intracranial hemorrhage and their elimination of the need for routine anticoagulant monitoring.
Clinicians have the following options available, according to the AAN guideline: warfarin (INR, 2.0 to 3.0), dabigatran (150 mg bid), rivaroxaban (15 to 20 mg/dL), apixaban (2.5 to 5 mg bid), and triflusal (600 mg) plus acenocoumarol (INR, 1.25 to 2.0). If a patient is already taking warfarin and is well controlled, he or she should remain on that therapy and not switch to a newer oral anticoagulant, said the authors.
The combination of clopidogrel and aspirin is probably less effective than warfarin, but probably better than aspirin alone, according to the guideline. The risk of hemorrhage, however, is higher with clopidogrel and aspirin.
The combination of triflusal and acenocoumarol is “likely more effective” than acenocoumarol alone, said the authors. Triflusal is available in Europe, Latin America, and Southeast Asia, and acenocoumarol is available in Europe.
The document is not intended to dictate which treatment to use, Dr. Culebras explained. “The guideline leaves room on purpose for clinicians to use their judgment,” he said. “The overall objective of the guideline is to reduce therapeutic uncertainty and not to issue commandments for treatment.”
Although Dr. Bernstein criticized the guidelines for not recommending anticoagulants strongly enough, the recommendations on anticoagulant choice are “reasonable in that they impute potential clinical profiles of patients who might particularly benefit from one NOAC over another, without making a claim that these recommendations are based on solid data,” he said. “This [approach] reflects how doctors make decisions when we don’t have direct comparative studies, and I think that is helpful.”
—Sara Freeman
In patients with recent cryptogenic stroke, cardiac rhythm monitoring probably detects occult nonvalvular atrial fibrillation (NVAF), according to an evidence-based guideline drafted by the American Academy of Neurology (AAN). The guideline, which was published February 25 in Neurology, describes how to identify and treat patients with NVAF to prevent cardioembolic stroke. The document also provides advice about when to conduct cardiac rhythm monitoring and offer anticoagulants, including when to recommend newer agents in place of warfarin.
The guideline may already be outdated, however, because it does not take the results of the recent CRYSTAL-AF study into account. In that study, long-term cardiac rhythm monitoring of patients with previous cryptogenic stroke detected asymptomatic atrial fibrillation at a significantly higher rate than standard monitoring methods did.
The guideline also recommends extending the routine use of anticoagulation to patients with NVAF who are generally undertreated or whose health was considered a possible barrier to their use, such as individuals age 75 or older, people with mild dementia, and people at moderate risk of falls.
“Cognizant of the global reach of the AAN, the guideline also examines the evidence base for a treatment alternative to warfarin or its analogues for patients in developing countries who may not have access to the new oral anticoagulants,” said Antonio Culebras, MD, lead author of the guideline.
“The World Health Organization has determined that atrial fibrillation has reached near-epidemic proportions,” said Dr. Culebras, Professor of Neurology at the State University of New York, Syracuse. “Approximately one in 20 individuals with atrial fibrillation will have a stroke unless treated appropriately.”
The risk for stroke among patients with NVAF is highest in people with a history of transient ischemic attack or prior stroke. For this population, the risk of stroke is approximately 10% per year. Patients with no risk factors other than NVAF have a less than 2% increased risk of stroke per year.
The AAN issued a practice parameter on NVAF in 1998. At the time, warfarin, adjusted to an international normalized ratio (INR) of 2.0, was the recommended standard treatment for patients at risk of cardioembolic stroke. Aspirin was the only recommended alternative for patients unable to receive the vitamin K antagonist or for those who were deemed to be at low risk of stroke, although clinical trials had not established aspirin’s efficacy in these patients.
Since 1998, several new oral anticoagulants (NOACs) have become available, including the direct thrombin inhibitor dabigatran and two factor Xa inhibitors, rivaroxaban and apixaban, that have been shown to be at least as effective as, if not more effective than, warfarin. Cardiac rhythm monitoring by various methods also has been introduced as a means to detect NVAF in asymptomatic patients.
The aims of the new AAN guideline were to analyze the latest evidence on the detection of atrial fibrillation using new technologies and to examine the efficacy of treatments to reduce the risk of stroke without increasing the risk of hemorrhage, compared with the long-standing standard of therapy, warfarin. Data published from 1998 to March 2013 were considered in the preparation of the guideline.
A Comparison of Cardiac Monitoring Technologies
The authors identified 17 studies that examined the use of cardiac monitoring technologies to detect new cases of NVAF. The most common methods were 24-hour Holter monitoring and serial ECG, but emerging evidence on newer technologies also was included in the analysis. The proportion of patients identified with NVAF ranged from 0% to 23%, and the average detection rate was 10.7% in all of the studies included.
“The guideline addresses the question of long-term monitoring of patients with NVAF,” said Dr. Culebras. “It recommends that clinicians ‘might’ [ie, with level C evidence] obtain outpatient cardiac rhythm studies in patients with cryptogenic stroke without known NVAF to identify patients with occult NVAF.” The guideline also advises that monitoring might be needed for prolonged periods of one or more weeks, rather than for shorter periods such as 24 hours.
At the time the guideline was being prepared, however, data from the CRYSTAL-AF study were not available, which means that the guideline is already outdated, said Richard A. Bernstein, MD, PhD, Professor of Neurology at Northwestern University in Chicago. Dr. Bernstein was not a coauthor of the guideline.
Dr. Bernstein was a member of the steering committee for the CRYSTAL-AF trial, which found that an insertable cardiac monitor detected NVAF more often at six months than serial ECG or Holter monitoring (8.9% vs 1.4%). Approximately 74% of cases of NVAF that were detected were asymptomatic.
“CRYSTAL-AF represents the state of the art for cardiac monitoring in cryptogenic stroke patients and makes the AAN guidelines obsolete,” said Dr. Bernstein. “[The study] shows that even intermediate-term monitoring (less than one month) will miss the majority of atrial fibrillation in this population, and that most of the atrial fibrillation we find with long-term (greater than one year) monitoring is likely to be clinically significant.”
The AAN guideline includes “no discussion of truly long-term monitoring … which is unfortunate,” he added. Nevertheless, “anything that gets neurologists thinking about long-term cardiac monitoring is likely to be beneficial.”
Guideline Recommends Anticoagulants for Stroke Prevention
The AAN guideline also provides general recommendations on the use of NOACs as alternatives to warfarin. The authors note that in comparison with warfarin, rivaroxaban is probably at least as effective, and dabigatran and apixaban may be more effective. In addition, although apixaban is likely to be more effective than aspirin, it is associated with a similar risk of bleeding. NOACs’ advantages over warfarin include an overall lower risk of intracranial hemorrhage and the elimination of the need for routine anticoagulant monitoring.
Clinicians have the following options available, according to the AAN guideline: warfarin (INR, 2.0 to 3.0), dabigatran (150 mg bid), rivaroxaban (15 to 20 mg/d), apixaban (2.5 to 5 mg bid), and triflusal (600 mg/d) plus acenocoumarol (INR, 1.25 to 2.0). If a patient is already taking warfarin and is well controlled, he or she should remain on that therapy rather than switch to a newer oral anticoagulant, said the authors.
The combination of clopidogrel and aspirin is probably less effective than warfarin, but probably better than aspirin alone, according to the guideline. The risk of hemorrhage, however, is higher with clopidogrel and aspirin.
The combination of triflusal and acenocoumarol is “likely more effective” than acenocoumarol alone, said the authors. Triflusal is available in Europe, Latin America, and Southeast Asia, and acenocoumarol is available in Europe.
The document is not intended to dictate which treatment to use, Dr. Culebras explained. “The guideline leaves room on purpose for clinicians to use their judgment,” he said. “The overall objective of the guideline is to reduce therapeutic uncertainty and not to issue commandments for treatment.”
Although Dr. Bernstein criticized the guidelines for not recommending anticoagulants strongly enough, the recommendations on anticoagulant choice are “reasonable in that they impute potential clinical profiles of patients who might particularly benefit from one NOAC over another, without making a claim that these recommendations are based on solid data,” he said. “This [approach] reflects how doctors make decisions when we don’t have direct comparative studies, and I think that is helpful.”
—Sara Freeman
Suggested Reading
Culebras A, Messé SR, Chaturvedi S, et al. Summary of evidence-based guideline update: prevention of stroke in nonvalvular atrial fibrillation: report of the Guideline Development Subcommittee of the American Academy of Neurology. Neurology. 2014;82(8):716-724.
Autism May Start in Utero
Autism may begin in utero, according to a study of postmortem brain tissue from children with and without autism published online ahead of print March 27 in the New England Journal of Medicine.
The findings imply that layer formation and layer-specific neuronal differentiation are dysregulated during prenatal development. The study also suggests that early recognition and treatment of autism may allow the developing brains of autistic children to construct alternative brain pathways around the patchy defects in the cortex. The result could be improved social functioning and communication, the researchers theorized.
Researchers used gene expression to examine cellular markers in each of the cortical layers, as well as genes associated with autism. Markers for several cortical layers were absent in the brain tissue of 10 of 11 (91%) children with autism and in one of 11 (9%) control children. The areas of disorganization spanned multiple cortical layers, with the most abnormal expression noted in layers 4 and 5; focal disruption of cortical laminar architecture appeared as patches 5 to 7 mm long.
—Mary Jo M. Dales
Suggested Reading
Stoner R, Chow ML, Boyle MP, et al. Patches of disorganization in the neocortex of children with autism. N Engl J Med. 2014;370(13):1209-1219.
Prevalence of Autism Spectrum Disorder Is Increasing
The CDC estimates that about one in 68 US children has autism spectrum disorder, according to findings published in the March 28 issue of Morbidity and Mortality Weekly Report Surveillance Summaries. This prevalence is a 30% increase from the CDC’s estimate of one in 88 children using 2008 data.
The findings also show that autism spectrum disorder continues to be more prevalent in boys than in girls: one in 42 boys had autism spectrum disorder in the latest report, compared with one in 189 girls.
The increased prevalence could be attributed to improved clinician identification of autism, a growing number of autistic children with average to above-average intellectual ability, or a combination of both factors, said Coleen Boyle, PhD, Director of the CDC’s National Center on Birth Defects and Developmental Disabilities (NCBDDD).
The CDC analyzed 2010 data collected by its Autism and Developmental Disabilities Monitoring (ADDM) Network, which provides population-based estimates of autism spectrum disorder prevalence among 8-year-old children at 11 sites in the United States, based on records from community sources that diagnose and provide services to children with developmental disabilities.
Of the 11 sites studied, seven had information available on the intellectual ability of at least 70% of children with autism spectrum disorder. Of the 3,604 children for whom data were available, 31% were classified as having intellectual disability (IQ of 70 or lower), 23% were considered borderline (IQ of 71 to 85), and 46% had IQ scores greater than 85, indicating average or above-average intellectual ability.
“We recognize now that autism is a spectrum, no longer limited to the severely affected,” said Marshalyn Yeargin-Allsopp, MD, Chief of the Developmental Disabilities branch of NCBDDD. “There are children with higher IQs being diagnosed who may not even be receiving special education services, and the numbers may reflect that.”
Non-Hispanic white children were 30% more likely to be diagnosed with autism spectrum disorder than were non-Hispanic black children and about 50% more likely to be diagnosed with autism spectrum disorder than were Hispanic children.
Dr. Boyle stressed the importance of early screening and identification of autism spectrum disorder in children (it can be diagnosed by the time a child reaches age 2) and urged parents to take action if a child shows any signs of developmental delays.
“Community leaders, health professionals, educators, and childcare providers should use these data to ensure that children with autism spectrum disorder are identified as early as possible and connected to the services they need,” said Dr. Boyle.
To help promote early intervention in autism spectrum disorder, the CDC will be launching an awareness initiative called “Birth to Five, Watch Me Thrive,” which aims to provide parents, teachers, and community members with information and resources about developmental milestones and screening for autism.
“Most children with autism are not diagnosed until after age 4,” said Dr. Boyle. “The CDC will continue to promote early identification and research. The earlier a child is identified and connected with services, the better.”
The CDC cited several limitations to the report. First, the surveillance sites were not selected to be representative of the entire United States. Second, population denominators used for this report were based on the 2010 decennial census. Comparisons with previous ADDM findings thus should be interpreted with caution because ADDM reports from nondecennial surveillance years are likely influenced by greater error in the population denominators used for those previous surveillance years, which were based on postcensus estimates. Third, three of the nine sites with access to review children’s education records did not receive permission to do so in all school districts within the site’s overall surveillance area. Fourth, findings that address intellectual ability might not be generalizable to all ADDM sites. Finally, race and ethnicity are presented in broad terms and should not be interpreted as generalizable to all persons within those categories.
—Madhu Rajaraman
Suggested Reading
Developmental Disabilities Monitoring Network Surveillance Year 2010 Principal Investigators. Prevalence of autism spectrum disorder among children aged 8 years—autism and developmental disabilities monitoring network, 11 sites, United States, 2010. MMWR Surveill Summ. 2014;63(Suppl 2):1-21.
Data Support New Cholesterol Treatment Guidelines
Pooled cohort risk equations developed by the American College of Cardiology and the American Heart Association accurately estimate atherosclerotic cardiovascular disease (ACVD) risk, according to an analysis published online ahead of print March 29 in JAMA. The equations thus may be an appropriate guide for clinicians who must decide whether to recommend statins for particular patients.
“The formulas worked well—the rates of heart attack and stroke observed were similar to those predicted by the formulas,” said Paul Muntner, PhD, Professor of Epidemiology at the University of Alabama at Birmingham School of Public Health. “Additionally, participants who were predicted to have high risk were the ones most likely to have heart attacks and strokes, while those who were predicted to have a low risk had low incidence of heart attacks or strokes.”
The equations were part of cholesterol treatment guidelines published in November 2013 in the Journal of the American College of Cardiology. They were designed to help physicians identify which patients to treat and which patients may not benefit from treatment. Among the concerns that the medical community raised about the guidelines was that the risk equations would overestimate the number of people who would have a heart attack or stroke and thus result in overtreatment.
Applying the Equations to REGARDS Data
Dr. Muntner and colleagues examined data for participants enrolled in the Reasons for Geographic and Racial Differences in Stroke (REGARDS) study to assess the risk equations’ performance. The researchers focused on 10,997 individuals without clinical ACVD or diabetes, with a low-density lipoprotein cholesterol level between 70 and 189 mg/dL, and who were not taking statins. At baseline, the REGARDS researchers conducted computer-assisted telephone interviews to collect information on participants’ age, race, sex, smoking status, comorbid conditions, and use of antihypertensive and antidiabetes medications. Health professionals conducted in-home examinations. Investigators spoke by phone with participants every six months to assess new-onset stroke and coronary heart disease events. Dr. Muntner and colleagues defined the outcome for their primary analyses as the first ACVD event (ie, nonfatal or fatal stroke, nonfatal myocardial infarction, or death resulting from coronary heart disease), which was consistent with the definition used to derive the pooled cohort risk equations.
Predicted and Observed Incidences Were Similar
Participants’ mean age was approximately 62. Of the total population, 37.6% were African American and 40.7% were male. Approximately 35% of participants lived in the Stroke Belt, and approximately 15% were current smokers. Individuals with higher 10-year predicted ACVD risk were more likely to be older, African American, male, and current smokers, and they were more likely to take antihypertensive medication.
For participants in the overall REGARDS population with a 10-year predicted ACVD risk of less than 5%, observed and predicted five-year incidence rates were 2.2 and 2.0 per 1,000 person-years, respectively. In higher 10-year predicted ACVD risk strata, five-year observed risk was lower than predicted risk. For people with predicted risk of 10% or greater, the observed and predicted rates were 12.6 and 17.8 per 1,000 person-years, respectively.
Calibration of the equations was better, and overestimation of risk was reduced, among participants for whom statin treatment should be considered based on ACVD risk. Most of the overestimation occurred for participants with a 10-year predicted ACVD risk of 10% or greater. In addition, Hosmer–Lemeshow χ2 analysis indicated good calibration among women, African Americans, and Caucasians. The pooled cohort risk equations performed similarly in the Stroke Belt and in the remainder of the continental United States.
The observed and predicted five-year ACVD incidences per 1,000 person-years were 1.9 and 1.9, respectively, for participants with a 10-year predicted ACVD risk of less than 5%; 4.8 and 4.8, respectively, for individuals with predicted risk of 5% to less than 7.5%; 6.1 and 6.9, respectively, for participants with predicted risk of 7.5% to less than 10%; and 12.0 and 15.1, respectively, for participants with predicted risk of 10% or greater. Among participants with Medicare-linked data, the observed and predicted five-year ACVD incidence per 1,000 person-years were 5.3 and 4.0, respectively, for participants with a predicted risk of less than 7.5%; 7.9 and 6.4, respectively, for participants with predicted risk of 7.5% to less than 10%; and 17.4 and 16.4, respectively, for participants with predicted risk of 10% or greater.
The Guidelines Could Improve the Use of Statins
The authors concluded that the risk equations demonstrated good discrimination and were well calibrated in the population for which they were designed to be used. “We think this is important because there are millions of patients who may benefit from taking statins, and doctors need to identify these patients while not prescribing treatment for patients who may receive little benefit,” said Dr. Muntner.
The study findings may persuade physicians that they can use the equations to obtain valid information. “We hope that showing that the formula works in a large nationwide group of adults will lead doctors to use it,” said Dr. Muntner. “In turn, this [practice] could lead to higher rates of appropriate use of statins and reduction in heart attack and stroke risk.”
The REGARDS study is ongoing, but follow-up at the time of the current analyses was limited to five years. Dr. Muntner’s group plans to perform additional analyses when data from a longer follow-up of participants become available.
—Erik Greb
Suggested Reading
Muntner P, Colantonio LD, Cushman M, et al. Validation of the atherosclerotic cardiovascular disease pooled cohort risk equations. JAMA. 2014 Mar 29 [Epub ahead of print].
Ridker PM, Cook NR. Statins: new American guidelines for prevention of cardiovascular disease. Lancet. 2013;382(9907):1762-1765.
Stone NJ, Robinson J, Lichtenstein AH, et al. 2013 ACC/AHA guideline on the treatment of blood cholesterol to reduce atherosclerotic cardiovascular risk in adults: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines. J Am Coll Cardiol. 2013 Nov 7 [Epub ahead of print].
Pooled cohort risk equations developed by the American College of Cardiology and the American Heart Association accurately estimate atherosclerotic cardiovascular disease (ACVD) risk, according to an analysis published online ahead of print March 29 in JAMA. The equations thus may be an appropriate guide for clinicians who must decide whether to recommend statins for particular patients.
“The formulas worked well—the rates of heart attack and stroke observed were similar to those predicted by the formulas,” said Paul Muntner, PhD, Professor of Epidemiology at the University of Alabama at Birmingham School of Public Health. “Additionally, participants who were predicted to have high risk were the ones most likely to have heart attacks and strokes, while those who were predicted to have a low risk had low incidence of heart attacks or strokes.”
The equations were part of cholesterol treatment guidelines published in November 2013 in the Journal of the American College of Cardiology. They were designed to help physicians identify which patients to treat and which patients may not benefit from treatment. Among the concerns that the medical community raised about the guidelines was that the risk equations would overestimate the number of people who would have a heart attack or stroke and thus result in overtreatment.
Applying the Equations to REGARDS Data
Pooled cohort risk equations developed by the American College of Cardiology and the American Heart Association accurately estimate atherosclerotic cardiovascular disease (ACVD) risk, according to an analysis published online ahead of print March 29, 2014, in JAMA. The equations thus may be an appropriate guide for clinicians who must decide whether to recommend statins for particular patients.
“The formulas worked well—the rates of heart attack and stroke observed were similar to those predicted by the formulas,” said Paul Muntner, PhD, Professor of Epidemiology at the University of Alabama at Birmingham School of Public Health. “Additionally, participants who were predicted to have high risk were the ones most likely to have heart attacks and strokes, while those who were predicted to have a low risk had low incidence of heart attacks or strokes.”
The equations were part of cholesterol treatment guidelines published in November 2013 in the Journal of the American College of Cardiology. They were designed to help physicians identify which patients to treat and which patients may not benefit from treatment. Among the concerns that the medical community raised about the guidelines was that the risk equations would overestimate the number of people who would have a heart attack or stroke and thus result in overtreatment.
Applying the Equations to REGARDS Data
Dr. Muntner and colleagues examined data for participants enrolled in the Reasons for Geographic and Racial Differences in Stroke (REGARDS) study to assess the risk equations’ performance. The researchers focused on 10,997 individuals without clinical ACVD or diabetes, with a low-density lipoprotein cholesterol level between 70 and 189 mg/dL, and who were not taking statins. At baseline, the REGARDS researchers conducted computer-assisted telephone interviews to collect information on participants’ age, race, sex, smoking status, comorbid conditions, and use of antihypertensive and antidiabetes medications. Health professionals conducted in-home examinations. Investigators spoke by phone with participants every six months to assess new-onset stroke and coronary heart disease events. Dr. Muntner and colleagues defined the outcome for their primary analyses as the first ACVD event (ie, nonfatal or fatal stroke, nonfatal myocardial infarction, or death resulting from coronary heart disease), which was consistent with the definition used to derive the pooled cohort risk equations.
Predicted and Observed Incidences Were Similar
Participants’ mean age was approximately 62. Of the total population, 37.6% were African American and 40.7% were male. Approximately 35% of participants lived in the Stroke Belt, and approximately 15% were current smokers. Individuals with higher 10-year predicted ACVD risk were more likely to be older, African American, male, and current smokers, and were more likely to be taking an antihypertensive medication.
For participants in the overall REGARDS population with a 10-year predicted ACVD risk of less than 5%, observed and predicted five-year incidence rates were 2.2 and 2.0, respectively, per 1,000 person-years. In higher 10-year predicted ACVD risk strata, five-year observed risk was lower than predicted risk. For people with predicted risk of 10% or greater, the observed and predicted risks were 12.6 and 17.8, respectively.
Calibration of the equations was better, and overestimation of risk was reduced, among participants for whom statin treatment should be considered based on ACVD risk. Most of the overestimation occurred for participants with a 10-year predicted ACVD risk of 10% or greater. In addition, Hosmer–Lemeshow χ² analysis indicated good calibration among women, African Americans, and Caucasians. The pooled cohort risk equations performed similarly in the Stroke Belt and in the remainder of the continental United States.
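The Hosmer–Lemeshow statistic referenced above quantifies calibration by comparing observed with expected event counts across strata of predicted risk. A minimal sketch of the computation, with invented toy strata (illustrative only, not the study's code):

```python
def hosmer_lemeshow(groups):
    """Hosmer-Lemeshow chi-square statistic over risk strata.

    groups: list of (observed_events, n, mean_predicted_risk) tuples,
    typically deciles of predicted risk.
    """
    chi2 = 0.0
    for observed, n, p in groups:
        expected = n * p  # expected events in this stratum
        chi2 += (observed - expected) ** 2 / (expected * (1 - p))
    return chi2

# Hypothetical strata (observed events, group size, mean predicted risk);
# a perfectly calibrated model contributes 0 from each stratum.
toy = [(10, 100, 0.10), (25, 100, 0.25)]
print(round(hosmer_lemeshow(toy), 3))  # 0.0
```

The resulting statistic is compared against a chi-square distribution (conventionally with degrees of freedom equal to the number of strata minus two); small values indicate good calibration.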
The observed and predicted five-year ACVD incidences per 1,000 person-years were 1.9 and 1.9, respectively, for participants with a 10-year predicted ACVD risk of less than 5%; 4.8 and 4.8, respectively, for individuals with predicted risk of 5% to less than 7.5%; 6.1 and 6.9, respectively, for participants with predicted risk of 7.5% to less than 10%; and 12.0 and 15.1, respectively, for participants with predicted risk of 10% or greater. Among participants with Medicare-linked data, the observed and predicted five-year ACVD incidence per 1,000 person-years were 5.3 and 4.0, respectively, for participants with a predicted risk of less than 7.5%; 7.9 and 6.4, respectively, for participants with predicted risk of 7.5% to less than 10%; and 17.4 and 16.4, respectively, for participants with predicted risk of 10% or greater.
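The incidence figures above are rates per 1,000 person-years. As an illustration of the underlying arithmetic (the counts below are hypothetical, not REGARDS data):

```python
def incidence_per_1000_py(events, person_years):
    """Crude incidence rate per 1,000 person-years of follow-up."""
    return 1000.0 * events / person_years

# Hypothetical: 120 first events observed across 10,000 person-years.
print(incidence_per_1000_py(120, 10_000))  # 12.0
```

Person-years sum each participant's time at risk, so 2,000 people each followed for five years contribute 10,000 person-years.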
The Guidelines Could Improve the Use of Statins
The authors concluded that the risk equations demonstrated good discrimination and were well calibrated in the population for which they were designed to be used. “We think this is important because there are millions of patients who may benefit from taking statins, and doctors need to identify these patients while not prescribing treatment for patients who may receive little benefit,” said Dr. Muntner.
The study findings may persuade physicians that they can use the equations to obtain valid information. “We hope that showing that the formula works in a large nationwide group of adults will lead doctors to use it,” said Dr. Muntner. “In turn, this [practice] could lead to higher rates of appropriate use of statins and reduction in heart attack and stroke risk.”
The REGARDS study is ongoing, but follow-up at the time of the current analyses was limited to five years. Dr. Muntner’s group plans to perform additional analyses when data from a longer follow-up of participants become available.
—Erik Greb
Suggested Reading
Muntner P, Colantonio LD, Cushman M, et al. Validation of the atherosclerotic cardiovascular disease pooled cohort risk equations. JAMA. 2014 Mar 29 [Epub ahead of print].
Ridker PM, Cook NR. Statins: new American guidelines for prevention of cardiovascular disease. Lancet. 2013;382(9907):1762-1765.
Stone NJ, Robinson J, Lichtenstein AH, et al. 2013 ACC/AHA guideline on the treatment of blood cholesterol to reduce atherosclerotic cardiovascular risk in adults: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines. J Am Coll Cardiol. 2013 Nov 7 [Epub ahead of print].
Hospitalist Reviews on Pre-Operative Beta Blockers, Therapeutic Hypothermia after Cardiac Arrest, Colloids vs. Crystalloids for Hypovolemic Shock
In This Edition
Literature At A Glance
A guide to this month’s studies
- Facecards improve familiarity with physician names, not satisfaction
- Pre-operative beta-blockers may benefit some cardiac patients
- Benefit of therapeutic hypothermia after cardiac arrest unclear
- Patients prefer inpatient boarding to ED boarding
- Triple rule outs for chest pain
- Colloids vs. crystalloids for critically ill patients presenting with hypovolemic shock
- Interdisciplinary intervention improves medication compliance, not blood pressure or LDL-C levels
- Edoxaban is noninferior to warfarin in Afib patients
- Beta blockers lower mortality after acute MI in COPD patients
- Low-dose dopamine or low-dose nesiritide in acute heart failure with renal dysfunction
Facecards Improve Familiarity with Physician Names but Not Satisfaction
Clinical question: Do facecards improve patients’ familiarity with physicians and increase satisfaction, trust, and agreement with physicians?
Background: Facecards can improve patients’ knowledge of names and roles of physicians, but their impact on other outcomes is unclear. This pilot trial was designed to assess facecards’ impact on patient satisfaction, trust, or agreement with physicians.
Study design: Cluster, randomized controlled trial (RCT).
Setting: A large teaching hospital in the United States.
Synopsis: Patients (n=138) were randomized to receive either facecards with the name and picture of their hospitalists, as well as a brief description of the hospitalist’s role (n=66), or to receive traditional communication (n=72). There were no significant differences in patient age, sex, or race.
Patients who received a facecard were more likely to correctly identify their hospital physician (89.1% vs. 51.1%; P<0.01) and were more likely to correctly identify the role of their hospital physician than those in the control group (67.4% vs. 16.3%; P<0.01).
Patients who received a facecard rated satisfaction, trust, and agreement slightly higher than those who had not received a card, but the differences were not statistically significant (P=0.27, 0.32, and 0.37, respectively). The authors note that larger studies may be needed to detect a difference in these areas.
Bottom line: Facecards improve patients’ knowledge of the names and roles of hospital physicians but have no clear impact on satisfaction with, trust of, or agreement with physicians.
Citation: Simons Y, Caprio T, Furiasse N, Kriss M, Williams MV, O’Leary KJ. The impact of facecards on patients’ knowledge, satisfaction, trust, and agreement with hospitalist physicians: a pilot study. J Hosp Med. 2014;9(3):137-141.
Pre-Operative Beta Blockers May Benefit Some Cardiac Patients
Clinical question: In patients with ischemic heart disease (IHD) undergoing non-cardiac surgery, do pre-operative beta blockers reduce post-operative major cardiovascular events (MACE) or mortality at 30 days?
Background: Peri-operative beta blocker use has become more restricted, as evidence about which patients derive benefit has become clearer. Opinions and practice vary regarding whether all patients with IHD, or only certain populations within this group, benefit from peri-operative beta blockers.
Study design: Retrospective, national registry-based cohort study.
Setting: Denmark, 2004-2009.
Synopsis: No benefit was found for the overall cohort of 28,263 patients. Patients with IHD and heart failure (n=7,990) had lower risk of MACE (HR=0.75; 95% CI, 0.70-0.87) and mortality (HR=0.80; 95% CI, 0.70-0.92). Patients with IHD and myocardial infarction within two years (n=1,664) had lower risk of MACE (HR=0.54; 95% CI, 0.37-0.78) but not mortality.
Beta blocker dose and compliance were unknown. Whether patients had symptoms or inducible ischemia was not clear.
This study supports the concept that higher-risk patients benefit more from peri-operative beta blockers, but it is not high-grade evidence.
Bottom line: Not all patients with IHD benefit from pre-operative beta blockers; those with concomitant heart failure or recent MI have a lower risk of MACE and/or mortality at 30 days with beta blockers.
Citation: Andersson C, Merie C, Jorgensen M, et al. Association of β-blocker therapy with risks of adverse cardiovascular events and deaths in patients with ischemic heart disease undergoing non-cardiac surgery: a Danish nationwide cohort study. JAMA Intern Med. 2014;174(3):336-344.
Benefit of Therapeutic Hypothermia after Cardiac Arrest Unclear
Clinical question: Does targeted hypothermia (33°C) after cardiac arrest confer benefits compared with targeted temperature management at 36°C?
Background: Therapeutic hypothermia is a current recommendation in resuscitation guidelines after cardiac arrest. Fever develops in many patients after arrest, and it is unclear if the treatment benefit is due to hypothermia or due to the prevention of fever.
Study design: RCT.
Setting: ICUs in Europe and Australia.
Synopsis: The study authors randomized 950 patients who experienced out-of-hospital cardiac arrest to targeted temperature management at either 36°C or 33°C. The goal of this trial was to prevent fever in both groups during the first 36 hours after cardiac arrest. No statistically significant difference in outcomes between these two approaches was found. In the 33°C group, 54% died or had poor neurologic function, compared with 52% in the 36°C group (risk ratio 1.02; 95% CI 0.88 to 1.16; P=0.78).
Given the wide confidence interval, a trial with either more participants or more events might be able to determine whether a true difference in these management approaches exists.
Bottom line: Therapeutic hypothermia at 33°C after out-of-hospital cardiac arrest did not confer a benefit compared with targeted temperature management at 36°C.
Citation: Nielsen N, Wetterslev J, Cronberg T, et al. Targeted temperature management at 33°C versus 36°C after cardiac arrest. N Engl J Med. 2013;369(23):2197-2206.
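The risk ratio and confidence interval quoted in the synopsis above are a standard way to compare event proportions between trial arms. A sketch of the usual log-scale Wald calculation (the arm sizes and counts below are made up for illustration):

```python
import math

def risk_ratio_ci(events1, n1, events2, n2, z=1.96):
    """Risk ratio with an approximate 95% CI on the log scale."""
    p1, p2 = events1 / n1, events2 / n2
    rr = p1 / p2
    # Standard error of log(RR) for two independent proportions.
    se = math.sqrt(1 / events1 - 1 / n1 + 1 / events2 - 1 / n2)
    lower = math.exp(math.log(rr) - z * se)
    upper = math.exp(math.log(rr) + z * se)
    return rr, lower, upper

# Hypothetical arms: 54 events among 100 patients vs. 52 among 100.
rr, lo, hi = risk_ratio_ci(54, 100, 52, 100)
```

When the interval spans 1.0, as in the trial above, the difference between arms is not statistically significant.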
Patients Prefer Inpatient Boarding to Emergency Department Boarding
Clinical question: Do patients who experience overcrowding and long waits in the emergency department (ED) prefer boarding within ED hallways or within inpatient medical unit hallways?
Background: Boarding of admitted patients in EDs can be problematic, especially with regard to patient safety and patient satisfaction. Patient satisfaction data comparing boarding in the ED versus boarding in an inpatient unit hallway is limited.
Study design: Post-discharge, structured, telephone satisfaction survey.
Setting: Suburban, university-based teaching hospital.
Synopsis: A group of patients who experienced hallway boarding in the ED and then hallway boarding on the inpatient medical unit were identified. They were contacted by phone and asked to complete a survey about their experience; 105 of the 110 patients identified agreed to participate. Patients were asked to rate their location preference with regard to various aspects of care, using a five-point Likert scale with the following answers: ED hallway much better, ED hallway better, no preference, inpatient hallway better, and inpatient hallway much better.
The inpatient hallway was the overall preferred location for 85% of respondents. Respondents preferred inpatient boarding with regard to multiple other parameters: rest, 85%; safety, 83%; confidentiality, 82%; treatment, 78%; comfort, 79%; quiet, 84%; staff availability, 84%; and privacy, 84%. For no item was there a preference for boarding in the ED.
Patient demographics in this hospital may differ from other settings and should be considered when applying the results. With Hospital Consumer Assessment of Healthcare Providers and Systems scores and ED throughput being publicly reported, further studies in this area would be valuable.
Bottom line: In a post-discharge telephone survey, patients preferred boarding in inpatient unit hallways rather than boarding in the ED.
Citation: Viccellio P, Zito JA, Sayage V, et al. Patients overwhelmingly prefer inpatient boarding to emergency department boarding. J Emerg Med. 2013;45(6):942-946.
“Triple Rule Outs” for Chest Pain: A Tool to Evaluate the Coronaries but Not Pulmonary Embolism or Aortic Dissection
Clinical question: How does “triple rule out” (TRO) computed tomographic (CT) angiography compare to other imaging modalities in evaluating coronary and other life-threatening etiologies of chest pain, such as pulmonary embolism (PE) and aortic dissection?
Background: TRO CT angiography is a noninvasive technology that evaluates the coronary arteries, thoracic aorta, and pulmonary vasculature simultaneously. Comparison with other tests in the diagnosis of common clinical conditions is useful information for clinical practice.
Study design: Systematic review and meta-analysis.
Setting: Systematic review of 11 studies (one randomized, 10 observational).
Synopsis: Using an enrolled population of 3,539 patients, TRO CT was compared to other imaging modalities on the basis of image quality, diagnostic accuracy, radiation, and contrast volume. When TRO CT was compared to dedicated CT scans, no significant difference in image quality was discovered. TRO CT detected coronary artery disease (CAD) with a sensitivity of 94.3% (95% CI, 89.1% to 97.5%; I²=58.2%) and a specificity of 97.4% (95% CI, 96.1% to 98.5%; I²=91.2%).
An insufficient number of patients with PE or aortic dissection were studied to generate diagnostic accuracy for these conditions. TRO CT involved greater radiation exposure and contrast exposure than non-TRO CT.
This study reports high accuracy of TRO CT in the diagnosis of coronary artery disease. Due to the low prevalence of patients with PE or aortic dissection (<1%), the data cannot be extrapolated to these conditions.
Bottom line: Although TRO CT is highly accurate for detecting coronary artery disease, there is insufficient data to recommend its use for the diagnosis of PE or aortic dissection.
Citation: Ayaram D, Bellolio MF, Murad MH, et al. Triple rule-out computed tomographic angiography for chest pain: a diagnostic systematic review and meta-analysis. Acad Emerg Med. 2013;20(9):861-871.
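Sensitivity and specificity figures like those reported for TRO CT derive from a 2x2 table of test results against a reference standard. A toy calculation (counts invented for illustration, not taken from the meta-analysis):

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Sensitivity and specificity from a 2x2 diagnostic table."""
    sensitivity = tp / (tp + fn)  # true positives among the diseased
    specificity = tn / (tn + fp)  # true negatives among the disease-free
    return sensitivity, specificity

# Hypothetical counts: 94 true positives, 3 false positives,
# 6 false negatives, 97 true negatives.
sens, spec = diagnostic_accuracy(94, 3, 6, 97)
print(sens, spec)  # 0.94 0.97
```

High sensitivity makes a negative result useful for ruling disease out; high specificity makes a positive result useful for ruling it in.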
Colloids vs. Crystalloids for Critically Ill Patients Presenting with Hypovolemic Shock
Clinical question: In critically ill patients admitted to the ICU with hypovolemic shock, does the use of colloid for fluid resuscitation, compared with crystalloid, improve mortality?
Background: The current Surviving Sepsis Campaign guidelines recommend crystalloids as the preferred fluid for resuscitation of patients with hypovolemic shock; however, evidence supporting the choice of intravenous colloid vs. crystalloid solutions for management of hypovolemic shock is weak.
Study design: RCT.
Setting: International, multi-center study.
Synopsis: Researchers randomized 2,857 adult patients who were admitted to an ICU and required fluid resuscitation for acute hypovolemia to receive either crystalloids or colloids.
At 28 days, there were 359 deaths (25.4%) in the colloids group vs. 390 deaths (27.0%) in the crystalloids group (P=0.26). At 90 days, there were 434 deaths (30.7%) in the colloids group vs. 493 deaths (34.2%) in the crystalloids group (P=0.03).
Renal replacement therapy was used in 11.0% of the colloids group vs. 12.5% of the crystalloids group (P=0.19). There were more days alive without mechanical ventilation in the colloids group vs. the crystalloids group at seven days (P=0.01) and at 28 days (P=0.01), and there were more days alive without vasopressor therapy in the colloids group vs. the crystalloids group at seven days (P=0.04) and at 28 days (P=0.03).
Major limitations of the study included the use of open-label fluids, meaning that investigators were not blinded to the type of fluid administered. Moreover, the study compared two therapeutic strategies (colloids vs. crystalloids) rather than two specific types of molecules.
Bottom line: In ICU patients with hypovolemia requiring resuscitation, the use of colloids vs. crystalloids did not result in a significant difference in 28-day mortality; however, 90-day mortality was lower among patients receiving colloids.
Citation: Annane D, Siami S, Jaber S, et al. Effects of fluid resuscitation with colloids vs crystalloids on mortality of critically ill patients presenting with hypovolemic shock: the CRISTAL randomization trial. JAMA. 2013;310(17):1809-1817.
Interdisciplinary Intervention Improves Medication Compliance, Not Blood Pressure or LDL-C Levels
Clinical question: Can intervention by pharmacists and physicians improve compliance to cardio-protective medications?
Background: Adherence to cardio-protective medications in the year after hospitalization for acute coronary syndrome is poor.
Study design: RCT.
Setting: Four Department of Veterans Affairs medical centers.
Synopsis: The intervention consisted of pharmacist-led medication reconciliation, patient education, collaboration between pharmacists and primary care physicians (with or without a cardiologist), and voice messaging. The outcome measured was the proportion of patients adherent to medication regimens, defined as a mean proportion of days covered (PDC) >0.80 in the year after discharge, based on pharmacy refill data for clopidogrel, beta blockers, statins, and ACEI/ARBs.
Two hundred forty-one patients (95.3%) completed the study. In the intervention group, 89.3% of patients were adherent vs. 73.9% in the usual care group (P=0.003). Mean PDC was higher in the intervention group (0.94 vs. 0.87; P<0.001). A greater proportion of intervention patients were adherent to clopidogrel (86.8% vs. 70.7%; P=0.03), statins (93.2% vs. 71.3%; P<0.001), and ACEI/ARBs (93.1% vs. 81.7%; P=0.03), but not beta blockers (88.1% vs. 84.8%; P=0.59). There were no statistically significant differences in the proportion of patients who achieved blood pressure and LDL-C level goals.
Bottom line: An interdisciplinary, multi-faceted intervention increased medication compliance in the year after discharge for ACS but did not improve blood pressure or LDL-C levels.
Citation: Ho PM, Lambert-Kerzner A, Carey EP, et al. Multifaceted intervention to improve medication adherence and secondary prevention measures after acute coronary syndrome hospital discharge. JAMA Intern Med. 2014;174(2):186-193.
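The adherence outcome above rests on the proportion of days covered (PDC), computed from pharmacy refill records. A simplified sketch of the metric (a hypothetical helper, not the study's algorithm, which also had to combine coverage across several drug classes):

```python
def proportion_days_covered(fills, period_days):
    """PDC: fraction of the observation window covered by medication.

    fills: list of (start_day, days_supplied) tuples, with day 0 the
    first day of the observation window.
    """
    covered = set()
    for start, supply in fills:
        # Clip each fill to the observation window; overlapping fills
        # are deduplicated by the set.
        for day in range(max(start, 0), min(start + supply, period_days)):
            covered.add(day)
    return len(covered) / period_days

# Two 30-day fills with a 15-day gap inside a 90-day window.
pdc = proportion_days_covered([(0, 30), (45, 30)], 90)
print(round(pdc, 2))  # 0.67
```

Under the study's threshold, a patient with PDC > 0.80 would be classified as adherent.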
Edoxaban Is Noninferior to Warfarin in Patients with Atrial Fibrillation
Clinical question: What is the long-term efficacy and safety of edoxaban compared with warfarin in patients with atrial fibrillation (Afib)?
Background: Edoxaban is an oral factor Xa inhibitor approved for use in Japan for the prevention of venous thromboembolism after orthopedic surgery. No specific antidote for edoxaban exists, but hemostatic agents can reverse its anticoagulation effect.
Study design: RCT.
Setting: More than 1,300 centers in 46 countries.
Synopsis: Researchers randomized 21,105 patients in a 1:1:1 ratio to receive warfarin (goal INR of 2-3), low-dose edoxaban, or high-dose edoxaban. All patients received two sets of drugs, either active warfarin with placebo edoxaban or active edoxaban (high- or low-dose) with placebo warfarin (with sham INRs drawn), and were followed for a median of 2.8 years.
The annualized rate of stroke or systemic embolic events was 1.5% in the warfarin group, compared with 1.18% in the high-dose edoxaban group (hazard ratio 0.79; P<0.001 for noninferiority) and 1.61% in the low-dose edoxaban group (hazard ratio 1.07; P=0.005 for noninferiority). The annualized rate of major bleeding was 3.43% with warfarin, 2.75% with high-dose edoxaban (hazard ratio 0.80; P<0.001), and 1.61% with low-dose edoxaban (hazard ratio 0.47; P<0.001).
Both edoxaban regimens were noninferior to warfarin for the prevention of stroke or systemic emboli. The rates of cardiovascular events, bleeding, and death from any cause were lower with both doses of edoxaban than with warfarin.
Bottom line: Once-daily edoxaban is noninferior to warfarin for the prevention of stroke or systemic emboli and is associated with lower rates of bleeding and death.
Citation: Giugliano RP, Ruff CT, Braunwald E, et al. Edoxaban versus warfarin in patients with atrial fibrillation. N Engl J Med. 2013;369(22):2093-2104.
Beta Blockers Lower Mortality after Acute Myocardial Infarction in COPD Patients
Clinical question: Does the use and timing of beta blockers in COPD patients experiencing a first myocardial infarction (MI) affect survival after the event?
Background: Beta blockers are effective in reducing mortality and reinfarction after an MI; however, concerns regarding the side effects of beta blockers, such as bronchospasm, continue to limit their use in patients with COPD.
Study design: Population-based cohort study.
Setting: The Myocardial Ischemia National Audit Project, linked to the General Practice Research Database, in the United Kingdom.
Synopsis: Researchers identified 1,063 patients over the age of 18 with COPD admitted to the hospital with a first acute MI. Use of beta blockers during hospitalization was associated with increased overall and one-year survival. Initiation of beta blockers during an MI had a mortality-adjusted hazard ratio of 0.50 (95% CI 0.36 to 0.69; P<0.001; median follow-up time=2.9 years).
Patients already taking beta blockers prior to the MI had an overall survival-adjusted hazard ratio of 0.59 (95% CI 0.44 to 0.79; P<0.001). Both scenarios showed a survival benefit compared with COPD patients who were not prescribed beta blockers. Patients with COPD who received beta blockers, whether during the MI hospitalization or before the event, were younger and had fewer comorbidities, which may account for some of the survival bias.
Bottom line: The use of beta blockers in patients with COPD started prior to, or at the time of, hospital admission for a first MI is associated with improved survival.
Citation: Quint JK, Herrett E, Bhaskaran K, et al. Effect of β blockers on mortality after myocardial infarction in adults with COPD: population-based cohort study of UK electronic healthcare records. BMJ. 2013;347:f6650.
Neither Low-Dose Dopamine nor Low-Dose Nesiritide Improves Renal Dysfunction in Acute Heart Failure Patients
Clinical question: Does low-dose dopamine or low-dose nesiritide added to diuretic therapy enhance pulmonary volume reduction and preserve renal function in patients with acute heart failure and renal dysfunction, compared to placebo?
Background: Small studies have suggested that low-dose dopamine or low-dose nesiritide may be beneficial in enhancing decongestion and improving renal dysfunction; however, there is ambiguity in overall benefit. Some observational studies suggest that dopamine and nesiritide are associated with higher length of stay, higher costs, and greater mortality.
Study design: RCT.
Setting: Twenty-six hospital sites in the U.S. and Canada.
Synopsis: Three hundred sixty patients with acute heart failure and renal dysfunction were randomized to receive either nesiritide or dopamine within 24 hours of admission. Within each of these arms, patients were then randomized, in a double-blinded 2:1 fashion, into active treatment versus placebo groups. Treatment groups were compared to the pooled placebo groups.
Two main endpoints were urine output and change in serum cystatin C, from enrollment to 72 hours. Compared with placebo, low-dose dopamine had no significant effect on urine output or serum cystatin C level. Similarly, low-dose nesiritide had no significant effect on 72-hour urine output or serum cystatin C level.
Other studies have shown these drugs to be potentially harmful. Hospitalists should use caution and carefully interpret the relevant evidence when considering their use.
Bottom line: Neither low-dose nesiritide nor low-dose dopamine improved urine output or serum cystatin C levels at 72 hours in patients with acute heart failure and renal dysfunction.
Citation: Chen HH, Anstrom KJ, Givertz MM, et al. Low-dose dopamine or low-dose nesiritide in acute heart failure with renal dysfunction: The ROSE acute heart failure randomized trial. JAMA. 2013;310(23):2533-2543.
In This Edition
Literature At A Glance
A guide to this month’s studies
- Facecards improve familiarity with physician names, not satisfaction
- Pre-operative beta-blockers may benefit some cardiac patients
- Benefit of therapeutic hypothermia after cardiac arrest unclear
- Patients prefer inpatient boarding to ED boarding
- Triple rule outs for chest pain
- Colloids vs. crystalloids for critically ill patients presenting with hypovolemic shock
- Interdisciplinary intervention improves medication compliance, not blood pressure or LDL-C levels
- Edoxaban is noninferior to warfarin in Afib patients
- Beta blockers lower mortality after acute MI in COPD patients
- Low-dose dopamine or low-dose nesiritide in acute heart failure with renal dysfunction
Facecards Improve Familiarity with Physician Names but Not Satisfaction
Clinical question: Do facecards improve patients’ familiarity with physicians and increase satisfaction, trust, and agreement with physicians?
Background: Facecards can improve patients’ knowledge of names and roles of physicians, but their impact on other outcomes is unclear. This pilot trial was designed to assess facecards’ impact on patient satisfaction, trust, or agreement with physicians.
Study design: Cluster, randomized controlled trial (RCT).
Setting: A large teaching hospital in the United States.
Synopsis: Patients (n=138) were randomized to receive either facecards with the name and picture of their hospitalists, as well as a brief description of the hospitalist’s role (n=66), or to receive traditional communication (n=72). There were no significant differences in patient age, sex, or race.
Patients who received a facecard were more likely to correctly identify their hospital physician (89.1% vs. 51.1%; P< 0.01) and were more likely to correctly identify the role of their hospital physician than those in the control group (67.4% vs. 16.3%; P<0.01).
Patients who received a facecard rated satisfaction, trust, and agreement slightly higher compared with those who had not received a card, but the results were not statistically significant (P values 0.27, 0.32, 0.37, respectively.) The authors note that larger studies may be needed to see a difference in these areas.
Bottom line: Facecards improve patients’ knowledge of the names and roles of hospital physicians but have no clear impact on satisfaction with, trust of, or agreement with physicians.
Citation: Simons Y, Caprio T, Furiasse N, Kriss, M, Williams MV, O’Leary KJ. The impact of facecards on patients’ knowledge, satisfaction, trust, and agreement with hospitalist physicians: a pilot study. J Hosp Med. 2014;9(3):137-141.
Pre-Operative Beta Blockers May Benefit Some Cardiac Patients
Clinical question: In patients with ischemic heart disease (IHD) undergoing non-cardiac surgery, do pre-operative beta blockers reduce post-operative major cardiovascular events (MACE) or mortality at 30 days?
Background: Peri-operative beta blocker use has become more restricted, as evidence about which patients derive benefit has become clearer. Opinions and practice vary regarding whether all patients with IHD, or only certain populations within this group, benefit from peri-operative beta blockers.
Study design: Retrospective, national registry-based cohort study.
Setting: Denmark, 2004-2009.
Synopsis: No benefit was found for the overall cohort of 28,263 patients. Patients with IHD and heart failure (n=7,990) had lower risk of MACE (HR 0.75; 95% CI, 0.70-0.87) and mortality (HR 0.80; 95% CI, 0.70-0.92). Patients with IHD and myocardial infarction within the previous two years (n=1,664) had lower risk of MACE (HR 0.54; 95% CI, 0.37-0.78) but not mortality.
Beta blocker dose and compliance were unknown. Whether patients had symptoms or inducible ischemia was not clear.
This study supports the concept that higher-risk patients benefit more from peri-operative beta blockers, but it is not high-grade evidence.
Bottom line: Not all patients with IHD benefit from pre-operative beta blockers; those with concomitant heart failure or recent MI have a lower risk of MACE and/or mortality at 30 days with beta blockers.
Citation: Andersson C, Merie C, Jorgensen M, et al. Association of β-blocker therapy with risks of adverse cardiovascular events and deaths in patients with ischemic heart disease undergoing non-cardiac surgery: a Danish nationwide cohort study. JAMA Intern Med. 2014;174(3):336-344.
Benefit of Therapeutic Hypothermia after Cardiac Arrest Unclear
Clinical question: Does targeted hypothermia (33°C) after cardiac arrest confer benefits compared with targeted temperature management at 36°C?
Background: Therapeutic hypothermia is a current recommendation in resuscitation guidelines after cardiac arrest. Fever develops in many patients after arrest, and it is unclear if the treatment benefit is due to hypothermia or due to the prevention of fever.
Study design: RCT.
Setting: ICUs in Europe and Australia.
Synopsis: The study authors randomized 950 patients who experienced out-of-hospital cardiac arrest to targeted temperature management at either 36°C or 33°C. The goal of this trial was to prevent fever in both groups during the first 36 hours after cardiac arrest. No statistically significant difference in outcomes between these two approaches was found. In the 33°C group, 54% died or had poor neurologic function, compared with 52% in the 36°C group (risk ratio 1.02; 95% CI 0.88 to 1.16; P=0.78).
Given the wide confidence interval, a trial with either more participants or more events might be able to determine whether a true difference in these management approaches exists.
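The published risk ratio is adjusted, but the unadjusted arithmetic behind such a comparison is straightforward. The Python sketch below uses hypothetical group sizes and event counts chosen only to approximate the reported 54% and 52% rates; the actual per-arm counts are not given in this summary.

```python
import math

def risk_ratio_ci(events_a, n_a, events_b, n_b, z=1.96):
    """Unadjusted risk ratio with a normal-approximation 95% CI on the log scale."""
    p_a, p_b = events_a / n_a, events_b / n_b
    rr = p_a / p_b
    # Standard error of log(RR)
    se = math.sqrt(1/events_a - 1/n_a + 1/events_b - 1/n_b)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical counts (~54% vs. ~52% of roughly 470 patients per arm)
rr, lo, hi = risk_ratio_ci(255, 473, 242, 466)
print(f"RR {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

With these illustrative counts, the unadjusted estimate lands near the published RR of 1.02 (0.88 to 1.16); a confidence interval straddling 1.0 is what makes the result inconclusive rather than proof of equivalence.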
Bottom line: Therapeutic hypothermia at 33°C after out-of-hospital cardiac arrest did not confer a benefit compared with targeted temperature management at 36°C.
Citation: Nielsen N, Wetterslev J, Cronberg T, et al. Targeted temperature management at 33°C versus 36°C after cardiac arrest. N Engl J Med. 2013;369(23):2197-2206.
Patients Prefer Inpatient Boarding to Emergency Department Boarding
Clinical question: Do patients who experience overcrowding and long waits in the emergency department (ED) prefer boarding within ED hallways or within inpatient medical unit hallways?
Background: Boarding of admitted patients in EDs can be problematic, especially with regard to patient safety and patient satisfaction. Patient satisfaction data comparing boarding in the ED versus boarding in an inpatient unit hallway is limited.
Study design: Post-discharge, structured, telephone satisfaction survey.
Setting: Suburban, university-based teaching hospital.
Synopsis: Patients who experienced hallway boarding in the ED followed by hallway boarding on the inpatient medical unit were identified and contacted by phone; 105 of the 110 patients identified agreed to take a survey on their experience. Patients were asked to rate their location preference with regard to various aspects of care, using a five-point Likert scale: ED hallway much better, ED hallway better, no preference, inpatient hallway better, and inpatient hallway much better.
The inpatient hallway was the overall preferred location for 85% of respondents. Respondents also preferred inpatient boarding on multiple individual parameters: rest, 85%; safety, 83%; confidentiality, 82%; treatment, 78%; comfort, 79%; quiet, 84%; staff availability, 84%; and privacy, 84%. For no item was the ED the preferred boarding location.
Patient demographics in this hospital may differ from other settings and should be considered when applying the results. With Hospital Consumer Assessment of Healthcare Providers and Systems scores and ED throughput being publicly reported, further studies in this area would be valuable.
Bottom line: In a post-discharge telephone survey, patients preferred boarding in inpatient unit hallways rather than boarding in the ED.
Citation: Viccellio P, Zito JA, Sayage V, et al. Patients overwhelmingly prefer inpatient boarding to emergency department boarding. J Emerg Med. 2013;45(6):942-946.
“Triple Rule Outs” for Chest Pain: A Tool to Evaluate the Coronaries but Not Pulmonary Embolism or Aortic Dissection
Clinical question: How does “triple rule out” (TRO) computed tomographic (CT) angiography compare to other imaging modalities in evaluating coronary and other life-threatening etiologies of chest pain, such as pulmonary embolism (PE) and aortic dissection?
Background: TRO CT angiography is a noninvasive technology that evaluates the coronary arteries, thoracic aorta, and pulmonary vasculature simultaneously. Comparison with other tests in the diagnosis of common clinical conditions is useful information for clinical practice.
Study design: Systematic review and meta-analysis.
Setting: Systematic review of 11 studies (one randomized, 10 observational).
Synopsis: Using an enrolled population of 3,539 patients, TRO CT was compared to other imaging modalities on the basis of image quality, diagnostic accuracy, radiation, and contrast volume. When TRO CT was compared to dedicated CT scans, no significant difference in image quality was found. TRO CT detected CAD with a sensitivity of 94.3% (95% CI, 89.1% to 97.5%; I²=58.2%) and specificity of 97.4% (95% CI, 96.1% to 98.5%; I²=91.2%).
An insufficient number of patients with PE or aortic dissection were studied to generate diagnostic accuracy for these conditions. TRO CT involved greater radiation exposure and contrast exposure than non-TRO CT.
This study reports high accuracy of TRO CT in the diagnosis of coronary artery disease. Due to the low prevalence of patients with PE or aortic dissection (<1%), the data cannot be extrapolated to these conditions.
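Pooled sensitivity and specificity translate directly into likelihood ratios, which are often easier to apply at the bedside. This short Python sketch applies the standard formulas to the pooled estimates above; it is an illustration, not an analysis from the paper.

```python
def likelihood_ratios(sensitivity, specificity):
    """Convert sensitivity/specificity into positive and negative likelihood ratios."""
    lr_pos = sensitivity / (1 - specificity)   # how strongly a positive result raises disease odds
    lr_neg = (1 - sensitivity) / specificity   # how strongly a negative result lowers disease odds
    return lr_pos, lr_neg

# Pooled estimates reported for TRO CT detection of coronary artery disease
lr_pos, lr_neg = likelihood_ratios(0.943, 0.974)
print(f"LR+ = {lr_pos:.1f}, LR- = {lr_neg:.3f}")
```

An LR+ above 10 and an LR− below 0.1 both shift post-test probability substantially, consistent with the high reported accuracy for coronary disease, but only for that indication.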
Bottom line: Although TRO CT is highly accurate for detecting coronary artery disease, the data are insufficient to recommend its use for the diagnosis of PE or aortic dissection.
Citation: Ayaram D, Bellolio MF, Murad MH, et al. Triple rule-out computed tomographic angiography for chest pain: a diagnostic systematic review and meta-analysis. Acad Emerg Med. 2013;20(9):861-871.
Colloids vs. Crystalloids for Critically Ill Patients Presenting with Hypovolemic Shock
Clinical question: In critically ill patients admitted to the ICU with hypovolemic shock, does the use of colloid for fluid resuscitation, compared with crystalloid, improve mortality?
Background: The current Surviving Sepsis Campaign guidelines recommend crystalloids as the preferred fluid for resuscitation of patients with hypovolemic shock; however, evidence supporting the choice of intravenous colloid vs. crystalloid solutions for management of hypovolemic shock is weak.
Study design: RCT.
Setting: International, multi-center study.
Synopsis: Researchers randomized 2,857 adult patients who were admitted to an ICU and required fluid resuscitation for acute hypovolemia to receive either crystalloids or colloids.
At 28 days, there were 359 deaths (25.4%) in the colloids group vs. 390 deaths (27.0%) in the crystalloids group (P=0.26). At 90 days, there were 434 deaths (30.7%) in the colloids group vs. 493 deaths (34.2%) in the crystalloids group (P=0.03).
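To put the 90-day result in clinical terms, the reported percentages can be converted into an absolute risk reduction and number needed to treat. A minimal Python sketch, using only the percentages quoted above:

```python
import math

def arr_nnt(risk_control, risk_treatment):
    """Absolute risk reduction and number needed to treat."""
    arr = risk_control - risk_treatment
    nnt = math.ceil(1 / arr)  # round up to whole patients
    return arr, nnt

# 90-day mortality: 34.2% with crystalloids vs. 30.7% with colloids
arr, nnt = arr_nnt(0.342, 0.307)
print(f"ARR = {arr:.1%}, NNT = {nnt}")
```

An NNT near 29 would be clinically meaningful if real, but because the 28-day comparison did not reach significance, this later-mortality finding should be interpreted cautiously.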
Renal replacement therapy was used in 11.0% of the colloids group vs. 12.5% of the crystalloids group (P=0.19). There were more days alive without mechanical ventilation in the colloids group vs. the crystalloids group at seven days (P=0.01) and at 28 days (P=0.01), and there were more days alive without vasopressor therapy in the colloids group vs. the crystalloids group at seven days (P=0.04) and at 28 days (P=0.03).
Major limitations included the open-label administration of fluids, meaning treating investigators were not blinded to the fluid type. Moreover, the study compared two therapeutic strategies (colloids vs. crystalloids) rather than two specific molecules.
Bottom line: In ICU patients with hypovolemia requiring resuscitation, the use of colloids vs. crystalloids did not result in a significant difference in 28-day mortality; however, 90-day mortality was lower among patients receiving colloids.
Citation: Annane D, Siami S, Jaber S, et al. Effects of fluid resuscitation with colloids vs crystalloids on mortality of critically ill patients presenting with hypovolemic shock: the CRISTAL randomized trial. JAMA. 2013;310(17):1809-1817.
Interdisciplinary Intervention Improves Medication Compliance, Not Blood Pressure or LDL-C Levels
Clinical question: Can intervention by pharmacists and physicians improve compliance to cardio-protective medications?
Background: Adherence to cardio-protective medications in the year after hospitalization for acute coronary syndrome is poor.
Study design: RCT.
Setting: Four Department of Veterans Affairs medical centers.
Synopsis: The intervention consisted of pharmacist-led medication reconciliation, patient education, collaboration between pharmacists and primary care physicians (with cardiologists when indicated), and voice messaging. The outcome measured was the proportion of patients adherent to medication regimens, defined as a mean proportion of days covered (PDC) >0.80 in the year after discharge, using pharmacy refill data for clopidogrel, beta blockers, statins, and ACEI/ARBs.
Two hundred forty-one patients (95.3%) completed the study. In the intervention group, 89.3% of patients were adherent vs. 73.9% in the usual care group (P=0.003). Mean PDC was higher in the intervention group (0.94 vs. 0.87; P<0.001). A greater proportion of intervention patients were adherent to clopidogrel (86.8% vs. 70.7%; P=0.03), statins (93.2% vs. 71.3%; P<0.001), and ACEI/ARBs (93.1% vs. 81.7%; P=0.03), but not beta blockers (88.1% vs. 84.8%; P=0.59). There were no statistically significant differences in the proportion of patients who achieved blood pressure and LDL-C level goals.
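The adherence metric here, proportion of days covered, is computed from refill records. The sketch below is a simplified Python illustration of the concept; the study's exact algorithm (e.g., handling of overlapping fills or hospital days) is not described in this summary, so treat the details as assumptions.

```python
from datetime import date, timedelta

def proportion_days_covered(fills, window_start, window_end):
    """Simplified PDC: fraction of days in the observation window covered
    by at least one dispensed supply. `fills` is a list of
    (fill_date, days_supply) tuples."""
    covered = set()
    for fill_date, days_supply in fills:
        for offset in range(days_supply):
            day = fill_date + timedelta(days=offset)
            if window_start <= day <= window_end:
                covered.add(day)
    total_days = (window_end - window_start).days + 1
    return len(covered) / total_days

# Hypothetical refill record: three 30-day fills across a 90-day window
fills = [(date(2014, 1, 1), 30), (date(2014, 2, 5), 30), (date(2014, 3, 10), 30)]
pdc = proportion_days_covered(fills, date(2014, 1, 1), date(2014, 3, 31))
print(f"PDC = {pdc:.2f}")  # counted as adherent under the study's PDC > 0.80 threshold
```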
Bottom line: An interdisciplinary, multi-faceted intervention increased medication compliance in the year after discharge for ACS but did not improve blood pressure or LDL-C levels.
Citation: Ho PM, Lambert-Kerzner A, Carey EP, et al. Multifaceted intervention to improve medication adherence and secondary prevention measures after acute coronary syndrome hospital discharge. JAMA Intern Med. 2014;174(2):186-193.
Edoxaban Is Noninferior to Warfarin in Patients with Atrial Fibrillation
Clinical question: What is the long-term efficacy and safety of edoxaban compared with warfarin in patients with atrial fibrillation (Afib)?
Background: Edoxaban is an oral factor Xa inhibitor approved for use in Japan for the prevention of venous thromboembolism after orthopedic surgery. No specific antidote for edoxaban exists, but hemostatic agents can reverse its anticoagulant effect.
Study design: RCT.
Setting: More than 1,300 centers in 46 countries.
Synopsis: Researchers randomized 21,105 patients in a 1:1:1 ratio to receive warfarin (goal INR of 2-3), low-dose edoxaban, or high-dose edoxaban. All patients received two sets of drugs, either active warfarin with placebo edoxaban or active edoxaban (high- or low-dose) with placebo warfarin (and sham INRs drawn), and were followed for a median of 2.8 years.
The annualized rate of stroke or systemic embolic event was 1.50% in the warfarin group, compared with 1.18% in the high-dose edoxaban group (hazard ratio 0.79; P<0.001) and 1.61% in the low-dose edoxaban group (hazard ratio 1.07; P=0.005). The annualized rate of major bleeding was 3.43% with warfarin, 2.75% with high-dose edoxaban (hazard ratio 0.80; P<0.001), and 1.61% with low-dose edoxaban (hazard ratio 0.47; P<0.001).
Both edoxaban regimens were noninferior to warfarin for the prevention of stroke or systemic embolism. The rates of cardiovascular events, bleeding, and death from any cause were lower with both doses of edoxaban than with warfarin.
Bottom line: Once-daily edoxaban is noninferior to warfarin for the prevention of stroke or systemic emboli and is associated with lower rates of bleeding and death.
Citation: Giugliano RP, Ruff CT, Braunwald E, et al. Edoxaban versus warfarin in patients with atrial fibrillation. N Engl J Med. 2013;369(22):2093-2104.
Beta Blockers Lower Mortality after Acute Myocardial Infarction in COPD Patients
Clinical question: Does the use and timing of beta blockers in COPD patients experiencing a first myocardial infarction (MI) affect survival after the event?
Background: Beta blockers are effective in reducing mortality and reinfarction after an MI; however, concerns regarding the side effects of beta blockers, such as bronchospasm, continue to limit their use in patients with COPD.
Study design: Population-based cohort study.
Setting: The Myocardial Ischemia National Audit Project, linked to the General Practice Research Database, in the United Kingdom.
Synopsis: Researchers identified 1,063 patients over the age of 18 with COPD admitted to the hospital with a first acute MI. Use of beta blockers during hospitalization was associated with increased overall and one-year survival. Initiation of beta blockers during an MI had a mortality-adjusted hazard ratio of 0.50 (95% CI 0.36 to 0.69; P<0.001; median follow-up time=2.9 years).
Patients already on beta blockers before the MI had an adjusted hazard ratio for overall survival of 0.59 (95% CI 0.44 to 0.79; P<0.001). Both scenarios showed survival benefits compared with COPD patients who were not prescribed beta blockers. COPD patients given beta blockers, whether during the MI hospitalization or before the event, were younger and had fewer comorbidities, which may account for some of the observed survival benefit.
Bottom line: The use of beta blockers in patients with COPD started prior to, or at the time of, hospital admission for a first MI is associated with improved survival.
Citation: Quint JK, Herrett E, Bhaskaran K, et al. Effect of β blockers on mortality after myocardial infarction in adults with COPD: population-based cohort study of UK electronic healthcare records. BMJ. 2013;347:f6650.
Neither Low-Dose Dopamine nor Low-Dose Nesiritide Improves Renal Dysfunction in Acute Heart Failure Patients
Clinical question: Does low-dose dopamine or low-dose nesiritide added to diuretic therapy enhance pulmonary volume reduction and preserve renal function in patients with acute heart failure and renal dysfunction, compared to placebo?
Background: Small studies have suggested that low-dose dopamine or low-dose nesiritide may enhance decongestion and improve renal function; however, the overall benefit remains unclear. Some observational studies suggest that dopamine and nesiritide are associated with longer length of stay, higher costs, and greater mortality.
Study design: RCT.
Setting: Twenty-six hospital sites in the U.S. and Canada.
Synopsis: Three hundred sixty patients with acute heart failure and renal dysfunction were randomized to receive either nesiritide or dopamine within 24 hours of admission. Within each of these arms, patients were then randomized, in a double-blinded 2:1 fashion, into active treatment versus placebo groups. Treatment groups were compared to the pooled placebo groups.
Two main endpoints were urine output and change in serum cystatin C, from enrollment to 72 hours. Compared with placebo, low-dose dopamine had no significant effect on urine output or serum cystatin C level. Similarly, low-dose nesiritide had no significant effect on 72-hour urine output or serum cystatin C level.
Other studies have shown these drugs to be potentially harmful. Hospitalists should use caution and carefully interpret the relevant evidence when considering their use.
Bottom line: Neither low-dose nesiritide nor low-dose dopamine improved urine output or serum cystatin C levels at 72 hours in patients with acute heart failure and renal dysfunction.
Citation: Chen HH, Anstrom KJ, Givertz MM, et al. Low-dose dopamine or low-dose nesiritide in acute heart failure with renal dysfunction: The ROSE acute heart failure randomized trial. JAMA. 2013;310(23):2533-2543.
In This Edition
Literature At A Glance
A guide to this month’s studies
- Facecards improve familiarity with physician names, not satisfaction
- Pre-operative beta blockers may benefit some cardiac patients
- Benefit of therapeutic hypothermia after cardiac arrest unclear
- Patients prefer inpatient boarding to ED boarding
- Triple rule outs for chest pain
- Colloids vs. crystalloids for critically ill patients presenting with hypovolemic shock
- Interdisciplinary intervention improves medication compliance, not blood pressure or LDL-C levels
- Edoxaban is noninferior to warfarin in Afib patients
- Beta blockers lower mortality after acute MI in COPD patients
- Low-dose dopamine or low-dose nesiritide in acute heart failure with renal dysfunction
Facecards Improve Familiarity with Physician Names but Not Satisfaction
Clinical question: Do facecards improve patients’ familiarity with physicians and increase satisfaction, trust, and agreement with physicians?
Background: Facecards can improve patients’ knowledge of names and roles of physicians, but their impact on other outcomes is unclear. This pilot trial was designed to assess facecards’ impact on patient satisfaction, trust, or agreement with physicians.
Study design: Cluster, randomized controlled trial (RCT).
Setting: A large teaching hospital in the United States.
Synopsis: Patients (n=138) were randomized to receive either facecards with the name and picture of their hospitalists, as well as a brief description of the hospitalist’s role (n=66), or to receive traditional communication (n=72). There were no significant differences in patient age, sex, or race.
Patients who received a facecard were more likely to correctly identify their hospital physician (89.1% vs. 51.1%; P< 0.01) and were more likely to correctly identify the role of their hospital physician than those in the control group (67.4% vs. 16.3%; P<0.01).
Patients who received a facecard rated satisfaction, trust, and agreement slightly higher compared with those who had not received a card, but the results were not statistically significant (P values 0.27, 0.32, 0.37, respectively.) The authors note that larger studies may be needed to see a difference in these areas.
Bottom line: Facecards improve patients’ knowledge of the names and roles of hospital physicians but have no clear impact on satisfaction with, trust of, or agreement with physicians.
Citation: Simons Y, Caprio T, Furiasse N, Kriss, M, Williams MV, O’Leary KJ. The impact of facecards on patients’ knowledge, satisfaction, trust, and agreement with hospitalist physicians: a pilot study. J Hosp Med. 2014;9(3):137-141.
Pre-Operative Beta Blockers May Benefit Some Cardiac Patients
Clinical question: In patients with ischemic heart disease (IHD) undergoing non-cardiac surgery, do pre-operative beta blockers reduce post-operative major cardiovascular events (MACE) or mortality at 30 days?
Background: Peri-operative beta blocker use has become more restricted, as evidence about which patients derive benefit has become clearer. Opinions and practice vary regarding whether all patients with IHD, or only certain populations within this group, benefit from peri-operative beta blockers.
Study design: Retrospective, national registry-based cohort study.
Setting: Denmark, 2004-2009.
Synopsis: No benefit was found for the overall cohort of 28,263 patients. Patients with IHD and heart failure (n=7990) had lower risk of MACE (HR=0.75, 95% CI, 0.70-0.87) and mortality (HR=0.80, 95% CI, 0.70-0.92). Patients with IHD and myocardial infarction within two years (n=1664) had lower risk of MACE (HR=0.54, 95% CI, 0.37-0.78) but not mortality.
Beta blocker dose and compliance were unknown. Whether patients had symptoms or inducible ischemia was not clear.
This study supports the concept that higher-risk patients benefit more from peri-operative beta blockers, but it is not high-grade evidence.
Bottom line: Not all patients with IHD benefit from pre-operative beta blockers; those with concomitant heart failure or recent MI have a lower risk of MACE and/or mortality at 30 days with beta blockers.
Citation: Andersson C, Merie C, Jorgensen M, et al. Association of ß-blocker therapy with risks of adverse cardiovascular events and deaths in patients with ischemic heart disease undergoing non-cardiac surgery: a Danish nationwide cohort study. JAMA Intern Med. 2014;174(3):336-344.
Benefit of Therapeutic Hypothermia after Cardiac Arrest Unclear
Clinical question: Does targeted hypothermia (33°C) after cardiac arrest confer benefits compared with targeted temperature management at 36°C?
Background: Therapeutic hypothermia is a current recommendation in resuscitation guidelines after cardiac arrest. Fever develops in many patients after arrest, and it is unclear if the treatment benefit is due to hypothermia or due to the prevention of fever.
Study design: RCT.
Setting: ICUs in Europe and Australia.
Synopsis: The study authors randomized 950 patients who experienced out-of-hospital cardiac arrest to targeted temperature management at either 36°C or 33°C. The goal of this trial was to prevent fever in both groups during the first 36 hours after cardiac arrest. No statistically significant difference in outcomes between these two approaches was found. In the 33°C group, 54% died or had poor neurologic function, compared with 52% in the 36°C group (risk ratio 1.02; 95% CI 0.88 to 1.16; P=0.78).
Given the wide confidence interval, a trial with either more participants or more events might be able to determine whether a true difference in these management approaches exists.
Bottom line: Therapeutic hypothermia at 33°C after out-of-hospital cardiac arrest did not confer a benefit compared with targeted temperature management at 36°C.
Citation: Nielsen N, Wetterslev J, Cronberg T, et al. Targeted temperature management at 33°C versus 36°C after cardiac arrest. N Engl J Med. 2013;369(23):2197-2206.
Patients Prefer Inpatient Boarding to Emergency Department Boarding
Clinical question: Do patients who experience overcrowding and long waits in the emergency department (ED) prefer boarding within ED hallways or within inpatient medical unit hallways?
Background: Boarding of admitted patients in EDs can be problematic, especially with regard to patient safety and patient satisfaction. Patient satisfaction data comparing boarding in the ED versus boarding in an inpatient unit hallway is limited.
Study design: Post-discharge, structured, telephone satisfaction survey.
Setting: Suburban, university-based teaching hospital.
Synopsis: A group of patients who experienced hallway boarding in the ED and then hallway boarding on the inpatient medical unit were identified. They were contacted by phone and asked to take a survey on their experience; 105 of 110 patients identified agreed. Patients were asked to rate their location preference with regard to various aspects of care. A five-point Likert scale consisting of the following answers was used: ED hallway much better, ED hallway better, no preference, inpatient hallway better, and inpatient hallway much better.
The inpatient hallway was the overall preferred location in 85% of respondents. Respondents preferred inpatient boarding with regard to multiple other parameters: rest, 85%; safety, 83%; confidentiality, 82%; treatment, 78%; comfort, 79%; quiet, 84%; staff availability, 84%; and privacy, 84%. For no item was there a preference for boarding in the ED.
Patient demographics in this hospital may differ from other settings and should be considered when applying the results. With Hospital Consumer Assessment of Healthcare Providers and Systems scores and ED throughput being publicly reported, further studies in this area would be valuable.
Bottom line: In a post-discharge telephone survey, patients preferred boarding in inpatient unit hallways rather than boarding in the ED.
Citation: Viccellio P, Zito JA, Sayage V, et al. Patients overwhelmingly prefer inpatient boarding to emergency department boarding. J Emerg Med. 2013;45(6):942-946.
“Triple Rule Outs” for Chest Pain: A Tool to Evaluate the Coronaries but Not Pulmonary Embolism or Aortic Dissection
Clinical question: How does “triple rule out” (TRO) computed tomographic (CT) angiography compare to other imaging modalities in evaluating coronary and other life-threatening etiologies of chest pain, such as pulmonary embolism (PE) and aortic dissection?
Background: TRO CT angiography is a noninvasive technology that evaluates the coronary arteries, thoracic aorta, and pulmonary vasculature simultaneously. Comparison with other tests in the diagnosis of common clinical conditions is useful information for clinical practice.
Study design: Systematic review and meta-analysis.
Setting: Systematic review of 11 studies (one randomized, 10 observational).
Synopsis: Using an enrolled population of 3,539 patients, TRO CT was compared to other imaging modalities on the basis of image quality, diagnostic accuracy, radiation, and contrast volume. When TRO CT was compared to dedicated CT scans, no significant imaging difference was discovered. TRO CT detected CAD with a sensitivity of 94.3% (95% CI, 89.1% to 97.5%, I2=58.2%) and specificity of 97.4% (95% CI, 96.1% to 98.5%, I2=91.2%).
An insufficient number of patients with PE or aortic dissection were studied to generate diagnostic accuracy for these conditions. TRO CT involved greater radiation exposure and contrast exposure than non-TRO CT.
This study reports high accuracy of TRO CT in the diagnosis of coronary artery disease. Due to the low prevalence of patients with PE or aortic dissection (<1%), the data cannot be extrapolated to these conditions.
Bottom line: Although TRO CT is highly accurate for detecting coronary artery disease, there is insufficient data to recommend its use for the diagnosis of PE or aortic dissection.
Citation: Ayaram D, Bellolio MF, Murad MH, et al. Triple rule-out computed tomographic angiography for chest pain: a diagnostic systematic review and meta-analysis. Acad Emerg Med. 2013;20(9):861-871.
Colloids vs. Crystalloids for Critically Ill Patients Presenting with Hypovolemic Shock
Clinical question: In critically ill patients admitted to the ICU with hypovolemic shock, does the use of colloid for fluid resuscitation, compared with crystalloid, improve mortality?
Background: The current Surviving Sepsis Campaign guidelines recommend crystalloids as the preferred fluid for resuscitation of patients with hypovolemic shock; however, evidence supporting the choice of intravenous colloid vs. crystalloid solutions for management of hypovolemic shock is weak.
Study design: RCT.
Setting: International, multi-center study.
Synopsis: Researchers randomized 2,857 adult patients who were admitted to an ICU and required fluid resuscitation for acute hypovolemia to receive either crystalloids or colloids.
At 28 days, there were 359 deaths (25.4%) in the colloids group vs. 390 deaths (27.0%) in the crystalloids group (P=0.26). At 90 days, there were 434 deaths (30.7%) in the colloids group vs. 493 deaths (34.2%) in the crystalloids group (P=0.03).
Renal replacement therapy was used in 11.0% of the colloids group vs. 12.5% of the crystalloids group (P=0.19). There were more days alive without mechanical ventilation in the colloids group vs. the crystalloids group at seven days (P=0.01) and at 28 days (P=0.01), and there were more days alive without vasopressor therapy in the colloids group vs. the crystalloids group at seven days (P=0.04) and at 28 days (P=0.03).
Major limitations of the study included the use of open-labeled fluids during allocation, so the initial investigators were not blinded to the type of fluid. Moreover, the study compared two therapeutic strategies (colloid vs. crystalloids) rather than two types of molecules.
Bottom line: In ICU patients with hypovolemia requiring resuscitation, the use of colloids vs. crystalloids did not result in a significant difference in 28-day mortality; however, 90-day mortality was lower among patients receiving colloids.
Citation: Annane D, Siami S, Jaber S, et al. Effects of fluid resuscitation with colloids vs crystalloids on mortality of critically ill patients presenting with hypovolemic shock: the CRISTAL randomization trial. JAMA. 2013;310(17):1809-1817.
Interdisciplinary Intervention Improves Medication Compliance, Not Blood Pressure or LDL-C Levels
Clinical question: Can intervention by pharmacists and physicians improve compliance to cardio-protective medications?
Background: Adherence to cardio-protective medications in the year after hospitalization for acute coronary syndrome is poor.
Study design: RCT.
Setting: Four Department of Veterans Affairs medical centers.
Synopsis: The intervention consisted of pharmacist-led medication reconciliation, patient education, pharmacist and PCP +/- cardiologist collaboration, and voice messaging. The outcome measured was the proportion of patients adherent to medication regimens based on a mean proportion of days covered (PDC) >0.80 in the year after discharge, using pharmacy refill data for clopidogrel, beta blockers, statins, and ACEI/ARBs.
Two hundred forty-one patients (95.3%) completed the study. In the intervention group, 89.3% of patients were adherent vs. 73.9% in the usual care group (P=0.003). Mean PDC was higher in the intervention group (0.94 vs. 0.87; P<0.001). A greater proportion of intervention patients were adherent to clopidogrel (86.8% vs. 70.7%; P=0.03), statins (93.2% vs. 71.3%; P<0.001), and ACEI/ARBs (93.1% vs. 81.7%; P=0.03), but not beta blockers (88.1% vs. 84.8%; P=0.59). There were no statistically significant differences in the proportion of patients who achieved blood pressure and LDL-C level goals.
Bottom line: An interdisciplinary, multi-faceted intervention increased medication compliance in the year after discharge for ACS but did not improve blood pressure or LDL-C levels.
Citation: Ho PM, Lambert-Kerzner A, Carey EP, et al. Multifaceted intervention to improve medication adherence and secondary prevention measures after acute coronary syndrome hospital discharge. JAMA Intern Med. 2014;174(2):186-193.
Edoxaban Is Noninferior to Warfarin in Patients with Atrial Fibrillation
Clinical question: What is the long-term efficacy and safety of edoxaban compared with warfarin in patients with atrial fibrillation (Afib)?
Background: Edoxaban is an oral factor Xa inhibitor approved for use in Japan for the prevention of venous thromboembolism after orthopedic surgery. No specific antidote for edoxaban exists, but hemostatic agents can reverse its anticoagulation effect.
Study design: RCT.
Setting: More than 1,300 centers in 46 countries.
Synopsis: Researchers randomized 21,105 patients in a 1:1:1 ratio to receive warfarin (goal INR of 2-3), low-dose edoxaban, or high-dose edoxoban. All patients received two sets of drugs, either active warfarin with placebo edoxaban or active edoxaban (high- or low-dose) and placebo warfarin (with sham INRs drawn), and were followed for a median of 2.8 years.
The annualized rate of stroke or systemic embolic event was 1.5% in the warfarin group, compared with 1.18% in the high-dose edoxaban group (hazard ratio 0.79; P<0.001) and 1.61% in the low-dose edoxaban group (hazard ratio 1.07; P=0.005). Annualized rate of major bleeding was 3.43% with warfarin, 2.75% with high-dose edoxoban (hazard ratio 0.80; P<0.001), and 1.61% with low-dose edoxaban (hazard ratio 0.47; P<0.001).
Both edoxaban regimens were noninferior to warfarin for the prevention of stroke or systemic emboli. The rates of cardiovascular events, bleeding, and death from any cause were lower with both doses of edoxaban than with warfarin.
Bottom line: Once-daily edoxaban is noninferior to warfarin for the prevention of stroke or systemic emboli and is associated with lower rates of bleeding and death.
Citation: Giugliano RP, Ruff CT, Braunwald E, et al. Edoxaban versus warfarin in patients with atrial fibrillation. N Engl J Med. 2013;369(22):2093-2104.
Beta Blockers Lower Mortality after Acute Myocardial Infarction in COPD Patients
Clinical question: Does the use and timing of beta blockers in COPD patients experiencing a first myocardial infarction (MI) affect survival after the event?
Background: Beta blockers are effective in reducing mortality and reinfarction after an MI; however, concerns regarding the side effects of beta blockers, such as bronchospasm, continue to limit their use in patients with COPD.
Study design: Population-based cohort study.
Setting: The Myocardial Ischemia National Audit Project, linked to the General Practice Research Database, in the United Kingdom.
Synopsis: Researchers identified 1,063 patients with COPD, over the age of 18, admitted to the hospital with a first acute MI. Use of beta blockers during hospitalization was associated with increased overall and one-year survival. Initiation of beta blockers during the MI hospitalization was associated with an adjusted hazard ratio for mortality of 0.50 (95% CI 0.36 to 0.69; P<0.001; median follow-up=2.9 years).
Patients already on beta blockers prior to the MI had an adjusted hazard ratio for overall survival of 0.59 (95% CI 0.44 to 0.79; P<0.001). Both scenarios showed survival benefits compared with COPD patients who were not prescribed beta blockers. COPD patients given beta blockers, whether during the MI hospitalization or before the event, were younger and had fewer comorbidities, which may account for some of the observed survival benefit.
Bottom line: The use of beta blockers in patients with COPD started prior to, or at the time of, hospital admission for a first MI is associated with improved survival.
Citation: Quint JK, Herrett E, Bhaskaran K, et al. Effect of β blockers on mortality after myocardial infarction in adults with COPD: population-based cohort study of UK electronic healthcare records. BMJ. 2013;347:f6650.
Neither Low-Dose Dopamine nor Low-Dose Nesiritide Improves Renal Dysfunction in Acute Heart Failure Patients
Clinical question: Does low-dose dopamine or low-dose nesiritide added to diuretic therapy enhance pulmonary volume reduction and preserve renal function in patients with acute heart failure and renal dysfunction, compared to placebo?
Background: Small studies have suggested that low-dose dopamine or low-dose nesiritide may enhance decongestion and improve renal function; however, the overall benefit remains uncertain. Some observational studies suggest that dopamine and nesiritide are associated with longer length of stay, higher costs, and greater mortality.
Study design: RCT.
Setting: Twenty-six hospital sites in the U.S. and Canada.
Synopsis: Three hundred sixty patients with acute heart failure and renal dysfunction were randomized to receive either nesiritide or dopamine within 24 hours of admission. Within each of these arms, patients were then randomized, in a double-blinded 2:1 fashion, into active treatment versus placebo groups. Treatment groups were compared to the pooled placebo groups.
Two main endpoints were urine output and change in serum cystatin C, from enrollment to 72 hours. Compared with placebo, low-dose dopamine had no significant effect on urine output or serum cystatin C level. Similarly, low-dose nesiritide had no significant effect on 72-hour urine output or serum cystatin C level.
Other studies have suggested these drugs may be harmful; hospitalists should weigh the evidence carefully before using either agent in this setting.
Bottom line: Neither low-dose nesiritide nor low-dose dopamine improved urine output or serum cystatin C levels at 72 hours in patients with acute heart failure and renal dysfunction.
Citation: Chen HH, Anstrom KJ, Givertz MM, et al. Low-dose dopamine or low-dose nesiritide in acute heart failure with renal dysfunction: The ROSE acute heart failure randomized trial. JAMA. 2013;310(23):2533-2543.
Evidence May Be Insufficient to Support Cognitive Impairment Screening
Citing a lack of data about the benefits and harms of screening, the US Preventive Services Task Force (USPSTF) has left unchanged the recommendations of its 2003 guidelines on cognitive impairment screening in older adults, according to an update published online ahead of print March 25 in the Annals of Internal Medicine.
“The USPSTF concludes that the current evidence is insufficient to assess the balance of benefits and harms of screening for cognitive impairment,” said Virginia A. Moyer, MD, MPH, in a report on behalf of the USPSTF.
“Evidence on the effect of screening and early detection of mild to moderate dementia on decision-making, planning, or other important patient outcomes is a critical gap in the evidence,” she added. Other research needs include further study of the harms of screening, new interventions that address the changing needs of patients and families, and interventions that affect the long-term clinical direction of mild to moderate dementia.
In its review, USPSTF evaluated 55 studies of instruments that screen for cognitive impairment, of which 46 provided evidence on the sensitivity of dementia screening and 27 provided evidence on screening for mild cognitive impairment. Tests included various tasks to assess at least one cognitive function, such as memory, attention, language, and visuospatial or executive functioning. USPSTF examined studies that used the Mini-Mental State Examination (MMSE), Clock Drawing Test, verbal fluency tests, Informant Questionnaire on Cognitive Decline in the Elderly, Memory Impairment Screen, Mini-Cog Test, Abbreviated Mental Test, and Short Portable Mental Status Questionnaire.
The MMSE was the most evaluated screening tool, with 25 published studies. The mean age of participants ranged from 69 to 95 years, and the mean prevalence of dementia ranged from 1.2% to 38%. The pooled sensitivity from 14 studies for the most commonly reported cut points was 88.3%, and specificity was 86.2%.
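Sensitivity and specificity figures like the pooled 88.3% and 86.2% above come from 2×2 tables of screening results against a dementia diagnosis. A minimal sketch of the calculation, using hypothetical counts rather than the USPSTF pooled data:

```python
# Illustrative sketch (hypothetical counts, not the USPSTF data):
# sensitivity and specificity of a screening cut point from a 2x2 table.

def sens_spec(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)   # screen-positive among those with dementia
    specificity = tn / (tn + fp)   # screen-negative among those without
    return sensitivity, specificity

# Hypothetical cohort: 100 patients with dementia, 900 without,
# classified at a single MMSE cut point.
se, sp = sens_spec(tp=88, fn=12, tn=775, fp=125)
print(f"sensitivity={se:.1%}, specificity={sp:.1%}")
```

Pooled estimates across studies additionally weight each study's table, typically by sample size or inverse variance; the per-study arithmetic is as above.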
Other screening tools that were evaluated “were studied in far fewer studies (four to seven studies each), had limited reproducibility in primary care–relevant populations, and had unknown optimum cut points,” said Dr. Moyer, Adjunct Professor of Pediatrics at Baylor College of Medicine in Houston.
In addition, no trials studied the “direct effect of screening” by comparing screened and unscreened patients and reporting important clinical and decision-making outcomes, said the report’s authors. Nor did any studies report on direct or indirect harms from false-positive or false-negative screening results, psychological harms, unnecessary diagnostic testing, or labeling.
Although this report differs from the 2003 recommendation because it considers screening and treatment for mild cognitive impairment in addition to dementia, and it includes additional information about the test performance of screening instruments, “the overall evidence is insufficient to make a recommendation on screening,” said Dr. Moyer.
—Madhu Rajaraman
Suggested Reading
Moyer VA. Screening for cognitive impairment in older adults: U.S. Preventive Services Task Force recommendation statement. Ann Intern Med. 2014 Mar 25 [Epub ahead of print].
Alzheimer’s Disease Mortality Rate Is Higher Than Reported
Alzheimer’s disease may contribute to almost as many deaths in the United States as heart disease or cancer, researchers reported online ahead of print March 5 in Neurology.
Alzheimer’s disease is the sixth leading cause of death in the US, according to the CDC, whereas heart disease and cancer are numbers one and two, respectively. These numbers are based on what is reported on death certificates.
“Alzheimer’s disease and other dementias are under-reported on death certificates and medical records,” said lead author Bryan D. James, PhD, an epidemiologist at the Rush University Medical Center in Chicago. “Death certificates often list the immediate cause of death, such as pneumonia, rather than listing dementia as an underlying cause.” Dr. James added that attempting to identify a single cause of death does not always accurately reflect the process of dying for most elderly people, as multiple health issues often contribute to the event.
“The estimates generated by our analysis suggest that deaths from Alzheimer’s disease far exceed the numbers reported by the CDC and those listed on death certificates,” he said.
A total of 2,566 people ages 65 and older (average age, 78) received annual testing for dementia. The investigators found that after an average of eight years, 1,090 participants died. A total of 559 participants without dementia at the start of the study developed Alzheimer’s disease. The average time from diagnosis to death was about four years. After death, Alzheimer’s disease was confirmed through autopsy for about 90% of those who were clinically diagnosed.
The death rate was more than four times higher after a diagnosis of Alzheimer’s disease in people ages 75 to 84 and nearly three times higher in people ages 85 and older. More than one-third of all deaths in those age groups were attributable to Alzheimer’s disease.
An estimated 503,400 deaths from Alzheimer’s disease occurred in the US among people older than 75 in 2010, which is five to six times higher than the 83,494 reported by the CDC based on death certificates, noted Dr. James.
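The stated five- to six-fold gap follows directly from the two figures in the article; a quick arithmetic check:

```python
# Back-of-envelope check using the figures reported in the article.
estimated = 503_400   # study's estimated Alzheimer's deaths, US, age >75, 2010
certified = 83_494    # CDC count based on death certificates
ratio = estimated / certified
print(round(ratio, 1))   # 6.0, at the top of the reported five-to-six-fold range
```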
“Determining the true effects of dementia in this country is important for raising public awareness and identifying research priorities regarding this epidemic,” he said.
Suggested Reading
James BD, Leurgans SE, Hebert LE, et al. Contribution of Alzheimer disease to mortality in the United States. Neurology. 2014 March 5 [Epub ahead of print].