Doug Brunk is a San Diego-based award-winning reporter who began covering health care in 1991. Before joining the company, he wrote for the health sciences division of Columbia University and was an associate editor at Contemporary Long Term Care magazine when it won a Jesse H. Neal Award. His work has been syndicated by the Los Angeles Times and he is the author of two books related to the University of Kentucky Wildcats men's basketball program. Doug has a master’s degree in magazine journalism from the S.I. Newhouse School of Public Communications at Syracuse University. Follow him on Twitter @dougbrunk.
Initial shockable rhythm predicts survival in cardiac arrest
SAN DIEGO – Initial shockable rhythm during resuscitation for out-of-hospital cardiac arrest was the strongest predictor of survival, a large retrospective analysis found. However, conversion to a subsequent shockable rhythm also significantly influenced postarrest survival and neurological outcomes.
“Our study supports that some patients with initial nonshockable rhythm can benefit from early rhythm analysis to provide shocks if indicated with a period of CPR rather than delayed rhythm analysis,” lead study author Dr. Marcus E.H. Ong said in an interview in advance of the annual meeting of the National Association of EMS Physicians.
Traditional resuscitation efforts focus on early defibrillation for patients with shockable rhythms, as these patients are thought to have the best prognosis, according to Dr. Ong of the department of emergency medicine at Singapore General Hospital. “However, only a minority of out-of-hospital cardiac arrests present with initial shockable rhythm, especially in Asia,” he said. “A number of patients with initial nonshockable rhythm actually revert to a shockable rhythm after a period of resuscitation. If conversion to subsequent shockable rhythm is a strong predictor of subsequent outcomes, this finding might have an important implication for clinical prognostication and selection of subsequent therapies.”
The researchers retrospectively evaluated all cases of adult out-of-hospital cardiac arrest (OHCA) collected by the Pan-Asian Resuscitation Outcomes Study (PAROS) registry in seven Asian countries during 2009-2012. PAROS is a resuscitation clinical research network established in collaboration with EMS agencies and academic centers in Singapore, Japan, South Korea, Malaysia, Thailand, Taiwan, and the United Arab Emirates. The primary outcome was survival to hospital discharge. The researchers evaluated the outcomes of OHCA in three groups – initial shockable rhythm, subsequent conversion to shockable rhythm, and persistent nonshockable rhythm – and developed a two-stage model to assess the influence of initial rhythm and subsequent conversion rhythm on survival to admission (first stage) and survival to discharge (second stage).
Dr. Ong and his associates reported results from 5,356 OHCA cases with initial shockable rhythm and 33,974 cases with initial nonshockable rhythm. The researchers found that OHCA with initial shockable rhythm and subsequent conversion to shockable rhythm independently predicted survival to hospital discharge (odds ratios of 6.10 and 2.00, respectively). Following adjustment for baseline and prehospital characteristics, subsequent conversion to shockable rhythm significantly improved survival to admission (OR, 1.53), survival to discharge (OR, 2.00), postarrest overall outcomes (OR, 5.12), and cerebral performance outcomes (OR, 5.39).
In the two-stage analysis, Dr. Ong and his associates found that subsequent conversion to shockable rhythm significantly influenced survival to admission (OR, 1.27), survival to discharge (OR, 1.42), good overall outcomes (OR, 2.14), and good cerebral performance outcomes (OR, 2.20).
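For readers curious how adjusted odds ratios of this kind are typically obtained, the sketch below fits a logistic regression of survival on a rhythm-conversion indicator plus a couple of baseline covariates and exponentiates the coefficients. It uses synthetic data and hypothetical variable names (converted, witnessed, age); it is not the PAROS model, data, or covariate set.

```python
# Minimal sketch, not the PAROS analysis: estimating adjusted odds ratios for
# survival with logistic regression on synthetic data. All variable names,
# covariates, and effect sizes below are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "converted": rng.integers(0, 2, n),   # 1 = converted to a shockable rhythm
    "witnessed": rng.integers(0, 2, n),   # 1 = witnessed arrest
    "age": rng.normal(65, 15, n),
})
# Simulate survival with an assumed positive effect of conversion (illustration only)
linpred = -3.0 + 0.7 * df["converted"] + 0.5 * df["witnessed"] - 0.02 * (df["age"] - 65)
df["survived"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-linpred)))

# One stage of an adjusted model, analogous in spirit to "survival to discharge"
model = smf.logit("survived ~ converted + witnessed + age", data=df).fit(disp=False)
print(np.exp(model.params))      # adjusted odds ratios
print(np.exp(model.conf_int()))  # 95% confidence intervals on the odds ratio scale
```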
“We believe these large, multinational, population-based, prospective cohort data show clearly that while survival and neurological outcomes after OHCA were most favorable in patients presenting with initial shockable rhythm, outcomes were much better in those with subsequent shockable rhythm conversion compared to those with persistent nonshockable rhythm,” Dr. Ong said. “The patients with converted shockable rhythm appear to be more similar in terms of outcomes to those with initial shockable rhythm, compared with those with initial nonshockable rhythms.”
He went on to note that initial shockable rhythm is usually used as a selection criterion for postresuscitation care such as therapeutic temperature management, urgent percutaneous coronary intervention, and extracorporeal membrane oxygenation. “However, our study results suggest that this approach ignores a group of initially nonshockable rhythms that will convert to shockable during resuscitation,” he said. “Our results suggest that patients with initial nonshockable rhythms with subsequent conversion to shockable should be given the same therapeutic benefit of these aggressive postresuscitation interventions.”
Dr. Ong acknowledged certain limitations of the study, including the fact that other confounders may have affected the outcomes of OHCA, such as quality and process of resuscitation, in-hospital care, the number of shocks given, and time to shocks. “There might be variations in patient, EMS, and hospital factors including resuscitation protocols which might have affected the outcomes,” he said. “However, this study managed these risks with the use of a multinational registry involving a relatively large number of cardiac arrest cases and usage of a standardized template for data collection and adjustment of the country variance in all analysis models.”
Dr. Ong reported having no financial disclosures.
AT NAEMSP 2016
Key clinical point: Among cases of out-of-hospital cardiac arrest, initial shockable rhythm was the strongest predictor of survival.
Major finding: Out-of-hospital cardiac arrest with initial shockable rhythm and subsequent conversion to shockable rhythm independently predicted survival to hospital discharge (odds ratios of 6.10 and 2.00, respectively).
Data source: An analysis of 5,356 OHCA cases with initial shockable rhythm and 33,974 cases with initial nonshockable rhythm collected by the Pan-Asian Resuscitation Outcomes Study (PAROS) registry.
Disclosures: Dr. Ong reported having no financial disclosures.
Study eyes impact of blood pressure on survival in TBI
SAN DIEGO – In the setting of traumatic brain injury, increases in systolic blood pressure after the nadir are independently associated with improved survival in hypotensive patients.
In addition, even substantial blood pressure increases do not seem to harm normotensive patients. These findings come from a subanalysis of the ongoing National Institutes of Health–funded Excellence in Prehospital Injury Care (EPIC) TBI study.
“Very little is known about the patterns of blood pressure in traumatic brain injury in the field,” principal investigator Dr. Daniel W. Spaite said at the annual meeting of the National Association of EMS Physicians. “For instance, nobody knows whether it’s better to have your blood pressure increasing, stable, or decreasing in the field with regard to outcome, especially mortality. Typical studies that do have EMS data linked only have a single blood pressure measurement documented, so there’s no knowledge of trends in EMS blood pressure in TBI.”
Dr. Spaite, professor and Virginia Piper Endowed Chair of Emergency Medicine at the University of Arizona, Tucson, and his colleagues evaluated the association between mortality and increases in prehospital systolic blood pressure after the lowest recorded measurement in major TBI patients who are part of the EPIC study – the statewide implementation of TBI guidelines from the Brain Trauma Foundation and the NAEMSP. Data sources include the Arizona State Trauma Registry, which has comprehensive hospital outcome data. “The cases are then linked and the EMS patient care reports are carefully abstracted by the EPIC data team,” Dr. Spaite explained. “This included major TBI (which is, clinically, both moderate and severe) and all patients whose lowest systolic BP was between 40 and 300 mm Hg.”
The researchers used logistic regression to examine the association between the increase in EMS systolic blood pressure (SBP) after the lowest EMS measurement and the adjusted probability of death. They then partitioned the study population into four cohorts based on each patient’s lowest prehospital systolic BP (40-89 mm Hg, 90-139 mm Hg, 140-159 mm Hg, and 160-300 mm Hg). In each cohort, they identified the independent association between the magnitude of increase in SBP and the adjusted probability of death.
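As an illustration of that analytic structure (not the EPIC analysis itself), the sketch below partitions synthetic patients into the four cohorts by their lowest prehospital SBP and fits a separate logistic regression of death on the subsequent SBP increase within each cohort. Column names, cut points applied to fake data, and effect sizes are all assumptions for demonstration.

```python
# Illustrative sketch with synthetic data: cohorts defined by the lowest prehospital
# SBP, then a per-cohort logistic regression of death on the subsequent SBP increase.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 10000
df = pd.DataFrame({
    "lowest_sbp": rng.uniform(40, 300, n),   # lowest prehospital systolic BP (mm Hg)
    "sbp_increase": rng.uniform(0, 90, n),   # rise after the nadir (mm Hg)
    "age": rng.normal(45, 20, n),
})
# Hypothetical outcome: a rising pressure helps when the nadir was low, hurts when high
effect = np.where(df["lowest_sbp"] < 90, -0.02, 0.01) * df["sbp_increase"]
df["died"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-(-2.0 + effect))))

bins = [40, 90, 140, 160, 300]
labels = ["40-89", "90-139", "140-159", "160-300"]
df["cohort"] = pd.cut(df["lowest_sbp"], bins=bins, labels=labels, right=False)

for label, grp in df.groupby("cohort", observed=True):
    fit = smf.logit("died ~ sbp_increase + age", data=grp).fit(disp=False)
    print(label, "odds ratio per 1 mm Hg increase:",
          round(float(np.exp(fit.params["sbp_increase"])), 3))
```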
Dr. Spaite reported findings from 14,567 TBI patients. More than two-thirds (68%) were male, and their mean age was 45 years. The researchers observed that, in the hypotensive cohort, mortality dropped significantly if the SBP increased after the lowest SBP. “Improvements were dramatic with increases of 40-80 mm Hg,” he said. In the normotensive group, increases in SBP were associated with very slight reductions in mortality. Even large increases in SBP, such as in the range of 70-90 mm Hg, did not appear to be detrimental.
In the mildly hypertensive group, large systolic increases were associated with higher mortality. “Interestingly, even if your lowest [SBP] is between 140 and 159 mm Hg, until you get above an increase of 40 mm Hg above that, you don’t start seeing increases in mortality,” Dr. Spaite said. In the severely hypertensive group, mortality was higher with any subsequent increase in SBP, “which doesn’t surprise any of us,” he said. “It’s dramatically higher if the increase is large.”
Dr. Spaite emphasized that the current analysis is based on observational data, “so this does not prove that treating hypotension improves outcome. … That direct question is part of the EPIC study itself and awaits the final analysis, hopefully in mid-2017. This is the first large report of blood pressure trends in the prehospital management of TBI.”
He concluded that the current findings in the hypotensive and normotensive cohorts “support guideline recommendations for restoring and optimizing cerebral perfusion in EMS traumatic brain injury management. What is fascinating about the literature is that the focus in TBI has always been on hypotension, but there’s very little information about what’s the best or the optimal blood pressure.”
EPIC is funded by the National Institutes of Health. Dr. Spaite reported having no relevant financial disclosures.
AT NAEMSP 2016
Key clinical point: The optimal systolic blood pressure in traumatic brain injury may be higher than previously thought.
Major finding: In the hypotensive cohort, mortality dropped significantly if the systolic blood pressure increased after the lowest SBP. In the normotensive group, increases in SBP were associated with very slight reductions in mortality.
Data source: An analysis of 14,567 TBI patients enrolled in the National Institutes of Health–funded Excellence in Prehospital Injury Care TBI Study.
Disclosures: EPIC is funded by NIH. Dr. Spaite reported having no relevant financial disclosures.
Study eyes sleep-wake cycles in EMS clinicians
SAN DIEGO – Among emergency medical services clinicians, shift length alone was not associated with sleep duration, sleep quality, or self-reported fatigue at the start or end of shift work, preliminary results from a pilot study showed.
“There is a compelling need to address the sleep health and fatigue of EMS clinicians,” lead study author P. Daniel Patterson, Ph.D., said in an interview. “Reports of EMS clinicians falling asleep while driving ambulances have increased in recent history. There is a palpable concern for the safety of patients and clinicians. Prior studies show half of EMS clinicians report excessive mental and physical fatigue while at work; half get less than 6 hours of sleep daily, half rate their sleep quality as poor, and more than one-third report excessive daytime sleepiness.”
In an effort to characterize the sleep-wake and shift patterns of EMS clinicians working diverse shift schedules, Dr. Patterson, research director for MedCenter Air in the department of emergency medicine at Carolinas HealthCare System Medical Center, Charlotte, N.C., and his associates randomly selected 20 EMS clinicians participating in a randomized pilot trial that used text messaging to determine how real-time assessments of perceived sleepiness and fatigue impact alertness and other behavior during shift work. These individuals provided detailed sleep diaries for 14 consecutive days in addition to real-time reports of their fatigue during shift work. The researchers used descriptive statistics to characterize sleep patterns in relation to shift work and self-reported fatigue during shifts.
Of the 20 study participants, 14 recorded at least one shift during the 14-day observation period. The mean number of shifts among these 14 participants was five, the most common shift duration worked was 24 hours (49%), and they had a mean of 34 hours off between scheduled shifts. Dr. Patterson, who is a practicing, nationally registered paramedic, reported at the annual meeting of the National Association of EMS Physicians that the shift length for eight of the study participants did not vary, while six worked shifts that ranged from 5 to 48 hours. The researchers found that when participants worked an extended shift of 24 hours, they slept significantly less before the shift than when working a shift of shorter duration (P = .0001). Participants who worked 24-hour shifts averaged 4.9 hours of sleep/rest during scheduled shifts, compared with 0.5 hours of total sleep/rest during shifts of shorter duration (P less than .0001). Dr. Patterson also reported that there appeared to be no differences based on shift duration (24 hours vs. other duration) in total sleep during the 24 hours after a scheduled shift.
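The before-shift sleep comparison reported above is, at heart, a two-group comparison of hours slept; the sketch below shows that kind of calculation on made-up numbers. The sample sizes, means, and choice of test are placeholders, not the study's data or statistical method.

```python
# Hypothetical sketch: comparing hours of sleep before 24-hour shifts versus
# shorter shifts with Welch's t-test on synthetic data (placeholder values only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
sleep_before_24h = rng.normal(5.5, 1.5, 40)    # hours slept before a 24-hour shift
sleep_before_short = rng.normal(7.0, 1.5, 40)  # hours slept before a shorter shift

t, p = stats.ttest_ind(sleep_before_24h, sleep_before_short, equal_var=False)
print(f"mean before 24-h shift: {sleep_before_24h.mean():.1f} h, "
      f"mean before shorter shift: {sleep_before_short.mean():.1f} h, P = {p:.4f}")
```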
He underscored the preliminary nature of the findings and acknowledged the potential for selection bias. “Much of our data are based on clinician self-report. Where possible, we used reliable and valid instruments and tools tested in the EMS setting to improve the internal validity.”
In a separate study presented during a poster session at the meeting, Dr. Patterson and his associates reported they are analyzing detailed sleep diaries and other sleep health information to evaluate psychomotor vigilance in more than 100 air-medical clinicians located in the South, Midwest, and Northeast United States.
Dr. Patterson also made a plea for EMS clinicians to participate in studies focused on their sleep health and fatigue. “We need these data to guide the development of fatigue risk management programs.”
Dr. Patterson disclosed that the study involving 14 EMS clinicians was funded by the Pittsburgh Emergency Medicine Foundation, the MedEvac Foundation, and a career development award from the National Center for Research Resources and the National Institutes of Health. He reported that the second study, of air-medical clinicians, is funded by the MedEvac Foundation.
AT NAEMSP 2016
Denosumab boosts BMD in kidney transplant recipients
SAN DIEGO – Twice-yearly denosumab effectively increased bone mineral density in kidney transplant recipients, but was associated with more frequent episodes of urinary tract infections and hypocalcemia, results from a randomized trial showed.
“Kidney transplant recipients lose bone mass and are at increased risk for fractures, more so in females than in males,” Dr. Rudolf P. Wuthrich said at Kidney Week 2015, sponsored by the American Society of Nephrology. Results from previous studies suggest that one in five patients may develop a fracture within 5 years after kidney transplantation.
Considering that current therapeutic options to prevent bone loss are limited, Dr. Wuthrich, director of the Clinic for Nephrology at University Hospital Zurich, and his associates assessed the efficacy and safety of receptor activator of nuclear factor–kappaB ligand (RANKL) inhibition with denosumab to improve bone mineralization in the first year after kidney transplantation. They recruited 108 patients from June 2011 to May 2014. Of these, 90 were randomized within 4 weeks after kidney transplant surgery in a 1:1 ratio to receive subcutaneous injections of 60 mg denosumab at baseline and after 6 months, or no treatment. The study’s primary endpoint was the percentage change in bone mineral density measured by DXA at the lumbar spine at 12 months. The study, known as Denosumab for Prevention of Osteoporosis in Renal Transplant Recipients (POSTOP), was limited to adults who had undergone kidney transplantation within 28 days and who were on standard triple immunosuppression, including a calcineurin antagonist, mycophenolate, and steroids.
Dr. Wuthrich reported results from 46 patients in the denosumab group and 44 patients in the control group. At baseline, their mean age was 50 years, 63% were male, and 96% were white. After 12 months, the total lumbar spine BMD increased by 4.6% in the denosumab group and decreased by 0.5% in the control group, for a between-group difference of 5.1% (P less than .0001). Denosumab also significantly increased BMD at the total hip by 1.9% (P = .035) over that in the control group at 12 months.
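For clarity on how the reported between-group difference is derived, the toy calculation below works the percentage-change arithmetic; the baseline BMD values are invented solely so the numbers land on the reported 4.6%, -0.5%, and 5.1%.

```python
# Toy arithmetic only: percentage change in lumbar spine BMD from baseline in each
# arm, and the between-group difference. Baseline values are hypothetical.
def pct_change(before, after):
    return (after - before) / before * 100.0

change_denosumab = pct_change(1.000, 1.046)  # +4.6% (hypothetical g/cm^2 values)
change_control = pct_change(1.000, 0.995)    # -0.5%
print(f"between-group difference: {change_denosumab - change_control:.1f}%")  # 5.1%
```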
High-resolution peripheral quantitative computed tomography in a subgroup of 24 patients showed that denosumab also significantly increased BMD and cortical thickness at the distal tibia and radius (P less than .05). Two biomarkers of bone resorption, beta C-terminal telopeptide and urine deoxypyridinoline, markedly decreased in the denosumab group, as did two biomarkers of bone formation, procollagen type 1 N-terminal propeptide and bone-specific alkaline phosphatase (P less than .0001).
In terms of adverse events, there were significantly more urinary tract infections in the denosumab group than in the control group (15% vs. 9%, respectively), as well as more episodes of diarrhea (9% vs. 5%) and transient hypocalcemia (3% vs. 0.3%). The rate of serious adverse events was similar between groups, at 17% and 19%, respectively.
“We had significantly increased bone mineral density at all measured skeletal sites in response to denosumab,” Dr. Wuthrich concluded. “We had a significant increase in bone biomarkers and we can say that denosumab was generally safe in a complex population of immunosuppressed kidney transplant recipients. But it was associated with a higher incidence of urinary tract infections. At this point we have no good explanation as to why this is. We also had a few episodes of transient and asymptomatic hypocalcemia.”
The researchers reported having no financial disclosures.
AT KIDNEY WEEK 2015
Key clinical point: Denosumab effectively increased bone mineral density in kidney transplant recipients in the POSTOP trial.
Major finding: After 12 months, total lumbar spine BMD increased by 4.6% in the denosumab group and decreased by 0.5% in the control group, for a between-group difference of 5.1% (P less than .0001).
Data source: POSTOP, a study of 90 patients who were randomized within 4 weeks after kidney transplant surgery in a 1:1 ratio to receive subcutaneous injections of 60 mg denosumab at baseline and after 6 months, or no treatment.
Disclosures: The researchers reported having no financial disclosures.
Novel agent for adult GH deficiency can be administered once weekly
Use of a novel reversible albumin-binding human growth hormone (GH) derivative administered subcutaneously once weekly for 4 weeks was safe and effective in adults with growth hormone deficiency, according to a phase I, randomized, open-label trial.
Results from a recent clinical trial of the agent, known as NNC0195-0092 and being developed by Novo Nordisk, indicated the feasibility of a once-weekly dosing regimen in healthy men (J Clin Endocrinol Metab. 2014;99:E1819-29). The purpose of the current study was to report the first data obtained from a multiple-dose trial of NNC0195-0092 conducted in men and women at three hospitals in Denmark and one in Sweden.
“GH is currently administered as daily subcutaneous injections; however, a long-acting GH formulation that decreases injection frequency may improve treatment adherence and reduce the inconvenience associated with daily injections,” researchers led by Dr. Michael Højby Rasmussen wrote in the article published online Jan. 4 in the Journal of Clinical Endocrinology and Metabolism (2016. doi: 10.1210/jc.2015-1991). They went on to note that the plasma half-life of therapeutic peptides such as GH can be extended through binding to serum albumin, which “has a high affinity and binding capacity for fatty acids, and acylation of fatty acids to therapeutic proteins has been used to facilitate binding of these molecules to circulating albumin. In NNC0195-0092, fatty acids with noncovalent albumin-binding properties have been attached by acylation.”
Dr. Rasmussen of Novo Nordisk, Denmark, and his associates reported results from 25 men and nine women with a mean age of 53 years who were assigned into four cohorts of eight subjects and randomized to receive once-weekly NNC0195-0092 for 4 weeks in doses that ranged from 0.02 to 0.12 mg/kg, or daily injections of Norditropin NordiFlex for 4 weeks with a dose replicating the pretrial dose of somatropin. They found that the number of adverse events was similar at the 0.02, 0.04, and 0.08 mg/kg doses of NNC0195-0092, compared with the daily injections of Norditropin NordiFlex, while the number of adverse events was greatest at the 0.12 mg/kg dose of NNC0195-0092.
“No clinically significant safety and tolerability signals causally related to NNC0195-0092 were identified, nor were any immunogenicity concerns revealed,” the investigators concluded. “The IGF-I profiles were consistent with a once-weekly treatment profile of NNC0195-0092 at a starting dose of 0.02-0.04 mg/kg/wk.”
The trial was supported by Novo Nordisk. Dr. Rasmussen disclosed that he is an employee of the company.
FROM THE JOURNAL OF CLINICAL ENDOCRINOLOGY AND METABOLISM
Key clinical point: Four once-weekly doses of NNC0195-0092 administered to patients with adult growth hormone deficiency were well tolerated.
Major finding: The number of adverse events was similar at the 0.02, 0.04, and 0.08 mg/kg doses of NNC0195-0092, compared with the daily injections of Norditropin NordiFlex, while the number of adverse events was greatest at the 0.12 mg/kg dose of NNC0195-0092.
Data source: A phase I, open-label, randomized study that set out to evaluate the safety and tolerability of multiple once-weekly doses of NNC0195-0092, compared with daily GH in 34 patients with adult growth hormone deficiency.
Disclosures: The trial was supported by Novo Nordisk. Dr. Rasmussen disclosed that he is an employee of the company.
Hospital-acquired pneumonia threatens cervical spinal cord injury patients
SAN DIEGO – The overall rate of hospital-acquired pneumonia following cervical spinal cord injury is about 20%, results from a study of national data demonstrated.
“Cervical spinal cord injury patients are at an increased risk for the development of hospital-acquired pneumonia,” lead study author Dr. Pablo J. Diaz-Collado said in an interview after the annual meeting of the Cervical Spine Research Society.
“Complete cord injuries, longer length of stay, ICU stay and ventilation time lead to significantly increased risk of HAP, which then leads to poor inpatient outcomes,” he said. “It is of crucial importance to keep these risk factors in mind when treating patients with cervical spinal cord injuries. There is a need to optimize the management protocols for these patients to help prevent the development of HAPs.”
Dr. Diaz-Collado, an orthopedic surgery resident at Yale–New Haven (Conn.) Hospital, and his associates identified 5,198 cervical spinal cord injury patients in the 2011 and 2012 National Trauma Data Bank (NTDB) to analyze risk factors for the development of HAP and inpatient outcomes in this population. They used multivariate logistic regression to identify independent associations of various risk factors with the occurrence of HAP.
The researchers found that the overall incidence of HAP among cervical spinal cord injury patients was 20.5%, which amounted to 1,065 patients. Factors independently associated with HAP were complete spinal cord injuries (compared with central cord injuries; OR 1.44; P = .009); longer inpatient length of stay (OR 3.08 for a stay that lasted 7-13 days, OR 10.21 for 21-27 days, and OR 14.89 for 35 days or more; P = .001 or less for all associations); longer ICU stay (OR 2.86 for a stay that lasted 9-11 days, OR 3.05 for 12-14 days, and OR 2.94 for 15 days or more; P less than .001 for all associations); and longer time on mechanical ventilation (OR 2.68 for ventilation that lasted 3-6 days, OR 3.76 for 7-13 days, OR 3.98 for 14-20 days, and OR 3.99 for 21 days or more; P less than .001 for all associations).
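As a rough illustration of how adjusted odds ratios of this kind are obtained, the Python sketch below fits a multivariate logistic regression and exponentiates the coefficients. The variable names and the simulated data are assumptions made for demonstration only; they do not reproduce the NTDB analysis or its results.

# Minimal sketch, assuming simulated data: fit a logistic regression for
# hospital-acquired pneumonia (HAP) on a few candidate risk factors and
# report exponentiated coefficients as odds ratios.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "complete_injury": rng.integers(0, 2, n),   # 1 = complete cord injury (hypothetical coding)
    "los_days": rng.integers(1, 40, n),         # inpatient length of stay
    "vent_days": rng.integers(0, 25, n),        # days on mechanical ventilation
})
# Simulate an outcome loosely related to the predictors (illustration only).
linpred = -2.5 + 0.4 * df.complete_injury + 0.05 * df.los_days + 0.08 * df.vent_days
df["hap"] = rng.binomial(1, 1 / (1 + np.exp(-linpred)))

model = smf.logit("hap ~ complete_injury + los_days + vent_days", data=df).fit(disp=0)
print(np.exp(model.params))  # exponentiated coefficients = odds ratios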
After the researchers controlled for all other risk factors, including patient comorbidities, Injury Severity Score, and other inpatient complications, HAP was associated with increased odds of death (OR 1.60; P = .005), inpatient adverse events (OR 1.65; P less than .001), discharge to an extended-care facility (OR 1.93; P = .001), and longer length of stay (a mean of an additional 10.93 days; P less than .001).
Dr. Diaz-Collado acknowledged that the study is “limited by the quality of the data entry. In addition, the database does not include classifications of fractures, and thus stratification of the analysis in terms of the different kinds of fractures in the cervical spine is not possible. Finally, procedural codes are less accurate and thus including whether or not patients underwent a surgical intervention is less reliable.”
Dr. Diaz-Collado reported having no financial disclosures.
AT CSRS 2015
Key clinical point: About one in five cervical spinal cord injury patients develop hospital-acquired pneumonia.
Major finding: The overall incidence of HAP among cervical spinal cord injury patients was 20.5%.
Data source: A study of 5,198 cervical spinal cord injury patients in the 2011 and 2012 National Trauma Data Bank.
Disclosures: Dr. Diaz-Collado reported having no financial disclosures.
Long spine fusions can give patients improved quality of life
SAN DIEGO – When necessary, long fusions that extend from the C-spine to the pelvis can result in health-related quality of life improvements, results from a multicenter study suggest.
“Patients with spinal deformities will sometimes require long fusion constructs that extend into the cervical spine,” lead study author Dr. Han-Jo Kim said at the annual meeting of the Cervical Spine Research Society. “The prevalence of these cases is increasing, especially as revision surgery for conditions such as proximal junctional kyphosis increases. They are also indicated for other diagnoses, such as progressive cervical deformity and cervical myelopathy, as well as neuromuscular disorders.”
Prior investigations that have examined outcomes for these long constructs usually focus on patients who have had fusions from the upper thoracic spine to the pelvis, added Dr. Kim, an orthopedic spine surgeon at the Hospital for Special Surgery, New York. “To my knowledge, there are no studies in the literature that report on the subset of patients who have had fusions from the cervical spine to the pelvis,” he said. “The question is, even though these revisions may be necessary, does surgical intervention result in improved outcomes for these patients despite the extent of these long fusions?”
In an effort to determine the outcomes and rates of complications in patients who had fusions from the cervical spine to the pelvis, Dr. Kim and his associates conducted a retrospective review of patients who underwent such fusions at four institutions during 2003-2014. The researchers administered the Scoliosis Research Society 22 (SRS-22r) questionnaire, the Oswestry Disability Index (ODI), and the Neck Disability Index (NDI), and collected demographic data (age, body mass index, and follow-up time), medical history including comorbidity data, operative details, radiographic and articular outcomes data, and postoperative complications.
Of 55 patients initially included in the study, complete data were available for 46 (84%). Their average age was 42 years, nearly one-third (30%) were classified as ASA III, 4.2% were smokers, and the average follow-up time was 2.7 years. “The majority of these cases were revision operations, and osteotomies were performed in close to 60% of these patients,” Dr. Kim said. “The average operating time was over 300 minutes, and there was an average of over 2 L of blood loss for these cases.”
The researchers observed improvements in the activity, pain, and mental health domains of the SRS, as well as an improvement in the SRS total score, which improved from an average of 3.0 preoperatively to 3.5 postoperatively (P less than .01). This was greater than the minimally clinically important difference for the SRS-22r. “At least one [minimally clinically important difference] was met in all of the SRS domains, as well as in the NDI,” Dr. Kim said. “There was no change in the ODI, as we would expect for this patient subset.”
Radiographic outcomes improved significantly, he continued, with an average 31-degree correction in maximum kyphosis and a 3.3-cm improvement in sagittal vertical axis. The overall rate of complications was 71%, with major complications comprising about 39% of these cases. Medical complications were high as well (a rate of 61%), as was the rate of surgical complications (43%). More than half of the patients (54%) required reoperation during the follow-up period, and the rate of pseudarthrosis was 29%.
“These results demonstrate improved outcomes following cervical to pelvic fusions, despite the magnitude of their operations and extent of fusion,” Dr. Kim concluded. “In addition, despite the high rate of complications and reoperations, we noted a significant improvement in radiographic and clinical outcomes.”
Dr. Kim disclosed that he is a consultant for Zimmer Biomet and K2M.
AT CSRS 2015
Key clinical point: Following cervical to pelvic fusions, patients can achieve improved clinical and quality of life outcomes.
Major finding: The Scoliosis Research Society total score improved from an average of 3.0 preoperatively to 3.5 postoperatively (P less than .01).
Data source: A retrospective review of 55 patients who underwent fusions from the cervical spine to the pelvis at four institutions during 2003-2014.
Disclosures: Dr. Kim disclosed that he is a consultant for Zimmer Biomet and K2M.
Aerosol foam product found effective for psoriasis vulgaris
An alcohol-free aerosol foam that contains a fixed combination of calcipotriene 0.005% plus betamethasone dipropionate 0.064% provided rapid itch relief and was well tolerated in patients with psoriasis vulgaris, according to results from a randomized, phase III study.
In earlier phase II studies, the product, which is being developed by Denmark-based LEO Pharma, was shown to be effective and had a safe tolerability profile, with no clinically relevant impact on the hypothalamic-pituitary-adrenal axis or on calcium metabolism. In an effort to confirm the efficacy and safety profile seen in the phase II trials, researchers led by Dr. Craig Leonardi, a dermatologist at Saint Louis (Mo.) University, performed the phase III study to compare the aerosol foam product with vehicle when applied once daily for up to 4 weeks in patients with psoriasis vulgaris.
Reporting in the December 2015 issue of the Journal of Drugs in Dermatology, Dr. Leonardi and his associates at 27 outpatient sites enrolled 426 patients aged 18 years and older between June and October of 2013 in a trial known as PSO-FAST (Cal/BD Foam in Psoriasis Vulgaris, a Four-Week, Vehicle-Controlled, Efficacy, and Safety Trial). The primary outcome was the proportion of patients who achieved treatment success at week 4, based on the physician’s global assessment, which was defined as clear or almost clear (for patients with at least moderate disease at baseline) or clear (for patients who had mild disease at baseline). Secondary outcomes included a modified (excluding head) Psoriasis Area and Severity Index (mPASI) and patient’s assessment of itch based on a visual analog scale (J. Drugs Dermatol. 2015;14[12]:1468-77).
Of the 426 patients, 323 received the aerosol foam product, while 103 received the vehicle. Their median age was 51 years, 59% were male, and their mean body mass index was 32.3 kg/m2. At week 4, the researchers found that significantly more patients in the aerosol foam group achieved treatment success, compared with those in the vehicle group (53.3% vs. 4.8%, respectively; odds ratio 30.3; P less than .001).
In addition, the mean mPASI score was significantly lower among patients in the aerosol foam group, compared with those in the vehicle group (a score of 2.0 vs. 5.5, for an adjusted difference of –3.3; P less than .001).
Among the 96% of patients who reported any level of itch at baseline, 36.8% in the aerosol foam group reported a 70% reduction in itch at day 3, compared with 24% of those in the vehicle group (OR 1.9; P = .018).
“This trial also demonstrated itch alleviation led to significant reductions in sleep loss, with 70.8% of patients using Cal/BD aerosol foam reporting a 70% reduction in itch-related sleep loss by week 4,” the researchers wrote.
No clinically significant changes in mean albumin-corrected serum calcium or urinary calcium to creatinine ratio were observed in either group. The researchers concluded that the Cal/BD aerosol foam product “may be a beneficial treatment option for those patients who are candidates for therapy with a superpotent steroid, but desire a therapeutic safety profile similar to that of a less potent steroid.”
FROM JOURNAL OF DRUGS IN DERMATOLOGY
Key clinical point: A foam product containing calcipotriene 0.005% plus betamethasone dipropionate 0.064% was found to be safe and effective for patients with psoriasis vulgaris.
Major finding: At week 4, a significantly greater number of patients in the aerosol foam group achieved treatment success, compared with those in the vehicle group (53.3% vs. 4.8%, respectively; OR 30.3; P less than .001).
Data source: A randomized, phase III trial compared an aerosol foam product containing calcipotriene 0.005% plus betamethasone dipropionate 0.064% with vehicle, applied once daily for up to 4 weeks in 426 patients with psoriasis vulgaris.
Disclosures: Dr. Leonardi reported that he has been a consultant for LEO Pharma. All of the other authors disclosed having current or former ties to LEO Pharma, including two who are currently employed by the company. Dr. Leonardi and many of the other study authors also reported having numerous ties to other pharmaceutical companies.
Residents’ Forum: Docs not up to par on post-call days
SAN DIEGO – If you feel sleepy and out of sorts on a post-call day, compared with a normal work-day, you’re not alone.
Anesthesiology faculty reported significant increases in feeling irritable, jittery, and sleepy, along with significant decreases in feeling confident, energetic, and talkative following an on-call period, according to a study presented at the annual meeting of the American Society of Anesthesiologists.
To date, most studies of partial sleep deprivation in health care settings have focused on residents and interns, and less on medical faculty, said lead study author Dr. Haleh Saadat of the department of anesthesiology and pain medicine at Nationwide Children’s Hospital in Columbus, Ohio. “Our call is 17 hours, from 3 p.m. to 7 a.m.; but the call period at most hospitals is 24 hours, and even longer at some private practices,” she said in an interview.
To examine the effects of partial sleep deprivation on reaction time, simple cognitive skills, and mood status in 21 anesthesiologists, Dr. Saadat and her associates obtained verbal consent from the study participants and measured reaction time, mood states, and eight subjective behavioral characteristics at two different time points: between 6:30 a.m. and 8 a.m. on a regular noncall day of work, and between 6:30 a.m. and 8 a.m. after an overnight call (a shift that runs from 3 p.m. to 7 a.m.). The behavioral characteristics included feeling alert, energetic, anxious, confident, irritable, jittery/nervous, sleepy, and talkative, and the researchers used paired t-tests to compare variable means between regular sleep days and post-call days.
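The comparison described here is a paired design: each clinician is measured once on a noncall morning and again post call, and a paired t-test compares the two sets of scores. The Python sketch below shows how such a comparison might be run; the scores are simulated for illustration and are not the study data.

# Minimal sketch of a paired t-test, assuming simulated mood-disturbance
# scores for the same 21 clinicians on a noncall day and a post-call day.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_subjects = 21
noncall = rng.normal(loc=50, scale=10, size=n_subjects)            # noncall-day scores
post_call = noncall + rng.normal(loc=8, scale=5, size=n_subjects)  # same subjects, post call

t_stat, p_value = stats.ttest_rel(post_call, noncall)
print(f"t = {t_stat:.2f}, P = {p_value:.4f}")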
Performance on the reaction-time task worsened in all 21 subjects after night call (P = .047), while total mood disturbance was significantly higher on post-call days, relative to noncall days (P less than .001).
Of the 21 anesthesiologists, 19 completed all simple cognitive task questions at both time points and reported significant changes in several of these parameters on post-call days, compared with normal workdays.
Post call, participants reported feeling more irritable and sleepy and less confident and energetic (P less than .001 for each), more jittery (P = .003), and less talkative (P less than .001) than on normal workdays.
Coping strategies used to address their sleep deprivation were measured as well, with “most of our subjects using problem solving, followed by seeking social support and avoidance,” Dr. Saadat noted. “People who used avoidance had greater declines in reaction time on post-call days, compared with the rest of the study participants. It didn’t matter whether you were male, female, younger, or older.”
Dr. Saadat called for additional studies to evaluate the neurocognitive impact of partial sleep deprivation on physicians’ on-call duties.
“I would like to see if we can replicate the results in bigger centers,” she said. “If this is what is happening, we may need to pay more attention to faculty’s work hours in both academic and private practice settings – not only among anesthesiologists, but also in other specialties. These observations require a closer look at the potential implications for patients’ and professionals’ safety.”
The researchers reported no financial disclosures.
As a surgical resident, I have experienced firsthand the “drunk-tired” phenomenon, and to be honest, I do not believe it to be such a rare occurrence. “Drunk-tired” may be eloquently defined as being so tired you start behaving like you’re drunk, without actually consuming any alcohol of course.
The first manuscript relating fatigue among shift workers to performance impairment was published in 1997 by Dawson et al., demonstrating that moderate levels of fatigue actually produce more impairment than being legally intoxicated (Nature 1997;388:235). It didn’t take much of a leap to translate these observations to health care workers who work long hours, do shift work, and are on call at times for more than 24 hours at a stretch. Recently, at the annual meeting of the American Society of Anesthesiologists in San Diego, Dr. Haleh Saadat from Ohio presented her study on the effects of partial sleep deprivation in staff anesthesiologists, which found significantly impaired reaction times and cognitive skills and greater mood disturbance on post-call days, compared with normal workdays. No surprise there, as this is in line with what Dawson and his colleagues published nearly two decades ago. This study can certainly be translated to medical students, residents, fellows, and staff across the breadth of specialties in medicine. But, in my opinion, what’s the point? I can already foresee what these studies are going to demonstrate, namely a clean sweep of all forms of cognitive and motor impairments when a subject is sleep deprived. The question becomes how we translate all of this information into action that changes the lives of health care professionals and, more importantly, improves patient safety. Understandably, this is a loaded question, and I am simply too exhausted to wrap my head around it.
So, next time you’re post call, feeling irritable, discoordinated, and disinhibited, just remember: you’re as good as drunk and you should probably sleep it off.
Dr. Laura Drudi is the resident medical editor for Vascular Specialist.
SAN DIEGO – If you feel sleepy and out of sorts on a post-call day, compared with a normal workday, you’re not alone.
Anesthesiology faculty reported significant increases in feeling irritable, jittery, and sleepy, along with significant decreases in feeling confident, energetic, and talkative following an on-call period, according to a study presented at the annual meeting of the American Society of Anesthesiologists.
To date, most studies of partial sleep deprivation in health care settings have focused on residents and interns, and less on medical faculty, said lead study author Dr. Haleh Saadat of the department of anesthesiology and pain medicine at Nationwide Children’s Hospital in Columbus, Ohio. “Our call is 17 hours, from 3 p.m. to 7 a.m.; but the call period at most hospitals is 24 hours, and even longer at some private practices,” she said in an interview.
To examine the effects of partial sleep deprivation on reaction time, simple cognitive skills, and mood status in 21 anesthesiologists, Dr. Saadat and her associates obtained verbal consent from the study participants and measured reaction time, mood states, and eight subjective behavioral characteristics at two different time points: between 6:30 a.m. and 8 a.m. on a regular noncall day of work, and between 6:30 a.m. and 8 a.m. after an overnight call (a shift that runs from 3 p.m. to 7 a.m.). The behavioral characteristics included feeling alert, energetic, anxious, confident, irritable, jittery/nervous, sleepy, and talkative, and the researchers used paired t-tests to compare variable means between regular sleep days and post-call days.
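For readers interested in the statistics, a paired t-test simply asks whether the mean within-subject difference between the two time points differs from zero. The following is a minimal illustrative sketch in Python; it is not the authors’ analysis code, and the reaction-time values are hypothetical placeholders rather than study data:

from scipy import stats

# Hypothetical reaction times (ms) for the same subjects measured on a
# regular noncall morning and again post call; placeholder values only.
noncall_day = [310, 295, 330, 305, 320]
post_call = [342, 318, 361, 327, 349]

# Paired t-test: tests whether the mean within-subject change is zero
t_stat, p_value = stats.ttest_rel(noncall_day, post_call)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")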
Performance on the reaction time task worsened in all 21 subjects after night call (P = .047), while total mood disturbance was significantly higher on post-call days, relative to noncall days (P less than .001).
Of the 21 anesthesiologists, 19 completed all of the simple cognitive task questions at both time points and reported significant changes in several of the subjective behavioral characteristics on post-call days, compared with normal workdays.
Specifically, participants felt more irritable and more sleepy and less confident and less energetic post call (P less than .001 for each), as well as more jittery (P = .003) and less talkative (P less than .001).
Coping strategies used to address their sleep deprivation were measured as well, with “most of our subjects using problem solving, followed by seeking social support and avoidance,” Dr. Saadat noted. “People who used avoidance had greater declines in reaction time on post-call days, compared with the rest of the study participants. It didn’t matter whether you were male, female, younger, or older.”
Dr. Saadat called for additional studies to evaluate the neurocognitive impact of the partial sleep deprivation associated with physicians’ on-call duties.
“I would like to see if we can replicate the results in bigger centers,” she said. “If this is what is happening, we may need to pay more attention to faculty’s work hours in both academic and private practice settings – not only among anesthesiologists, but also in other specialties. These observations require a closer look at the potential implications for patients’ and professionals’ safety.”
The researchers reported no financial disclosures.