New Tests May Accurately Detect Creutzfeldt–Jakob Disease
Two minimally invasive assays for detecting prions that are diagnostic of Creutzfeldt–Jakob disease (CJD) in living patients show promise, according to preliminary studies published August 7 in the New England Journal of Medicine.
One assay tests epithelial samples obtained from nasal brushings, and the other tests urine samples. Both tests can be used in patients suspected of having the sporadic, inherited, or acquired forms of CJD, such as variant CJD and iatrogenic CJD. Both assays had sensitivities and specificities ranging from 93% to 100% in small patient populations in these exploratory studies, a range that exceeds the diagnostic accuracy of CSF testing.
If these findings are replicated in larger studies, both assays will have the potential for establishing a definitive diagnosis of CJD in clinical settings. The test that uses nasal brushings may establish a definitive diagnosis earlier in the course of the disease than has been possible previously, thus potentially enabling intervention for this fatal neurodegenerative disorder.
In addition, the incidental finding that simple brushing of the olfactory mucosa yields a greater quantity of prion seeds than is found in CSF suggests that infectivity may be present in the nasal cavity, which has important biosafety implications, the researchers noted.
Epithelial Test Had 100% Specificity
In the first report, investigators applied real-time quaking-induced conversion technology to olfactory epithelium samples from 31 patients who had rapidly progressive dementia and were referred for evaluation of possible or probable CJD. These patients concurrently underwent CSF sampling. Controls were 12 patients with other neurodegenerative disorders, primarily Alzheimer’s disease or Parkinson’s disease, and 31 patients who had no neurologic disorders, said Christina D. Orrú, PhD, a researcher at the Laboratory of Persistent Viral Diseases at the National Institute of Allergy and Infectious Diseases’ Rocky Mountain Laboratories in Hamilton, Montana, and her colleagues.
Obtaining the nasal brushings was described as a gentle procedure in which unsedated patients were first given a local vasoconstrictor applied with a nasal tampon, and then had a fiber-optic rhinoscope with a disposable sheath inserted into the nasal cavity to locate the olfactory mucosal lining of the nasal vault. A sterile, disposable brush was inserted alongside the rhinoscope, gently rolled on the mucosal surface, withdrawn, and immersed in saline solution in a centrifuge tube for further preparation.
The assay yielded positive results for all 15 patients who had definite sporadic CJD, 13 of the 14 who had probable sporadic CJD, and both patients who had inherited CJD. In contrast, all 43 controls had negative results. This performance represents a sensitivity of 97% and a specificity of 100% in this study population. In comparison, testing of CSF samples from the same patients had a sensitivity of 77%, said Dr. Orrú and her associates.
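Those figures follow directly from the counts above. The short Python sketch below verifies the arithmetic; it is illustrative only and is not the authors’ code.

    # Illustrative check of the reported accuracy figures; counts are taken
    # from the study summary above. This is not the authors' analysis code.
    true_positives = 15 + 13 + 2  # definite sporadic, probable sporadic, inherited CJD
    false_negatives = 1           # the single probable sporadic case the assay missed
    true_negatives = 43           # all controls tested negative
    false_positives = 0

    sensitivity = true_positives / (true_positives + false_negatives)
    specificity = true_negatives / (true_negatives + false_positives)

    print(f"Sensitivity: {sensitivity:.0%}")  # 97%
    print(f"Specificity: {specificity:.0%}")  # 100%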
The substantial prion seeding in the olfactory mucosa, which was of greater magnitude than that in the CSF, raises the possibility that CJD prions could contaminate patients’ nasal discharges. “Nasal and aerosol-borne transmission of prion diseases have been documented in animal models, but there is no epidemiologic evidence for aerosol-borne transmission of sporadic CJD” to date, the investigators wrote.
Medical instruments that come into contact with the nasal mucosa may become contaminated with prions, “which poses the question of whether iatrogenic transmission is possible. Therefore, further study of possible biohazards ... is warranted,” the authors concluded.
Urine Test Was Highly Sensitive
In the second study, Fabio Moda, PhD, then a postdoctoral fellow at the Mitchell Center for Research in Alzheimer’s Disease and Related Brain Disorders at the University of Texas in Houston, and his associates assayed urine samples for minute quantities of the misfolded prion protein using an extensive amplification technology. The group tested samples from 68 patients with sporadic CJD, 14 with variant CJD, and 156 controls. The control group included four patients with genetic prion diseases, 50 with other neurodegenerative disorders (eg, Alzheimer’s disease, Parkinson’s disease, frontotemporal dementia, motor neuron disease, and progressive supranuclear palsy), 50 patients with nondegenerative neurologic disorders (chiefly cerebrovascular disease, multiple sclerosis, epilepsy, brain tumors, autoimmune encephalitis, and meningitis), and 52 healthy adults.
This assay had a sensitivity of 93% and a specificity of 100% in distinguishing CJD from other brain disorders and from brain health in this patient population, said the authors. The quantities of the prion protein excreted in the urine were extremely small, so the researchers did not address the potential for infectivity in this study.
Better Specificity Estimates Are Needed
These findings are encouraging because clinicians and researchers have long sought a sensitive and minimally invasive diagnostic tool specifically targeted to the protein that causes all forms of CJD, said Colin L. Masters, MD, Deputy Director of Mental Health at the Florey Institute of Neuroscience and Mental Health, University of Melbourne, in an accompanying editorial.
It will be important for additional studies to determine more precise estimates of the tests’ specificities; the wide confidence intervals reported leave open the possibility of a clinically meaningful false-positive rate.
CJD “is extremely uncommon, and a test without near-perfect specificity may also result in many false positive results if it is applied to patients with a low probability of having the disease,” said Dr. Masters. “In these circumstances, it is important to highlight the preliminary nature of these studies.”
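Dr. Masters’ caution is a standard Bayesian point: at a low pretest probability, even a highly specific test returns mostly false positives. The sketch below illustrates the effect with hypothetical numbers; the pretest probabilities and the 99% specificity are assumptions for illustration, not figures from the studies.

    # Positive predictive value (PPV) from test characteristics and pretest
    # probability. The specificity and pretest probabilities are hypothetical.
    def ppv(sensitivity, specificity, pretest_probability):
        true_pos = sensitivity * pretest_probability
        false_pos = (1 - specificity) * (1 - pretest_probability)
        return true_pos / (true_pos + false_pos)

    # Referral population with rapidly progressive dementia: high pretest probability.
    print(f"Pretest 50%:   PPV = {ppv(0.97, 0.99, 0.50):.0%}")    # ~99%
    # Low-probability setting (a hypothetical 1-in-10,000 pretest probability).
    print(f"Pretest 0.01%: PPV = {ppv(0.97, 0.99, 0.0001):.0%}")  # ~1%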
Moreover, the finding that abnormal prion protein seeds are found in the olfactory mucosa “at concentrations equivalent to those in diseased brain, and several logs greater than those in CSF,” has implications for infection control. “Some experts have [already] recommended appropriate decontamination of surgical instruments that come into contact with the olfactory epithelium of patients at high risk for CJD,” he concluded.
—Mary Ann Moon
Suggested Reading
Orrú CD, Bongianni M, Tonoli G, et al. A test for Creutzfeldt-Jakob disease using nasal brushings. N Engl J Med. 2014;371(6):519-529.
Moda F, Gambetti P, Notari S, et al. Prions in the urine of patients with variant Creutzfeldt-Jakob disease. N Engl J Med. 2014;371(6):530-539.
Masters CL. Shaken, not sonicated? N Engl J Med. 2014;371(6):571-572.
Serum Albumin and Creatinine Levels May Indicate ALS Disease Severity
Serum levels of albumin and creatinine are independent biomarkers of disease severity in men and women with amyotrophic lateral sclerosis (ALS), according to a report published online ahead of print July 21 in JAMA Neurology. Lower levels of these markers denote more severe disease and predict a shorter survival time.
To identify potential correlations between hematologic biomarkers and ALS severity, researchers analyzed data for 638 patients from a regional registry of patients diagnosed between 2007 and 2011 in the Piemonte and Valle d’Aosta areas of Italy. The 352 men and 286 women underwent complete physical examinations at the time of diagnosis, which included tests for 17 serum biomarkers. The only two serum biomarkers that correlated with ALS severity were albumin, which reflected inflammation, and creatinine, which reflected muscle wasting, said Adriano Chiò, MD, Associate Professor of Neuroscience at the University of Turin in Italy, and his associates.
Both biomarkers showed an inverse dose–response relationship with clinical function at diagnosis in men and women. The biomarkers had sensitivity and specificity values for predicting one-year mortality that were similar to those of “the best established prognostic factors” for ALS, such as forced vital capacity, age, and scores on the ALS Functional Rating Scale-Revised, said the investigators.
Dr. Chiò and his colleagues performed a validation study in a cohort of 122 patients (54 men and 68 women) at all stages of ALS who were treated at an ALS tertiary care center in Italy. This study confirmed the findings of the discovery cohort. “Creatinine and albumin are reliable and easily detectable blood markers of the severity of motor dysfunction in ALS and could be used in defining patients’ prognosis at the time of diagnosis,” the investigators stated.
—Mary Ann Moon
Suggested Reading
Bozik ME, Mitsumoto H, Brooks BR, et al. A post hoc analysis of subgroup outcomes and creatinine in the phase III clinical trial (EMPOWER) of dexpramipexole in ALS. Amyotroph Lateral Scler Frontotemporal Degener. 2014 Aug 15:1-8 [Epub ahead of print].
Chiò A, Calvo A, Bovio G, et al. Amyotrophic lateral sclerosis outcome measures and the role of albumin and creatinine: a population-based study. JAMA Neurol. 2014 Jul 21 [Epub ahead of print].
Pagani M, Chiò A, Valentini MC, et al. Functional pattern of brain FDG-PET in amyotrophic lateral sclerosis. Neurology. 2014 Aug 13 [Epub ahead of print].
Joint Committee Offers Recommendations for Postconcussion Return to Play
Privacy laws can present a challenge to physicians managing athletes with concussion, particularly if those athletes want to return to play against the physician’s advice, according to a position paper on sports-related concussion published online ahead of print July 9 in Neurology. Waivers may help physicians avoid this challenge, however.
“Evaluating and managing sports-related concussion raises a variety of distinctive ethical and legal issues for physicians, especially relating to return-to-play decisions,” said Matthew P. Kirschen, MD, PhD, a neurologist at the Children’s Hospital of Philadelphia, and his colleagues. The official position paper was written by the Ethics, Law, and Humanities Committee, a joint committee of the American Academy of Neurology, the American Neurological Association, and the Child Neurology Society.
Lack of appropriate physician training is also a major concern in sports-related concussion. A survey by the American Academy of Neurology found that although most neurologists see patients with sports-related concussion, few have had formal or informal training on managing concussion.
Potential Role for Waivers
One of the most challenging aspects of managing sports-related concussion is the decision about when the athlete can return to play, which can be problematic if he or she wants to return to play prematurely. Athletes may ignore their physician’s advice or seek a physician who will approve their return to play, which may bring the original physician into conflict with privacy laws that restrict the sharing of personal health information without the patient’s consent.
“The evaluating physician could find himself or herself in the difficult position of being legally restricted from sharing a concussion evaluation with the athlete’s coaches and school personnel, even though making such a disclosure might be in the best interest of the athlete’s health,” said the authors.
In response to this problem, some institutions now require athletes to sign waivers that allow personal health information to be shared between the physician affiliated with the school department and the coaches and other team or school staff.
Youth Sports Concussion Laws Vary by State
All 50 states have adopted youth sports concussion laws that address questions related to education, removal from play, and return to play. Statutes differ, however, about which individuals are authorized to clear an athlete to return to the field. Some statutes specify that a physician must make this determination, while others allow athletic trainers, nurse practitioners, and physician assistants to do so. “States do not uniformly require that individuals providing clearance be trained in the evaluation and management of concussion,” said the authors.
Physicians responsible for the care of athletes, either on or off the sidelines, should ensure that they have appropriate training and experience in recognizing, evaluating, and managing concussion and potential brain injury, the authors of the report advised. State-based youth sports concussion laws generally have a low removal-from-play threshold to protect young athletes from harm, which may encourage coaches, parents, and athletes to take the risks of concussion seriously.
A Concussion Registry Could Aid Understanding
Discussion of sports-related concussion is timely, said Ellen Deibert, MD, a neurologist at Wellspan Neurology in York, Pennsylvania, in an accompanying editorial. The Centers for Disease Control and Prevention recently estimated that 1.6 to 3.8 million sports- and recreation-related concussions occur each year, up from a previous estimate of 300,000 per year.
“Overall, the article is a refreshing reminder of the issues surrounding the treatment of sports-related concussion and the need for continued education and research on this topic,” said Dr. Deibert.
The position paper’s authors call for the establishment of a concussion registry to improve understanding of concussion. Such a registry “would need to be interdisciplinary and in collaboration with other subspecialists already involved in concussion management,” said Dr. Deibert. “The role the neurologist plays will eventually be defined during that process. However, in 2014, there remains an immediate need for providers to treat concussion patients. The only question you need to answer is what your role will be in supporting this effort.”
—Bianca Nogrady
Suggested Reading
Deibert E. Concussion and the neurologist: A work in progress. Neurology. 2014 Jul 9 [Epub ahead of print].
Kirschen MP, Tsou A, Bird Nelson S, et al. Legal and ethical implications in the evaluation and management of sports-related concussion. Neurology. 2014 Jul 9 [Epub ahead of print].
Prolonged Monitoring May Be the Best Way to Detect Atrial Fibrillation After Cryptogenic Stroke
Extended monitoring of heart rhythm detects paroxysmal atrial fibrillation in patients who previously had a cryptogenic stroke or transient ischemic attack (TIA) at significantly higher rates than conventional methods do, according to two randomized studies published June 26 in the New England Journal of Medicine.
Revealing occult atrial fibrillation is crucial because stroke survivors whose atrial fibrillation is unrecognized typically receive antiplatelet therapy for secondary prevention, which is inferior to the anticoagulation treatment given for clinically apparent atrial fibrillation, said both groups of researchers.
ECG Monitoring Was Superior to Holter Monitoring
In one study, investigators used noninvasive ambulatory ECG monitoring to track heart rhythm for 30 days in 572 patients (mean age, 72) soon after they had an ischemic stroke or TIA. Comprehensive evaluations, typically including the conventional 24-hour session of Holter monitoring, failed to identify any atrial fibrillation or other cause of the event, so it was classified as cryptogenic, said David J. Gladstone, MD, PhD, a stroke neurologist at the University of Toronto and coleader of the University of Toronto Stroke Program, and his associates.
At 16 stroke centers across Canada, the patients were randomly assigned during a three-year period either to prolonged ECG monitoring (287 patients) or to one additional round of 24-hour Holter monitoring (285 control subjects) for the detection of occult atrial fibrillation. The extended ECG monitoring was superior to Holter monitoring; it detected at least one episode of atrial fibrillation in 16.1% of patients, compared with 3.2% in the control group. The extended monitoring also was superior at detecting continuous atrial fibrillation lasting from 2.5 minutes to many hours; this outcome was found in 9.9% of the intervention group and 2.5% of the control group, the investigators reported.
The prolonged monitoring “nearly doubled the proportion of patients who subsequently received anticoagulant therapy for secondary prevention of stroke—a finding we interpret as a clinically meaningful change in treatment that has the potential to avert recurrent strokes,” said Dr. Gladstone.
“We think that the common practice of relying on 24 to 48 hours of monitoring for atrial fibrillation after a stroke or TIA of undetermined cause is insufficient and consider it an initial screen rather than a final test, especially given our finding that the yield of clinical follow-up alone as a means of detecting atrial fibrillation was negligible,” he added.
ICM Was Superior to Conventional Follow-Up
In the other study, conventional follow-up was compared with heart rhythm monitoring using an insertable cardiac monitor (ICM) in 441 patients (mean age, 61.5) who recently had an ischemic stroke or TIA classified as cryptogenic. The ICM automatically detected and recorded atrial fibrillation, irrespective of heart rate or symptoms. Patients in the control group “underwent assessment at scheduled and unscheduled visits, with ECG monitoring performed at the discretion of the site investigator,” said Tommaso Sanna, MD, a researcher at the Catholic University of the Sacred Heart in Rome, and his associates.
At six months, atrial fibrillation was detected in 8.9% of the 221 participants randomly assigned to the study intervention, compared with 1.4% of the 220 control subjects. At 12 months, the atrial fibrillation detection rates were 12.4% and 2.0%, respectively. This difference in detection was consistent across all subgroups of patients, regardless of age, sex, race or ethnicity, type of index event, presence or absence of patent foramen ovale, and CHADS2 score, said Dr. Sanna.
The use of oral anticoagulants for secondary prevention more than doubled among patients who received an ICM. The researchers calculated that 14 ICM devices would need to be implanted to detect one episode of atrial fibrillation during six months of monitoring, 10 devices to detect an episode of atrial fibrillation during 12 months of follow-up, and four devices to detect an episode of atrial fibrillation during 36 months of follow-up.
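The published device counts are consistent with a simple number-needed-to-monitor calculation, ie, the reciprocal of the absolute difference in detection rates between the two arms. The sketch below is illustrative arithmetic rather than the authors’ analysis; it reproduces the six- and 12-month figures from the rates quoted above (the 36-month rates are not given here, so that figure is not checked).

    import math

    # "Number needed to monitor": reciprocal of the absolute difference in
    # atrial fibrillation detection rates between the ICM and control arms.
    # Rates come from the study summary above; illustrative arithmetic only.
    def number_needed(rate_icm, rate_control):
        return math.ceil(1 / (rate_icm - rate_control))

    print(number_needed(0.089, 0.014))  # 6 months: 14 devices per additional detection
    print(number_needed(0.124, 0.020))  # 12 months: 10 devices per additional detection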
The findings of both studies indicate that in current practice, a substantial number of patients who have been diagnosed with cryptogenic stroke or TIA have occult atrial fibrillation that goes undiagnosed and untreated.
Should the Standard of Care Change?
The results of these two studies indicate that “prolonged monitoring of heart rhythm should now become part of the standard care of patients with cryptogenic stroke,” said Hooman Kamel, MD, Assistant Professor of Neuroscience at the Weill Cornell Brain and Mind Research Institute at Weill Cornell Medical College in New York City.
“Most patients with cryptogenic stroke or TIA should undergo at least several weeks of rhythm monitoring. Relatively inexpensive external loop recorders, such as those used in the [Gladstone] trial, will probably be cost-effective; the value of more expensive implantable loop recorders is less clear,” he noted.
—Mary Ann Moon
Suggested Reading
Gladstone DJ, Spring M, Dorian P, et al. Atrial fibrillation in patients with cryptogenic stroke. N Engl J Med. 2014;370(26):2467-2477.
Sanna T, Diener HC, Passman RS, et al. Cryptogenic stroke and underlying atrial fibrillation. N Engl J Med. 2014;370(26):2478-2486.
Extended monitoring of heart rhythm detects paroxysmal atrial fibrillation in patients who previously had a cryptogenic stroke or transient ischemic attack (TIA) at significantly higher rates than conventional methods do, according to two randomized studies published June 26 in the New England Journal of Medicine.
Revealing occult atrial fibrillation is crucial because stroke survivors whose atrial fibrillation is unrecognized typically receive antiplatelet therapy for secondary prevention, which is inferior to the anticoagulation treatment given for clinically apparent atrial fibrillation, said both groups of researchers.
ECG Monitoring Was Superior to Holter Monitoring
In one study, investigators used noninvasive ambulatory ECG monitoring to track heart rhythm for 30 days in 572 patients (mean age, 72) soon after they had an ischemic stroke or TIA. Comprehensive evaluations, typically including the conventional 24-hour session of Holter monitoring, failed to identify any atrial fibrillation or other cause of the event, so it was classified as cryptogenic, said David J. Gladstone, MD, PhD, stroke neurologist at the University of Toronto and coleader of the University of Toronto Stroke Program, and his associates.
At 16 stroke centers across Canada, the patients were randomly assigned during a three-year period to either prolonged ECG monitoring (287 patients) or to one additional round of 24-hour Holter monitoring (285 control subjects) for the detection of occult atrial fibrillation. The extended ECG monitoring was superior to Holter monitoring; it detected at least one episode of atrial fibrillation in 16.1% of patients, compared with 3.2% in the control group. The extended monitoring also was superior at detecting continuous atrial fibrillation lasting from 2.5 minutes to many hours. This outcome was found in 9.9% of the intervention group and 2.5% of the control group, the investigators reported.
The prolonged monitoring “nearly doubled the proportion of patients who subsequently received anticoagulant therapy for secondary prevention of stroke—a finding we interpret as a clinically meaningful change in treatment that has the potential to avert recurrent strokes,” said Dr. Gladstone.
“We think that the common practice of relying on 24 to 48 hours of monitoring for atrial fibrillation after a stroke or TIA of undetermined cause is insufficient and consider it an initial screen rather than a final test, especially given our finding that the yield of clinical follow-up alone as a means of detecting atrial fibrillation was negligible,” he added.
ICM Was Superior to Conventional Follow-Up
In the other study, conventional follow-up was compared with heart rhythm monitoring using an insertable cardiac monitor (ICM) in 441 patients (mean age, 61.5) who recently had an ischemic stroke or TIA classified as cryptogenic. The ICM automatically detected and recorded atrial fibrillation, irrespective of heart rate or symptoms. Patients in the control group “underwent assessment at scheduled and unscheduled visits, with ECG monitoring performed at the discretion of the site investigator,” said Tommaso Sanna, MD, a researcher at the Catholic University of the Sacred Heart in Rome, and his associates.
At six months, atrial fibrillation was detected in 8.9% of the 221 participants randomly assigned to the study intervention, compared with 1.4% of the 220 control subjects. At 12 months, the atrial fibrillation detection rates were 12.4% and 2.0%, respectively. This difference in detection was consistent across all subgroups of patients, regardless of age, sex, race or ethnicity, type of index event, presence or absence of patent foramen ovale, and CHADS score, said Dr. Sanna.
The use of oral anticoagulants for secondary prevention more than doubled among patients who received an ICM. The researchers calculated that 14 ICM devices would need to be implanted to detect one episode of atrial fibrillation during six months of monitoring, 10 devices to detect an episode of atrial fibrillation during 12 months of follow-up, and four devices to detect an episode of atrial fibrillation during 36 months of follow-up.
The findings of both studies indicate that in current practice, a substantial number of patients who have been diagnosed with cryptogenic stroke or TIA have occult atrial fibrillation that goes undiagnosed and untreated.
Should the Standard of Care Change?
The results of these two studies indicate that “prolonged monitoring of heart rhythm should now become part of the standard care of patients with cryptogenic stroke,” said Hooman Kamel, MD, Assistant Professor of Neuroscience at the Weill Cornell Brain and Mind Research Institute at Weill Cornell Medical College in New York City.
“Most patients with cryptogenic stroke or TIA should undergo at least several weeks of rhythm monitoring. Relatively inexpensive external loop recorders, such as those used in the [Gladstone] trial, will probably be cost-effective; the value of more expensive implantable loop recorders is less clear,” he noted.
—Mary Ann Moon
Extended monitoring of heart rhythm detects paroxysmal atrial fibrillation in patients who previously had a cryptogenic stroke or transient ischemic attack (TIA) at significantly higher rates than conventional methods do, according to two randomized studies published June 26 in the New England Journal of Medicine.
Revealing occult atrial fibrillation is crucial because stroke survivors whose atrial fibrillation is unrecognized typically receive antiplatelet therapy for secondary prevention, which is inferior to the anticoagulation treatment given for clinically apparent atrial fibrillation, said both groups of researchers.
ECG Monitoring Was Superior to Holter Monitoring
In one study, investigators used noninvasive ambulatory ECG monitoring to track heart rhythm for 30 days in 572 patients (mean age, 72) soon after they had an ischemic stroke or TIA. Comprehensive evaluations, typically including the conventional 24-hour session of Holter monitoring, failed to identify any atrial fibrillation or other cause of the event, so it was classified as cryptogenic, said David J. Gladstone, MD, PhD, stroke neurologist at the University of Toronto and coleader of the University of Toronto Stroke Program, and his associates.
At 16 stroke centers across Canada, the patients were randomly assigned during a three-year period to either prolonged ECG monitoring (287 patients) or to one additional round of 24-hour Holter monitoring (285 control subjects) for the detection of occult atrial fibrillation. The extended ECG monitoring was superior to Holter monitoring; it detected at least one episode of atrial fibrillation in 16.1% of patients, compared with 3.2% in the control group. The extended monitoring also was superior at detecting continuous atrial fibrillation lasting from 2.5 minutes to many hours. This outcome was found in 9.9% of the intervention group and 2.5% of the control group, the investigators reported.
The prolonged monitoring “nearly doubled the proportion of patients who subsequently received anticoagulant therapy for secondary prevention of stroke—a finding we interpret as a clinically meaningful change in treatment that has the potential to avert recurrent strokes,” said Dr. Gladstone.
“We think that the common practice of relying on 24 to 48 hours of monitoring for atrial fibrillation after a stroke or TIA of undetermined cause is insufficient and consider it an initial screen rather than a final test, especially given our finding that the yield of clinical follow-up alone as a means of detecting atrial fibrillation was negligible,” he added.
ICM Was Superior to Conventional Follow-Up
In the other study, conventional follow-up was compared with heart rhythm monitoring using an insertable cardiac monitor (ICM) in 441 patients (mean age, 61.5) who recently had an ischemic stroke or TIA classified as cryptogenic. The ICM automatically detected and recorded atrial fibrillation, irrespective of heart rate or symptoms. Patients in the control group “underwent assessment at scheduled and unscheduled visits, with ECG monitoring performed at the discretion of the site investigator,” said Tommaso Sanna, MD, a researcher at the Catholic University of the Sacred Heart in Rome, and his associates.
At six months, atrial fibrillation was detected in 8.9% of the 221 participants randomly assigned to the study intervention, compared with 1.4% of the 220 control subjects. At 12 months, the atrial fibrillation detection rates were 12.4% and 2.0%, respectively. This difference in detection was consistent across all subgroups of patients, regardless of age, sex, race or ethnicity, type of index event, presence or absence of patent foramen ovale, and CHADS score, said Dr. Sanna.
The use of oral anticoagulants for secondary prevention more than doubled among patients who received an ICM. The researchers calculated that 14 ICM devices would need to be implanted to detect one episode of atrial fibrillation during six months of monitoring, 10 devices to detect an episode of atrial fibrillation during 12 months of follow-up, and four devices to detect an episode of atrial fibrillation during 36 months of follow-up.
The findings of both studies indicate that in current practice, a substantial number of patients who have been diagnosed with cryptogenic stroke or TIA have occult atrial fibrillation that goes undiagnosed and untreated.
Should the Standard of Care Change?
The results of these two studies indicate that “prolonged monitoring of heart rhythm should now become part of the standard care of patients with cryptogenic stroke,” said Hooman Kamel, MD, Assistant Professor of Neuroscience at the Weill Cornell Brain and Mind Research Institute at Weill Cornell Medical College in New York City.
“Most patients with cryptogenic stroke or TIA should undergo at least several weeks of rhythm monitoring. Relatively inexpensive external loop recorders, such as those used in the [Gladstone] trial, will probably be cost-effective; the value of more expensive implantable loop recorders is less clear,” he noted.
—Mary Ann Moon
Suggested Reading
Gladstone DJ, Spring M, Dorian P, et al. Atrial fibrillation in patients with cryptogenic stroke. N Engl J Med. 2014;370(26):2467-2477.
Sanna T, Diener HC, Passman RS, et al. Cryptogenic stroke and underlying atrial fibrillation. N Engl J Med. 2014;370(26):2478-2486.
Epidemiology, Consequences of Non-Leg VTE
Clinical question: Which risk factors are key in the development of nonleg deep vein thromboses (NLDVTs) and what are the expected clinical sequelae from these events?
Background: Critically ill patients are at increased risk of venous thrombosis. Despite adherence to recommended daily thromboprophylaxis, many patients will develop a venous thrombosis in a vein other than the lower extremity. The association between NLDVT and pulmonary embolism (PE) or death is less clearly identified.
Study design: A nested prospective cohort study within the PROphylaxis for ThromboEmbolism in Critical Care Trial (PROTECT), a multicenter, randomized, blinded trial with concealed allocation, conducted between May 2006 and June 2010.
Setting: Sixty-seven international secondary and tertiary care ICUs in both academic and community settings.
Synopsis: Researchers enrolled 3,746 ICU patients in a randomized controlled trial of dalteparin vs. standard heparin for thromboprophylaxis. Of these patients, 84 (2.2%) developed an NLDVT. These thromboses were more likely to be deep and located proximally.
Risk factors were assessed using five selected variables: APACHE (Acute Physiology and Chronic Health Evaluation) score, BMI, malignancy, and treatment with vasopressors or statins. Apart from indwelling upper-extremity central venous catheters, cancer was the only independent predictor of NLDVT.
Compared with patients without any VTE, those with NLDVT were more likely to develop PE (14.9% vs. 1.9%) and to have longer ICU stays (19 vs. 9 days). On average, one in seven patients with NLDVT developed PE during their hospital stay. Despite the association with PE, NLDVT was not associated with increased ICU mortality in an adjusted model.
However, the PROTECT trial may have been underpowered to detect a difference. Additional limitations of the study included a relatively small total number of NLDVTs and a lack of standardized screening protocols for both NLDVT and PE.
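The "one in seven" figure and the relative size of the PE association both follow from the rates quoted above; the division below is our own unadjusted arithmetic, not an adjusted estimate from PROTECT.

```python
# Derived from the rates in the text; crude and illustrative only.
pe_with_nldvt = 0.149   # PE among patients with NLDVT
pe_without_vte = 0.019  # PE among patients without any VTE

print(f"1 in {1 / pe_with_nldvt:.1f} NLDVT patients developed PE")    # ~1 in 6.7
print(f"Crude risk ratio: {pe_with_nldvt / pe_without_vte:.1f}-fold")  # ~7.8-fold
```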
Bottom line: Despite universal heparin thromboprophylaxis, many medical-surgical critically ill patients may develop NLDVT, placing them at higher risk for longer ICU stays and PE.
Citation: Lamontagne F, McIntyre L, Dodek P, et al. Nonleg venous thrombosis in critically ill adults: a nested prospective cohort study. JAMA Intern Med. 2014;174(5):689-696.
Model for End-Stage Liver Disease (MELD) May Help Determine Mortality Risk
Clinical question: How can a Model for End-Stage Liver Disease (MELD)-based model be updated and used to predict inpatient mortality in hospitalized cirrhotic patients with acute variceal bleeding (AVB)?
Background: AVB in cirrhosis continues to carry mortality rates as high as 20%. Risk prediction for individual patients is important to determine when a step-up in acuity of care is needed and to identify patients who would most benefit from preemptive treatments such as a transjugular intrahepatic portosystemic shunt. Many predictive models are available but are currently difficult to apply in the clinical setting.
Study design: Prospective cohort study; initial comparison data were collected from clinical records, and the updated MELD model was confirmed in validation cohorts.
Setting: Prospective data collected at the Hospital Clínic in Barcelona, Spain. Validation cohorts for calibration of the new MELD model were completed in hospital settings in Canada and Spain.
Synopsis: Data were collected from 178 patients with cirrhosis and esophageal AVB who received standard therapy from 2007 to 2010. Esophageal bleeding was confirmed endoscopically. The primary endpoint was six-week, bleeding-related mortality. Among all subjects studied, the average six-week mortality rate was 16%. Models evaluated for validity included the Child-Pugh, D'Amico, and Augustin models and the MELD score.
Each model was assessed for discrimination, calibration, and overall performance in predicting mortality. The MELD score showed the best discrimination and overall performance but was miscalibrated. The original Barcelona cohort was used to update the MELD calibration via logistic regression, and external validation was completed in cohorts from Canada (N=240) and Vall d'Hebron Hospital in Spain (N=221).
With the updated calibration, the MELD score adds a predictive component in the setting of AVB that was not previously available: MELD values of 19 and higher predict mortality greater than 20%, whereas values lower than 11 predict mortality of about 5%.
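For readers unfamiliar with score recalibration, the sketch below shows the general technique the authors describe: refitting the relationship between MELD and six-week mortality with logistic regression. The data and coefficients here are synthetic placeholders, not the values estimated by Reverter et al.

```python
# A minimal sketch of recalibrating a score with logistic regression.
# Synthetic data only; not the model fitted by Reverter et al.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
meld = rng.integers(6, 40, size=178).reshape(-1, 1)     # hypothetical MELD scores
true_p = 1 / (1 + np.exp(-(0.15 * meld[:, 0] - 4.0)))   # assumed true risk curve
died = (rng.random(178) < true_p).astype(int)           # simulated 6-week deaths

recalibrated = LogisticRegression().fit(meld, died)

# Predicted six-week mortality at the thresholds discussed above
for score in (11, 19):
    p = recalibrated.predict_proba([[score]])[0, 1]
    print(f"MELD {score}: predicted mortality {p:.0%}")
```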
Bottom line: The updated MELD model may provide a more accurate way to identify patients in whom more aggressive preemptive therapies are indicated, based on its prognostic predictions of mortality.
Citation: Reverter E, Tandon P, Augustin S, et al. A MELD-based model to determine risk of mortality among patients with acute variceal bleeding. Gastroenterology. 2014;146(2):412-419.
Emergency Department Visits, Hospitalizations Due to Insulin
Clinical question: What is the national burden of ED visits and hospitalizations for insulin-related hypoglycemia?
Background: As the prevalence of diabetes mellitus continues to rise, the use of insulin and the burden of insulin-related hypoglycemia on our healthcare system will increase. By identifying high-risk populations and analyzing the circumstances of insulin-related hypoglycemia, we might be able to identify and employ strategies to decrease the risks associated with insulin use.
Study design: Observational study using national adverse drug surveillance database and national household survey.
Setting: U.S. hospitals, excluding psychiatric and penal institutions.
Synopsis: Using data from the National Electronic Injury Surveillance System–Cooperative Adverse Drug Event Surveillance (NEISS-CADES) Project and the National Health Interview Survey (NHIS), the authors estimated the rates and characteristics of ED visits and hospitalizations for insulin-related hypoglycemia. They estimated that about 100,000 such ED visits occur nationally each year and that almost one-third of those visits result in hospitalization. Compared with younger patients treated with insulin, patients 80 years or older were more likely to present to the ED (rate ratio, 2.5; 95% CI, 1.5-4.3) and much more likely to be subsequently hospitalized (rate ratio, 4.9; 95% CI, 2.6-9.1) for insulin-related hypoglycemia.
The most common causes of insulin-related hypoglycemia were failure to reduce insulin during periods of reduced food intake and confusion between short-acting and long-acting insulin products. The authors suggest that less stringent glycemic targets be considered in elderly patients to decrease the risk of insulin-related hypoglycemia and its sequelae. Patient education addressing common insulin errors might also decrease the burden of ED visits and hospitalizations related to insulin.
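For context on the rate ratios reported above, the sketch below shows the standard way a rate ratio and its 95% CI are formed from event counts and person-time, using a log-normal approximation. The counts are hypothetical placeholders; the published estimates come from weighted NEISS-CADES and NHIS survey data, not from this simple calculation.

```python
# Generic rate-ratio calculation with hypothetical counts; the study itself
# used weighted national survey estimates, not this crude approach.
import math

events_old, persontime_old = 120, 50_000       # hypothetical: age >= 80
events_young, persontime_young = 300, 310_000  # hypothetical: younger insulin users

rate_ratio = (events_old / persontime_old) / (events_young / persontime_young)
se_log_rr = math.sqrt(1 / events_old + 1 / events_young)  # Poisson approximation
lower = rate_ratio * math.exp(-1.96 * se_log_rr)
upper = rate_ratio * math.exp(1.96 * se_log_rr)

print(f"Rate ratio {rate_ratio:.1f} (95% CI, {lower:.1f}-{upper:.1f})")
```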
Bottom line: The risk of hypoglycemia in patients older than 80 should be considered before starting an insulin regimen or increasing the insulin dose.
Citation: Geller AI, Shehab N, Lovegrove MC, et al. National estimates of insulin-related hypoglycemia and errors leading to emergency department visits and hospitalizations. JAMA Intern Med. 2014;174(5):678-686.
Healthcare Worker Attire Recommendations
Clinical question: What are the perceptions of patients and healthcare personnel (HCP) regarding attire, and what evidence exists for contamination and transmission of pathogenic microorganisms by HCP attire?
Background: HCP attire is an important aspect of the healthcare profession. Concern is increasing that fomites, including HCP apparel, transmit microorganisms in the hospital, and studies demonstrate that HCP apparel becomes contaminated; however, evidence that HCP apparel transmits microorganisms to patients is lacking.
Study design: Literature and policy review, survey of Society for Healthcare Epidemiology of America (SHEA) members.
Setting: Literature search from January 2013 to March 2013 for articles related to bacterial contamination and laundering of HCP attire and patient and provider perceptions of HCP attire and/or footwear. Review of policies related to HCP attire from seven large teaching hospitals.
Synopsis: The search identified 26 articles that studied patients’ perceptions of HCP attire and only four studies that reviewed HCP preferences relating to attire. There were 11 small prospective studies related to pathogen contamination of HCP apparel but no clinical studies demonstrating transmission of pathogens from HCP attire to patients. There was one report of a pathogen outbreak potentially related to HCP apparel.
Hospital policies primarily addressed general appearance and dress for all employees, without significant specifications for HCP outside of sterile or procedure-based areas. One institution recommended bare-below-the-elbows (BBE) attire for physicians during patient care activities.
The survey drew 337 responses (21.7% response rate) and showed poor enforcement of HCP attire policies; nevertheless, a majority of respondents felt that the role of HCP attire in the transmission of pathogens in the healthcare setting was very important or somewhat important.
Patients preferred formal attire, including a white coat, but this preference had limited impact on patient satisfaction or confidence in practitioners. Patients did not perceive HCP attire as an infection risk but were willing to change their preference for formal attire when informed of this potential risk.
BBE policies are in effect at some U.S. hospitals and in the United Kingdom, but the effect on healthcare-associated infection rates and transmission of pathogens to patients is unknown.
Bottom line: Contamination of HCP attire with healthcare pathogens occurs, but no clinical data currently exist on transmission of these pathogens to patients or on its impact on the healthcare system. Patient satisfaction and confidence are not diminished by less formal attire when patients are informed of the potential infection risks.
Citation: Bearman G, Bryant K, Leekha S, et al. Healthcare personnel attire in non-operating-room settings. Infect Control Hosp Epidemiol. 2014;35(2):107-121.
Prediction Tool for Readmissions Due to End-of-Life Care
Clinical question: What are the risk factors associated with potentially avoidable readmissions (PARs) for end-of-life care issues?
Background: The 6% of Medicare beneficiaries who die each year account for 30% of yearly Medicare expenditures on medical treatments, with repeated hospitalizations a frequent occurrence at the end of life. There are many opportunities to improve the care of patients at the end of life.
Study design: Nested case-control.
Setting: Academic, tertiary-care medical center.
Synopsis: There were 10,275 eligible admissions to Brigham and Women's Hospital in Boston from July 1, 2009, to June 30, 2010, excluding stays of less than one day. There were 2,301 readmissions within 30 days of the index hospitalization, of which 826 were considered potentially avoidable. From a random sample of 594 of these patients, 80 patients had a PAR related to end-of-life care issues. The 7,974 patients who were not readmitted within 30 days of the index admission served as controls. The primary study outcome was any 30-day PAR due to end-of-life care issues. A readmission was considered a PAR if it related to conditions known during the index hospitalization or was due to a complication of treatment.
The four factors significantly associated with 30-day PAR for end-of-life care issues were neoplasm (OR, 5.6; 95% CI, 2.85-11.0), opiate medication at discharge (OR, 2.29; 95% CI, 1.29-4.07), the Elixhauser comorbidity index, per five-unit increase (OR, 1.16; 95% CI, 1.10-1.22), and the number of admissions in the previous 12 months (OR, 1.10; 95% CI, 1.02-1.20). The model that included all four variables had excellent discriminative power, with a C-statistic of 0.85.
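To make the model and its C-statistic concrete, the sketch below fits a four-predictor logistic model on synthetic data and scores its discrimination. The coefficients used to simulate outcomes are the natural logs of the odds ratios quoted above; everything else (sample size, covariate distributions) is a hypothetical placeholder, so the resulting C-statistic will not match the published 0.85.

```python
# Synthetic illustration of a four-predictor risk model and its C-statistic.
# Simulation coefficients are ln(OR) from the text; the data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 1000
X = np.column_stack([
    rng.integers(0, 2, n),   # neoplasm (yes/no)
    rng.integers(0, 2, n),   # opiate at discharge (yes/no)
    rng.normal(10, 5, n),    # Elixhauser comorbidity index
    rng.poisson(1.5, n),     # admissions in previous 12 months
])
logit = (-4.0 + np.log(5.6) * X[:, 0] + np.log(2.29) * X[:, 1]
         + (np.log(1.16) / 5) * X[:, 2] + np.log(1.10) * X[:, 3])
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
c_statistic = roc_auc_score(y, model.predict_proba(X)[:, 1])
print(f"C-statistic on this synthetic data: {c_statistic:.2f}")
```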
Bottom line: The factors from this prediction model can be used, formally or informally, to identify those patients at higher risk for readmission for end-of-life care issues and prioritize resources to help minimize this risk.
Citation: Donzé J, Lipsitz S, Schnipper JL. Risk factors for potentially avoidable readmissions due to end-of-life care issues. J Hosp Med. 2014;9(5):310-314.
Colonic Malignancy Risk Appears Low After Uncomplicated Diverticulitis
Clinical question: What is the benefit of routine colonic evaluation after an episode of acute diverticulitis?
Background: Currently accepted guidelines recommend routine colonic evaluation (colonoscopy or computed tomography [CT] colonography) after an episode of acute diverticulitis to confirm the diagnosis and exclude malignancy. The increased use of CT to confirm the diagnosis and exclude associated complications has called this recommendation into question.
Study design: Meta-analysis.
Setting: Search of online databases and the Cochrane Library.
Synopsis: Eleven studies from seven countries included 1,970 patients who had a colonic evaluation after an episode of acute diverticulitis. The risk of finding a malignancy was 1.6%. Within this population, 1,497 patients were identified as having uncomplicated diverticulitis. Cancer was found in only five patients (proportional risk estimate 0.7%).
For the 79 patients identified as having complicated diverticulitis, the risk of finding a malignancy on subsequent screening was 10.8%.
Every systematic review is limited by the quality of the studies available for inclusion and by differences in their design and methodology. In this meta-analysis, the risk of finding cancer after an episode of uncomplicated diverticulitis appears to be low. Given the limited resources of the healthcare system and the small but real risk of morbidity and mortality associated with invasive colonic procedures, the routine recommendation for colon cancer screening after an episode of acute uncomplicated diverticulitis should be further evaluated.
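The 0.7% figure above is a pooled meta-analytic estimate. As a rough illustration of the uncertainty carried by 5 cancers among 1,497 patients, the sketch below computes an exact (Clopper-Pearson) binomial confidence interval; this is not a reproduction of the authors' pooling method.

```python
# Exact binomial CI for 5 events in 1,497 patients; illustrative only.
from statsmodels.stats.proportion import proportion_confint

count, nobs = 5, 1497
lower, upper = proportion_confint(count, nobs, alpha=0.05, method="beta")
print(f"Crude proportion: {count / nobs:.2%} "
      f"(95% CI, {lower:.2%}-{upper:.2%})")  # roughly 0.11% to 0.78%
```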
Bottom line: The risk of malignancy after a radiologically proven episode of acute uncomplicated diverticulitis is low. In the absence of other indications, additional routine colonic evaluation may not be necessary.
Citation: Sharma PV, Eglinton T, Hider P, Frizelle F. Systematic review and meta-analysis of the role of routine colonic evaluation after radiologically confirmed acute diverticulitis. Ann Surg. 2014;259(2):263-272.