Is oral zolmitriptan efficacious in the acute treatment of cluster headache?
BACKGROUND: Acute treatments for cluster headaches include oxygen, ergotamine derivatives, and intranasal or subcutaneous sumatriptan. Although up to 95% of acute cluster headache patients treated with subcutaneous sumatriptan experience pain relief within 15 minutes,1 the route of administration and restrictions on recommended daily dosage may limit patient use of this therapy. Oxygen is effective as abortive therapy but is frequently unavailable in settings where acute cluster headaches are experienced. Rectal and oral ergotamine derivatives have poor bioavailability, and all ergot alkaloids have a high incidence of adverse effects. Oral zolmitriptan is efficacious in the acute treatment of migraine headache. However, no previous studies have evaluated the efficacy of oral triptans in the treatment of cluster headaches.
POPULATION STUDIED: The authors of this study included patients aged 18 to 65 years who were recruited from multiple specialty referral centers in Canada, the United Kingdom, and Sweden. All subjects had an established diagnosis of chronic or episodic cluster headache, described as headaches typically lasting 45 minutes or longer that were distinguishable from other types of episodic headaches, and had tolerated previous treatment with a 5-hydroxytryptamine (5-HT) agonist, such as sumatriptan or ergotamine. The study excluded patients with a history of basilar, ophthalmoplegic, or hemiplegic migraine, and those with risk factors contraindicating the use of 5-HT agonists.
STUDY DESIGN AND VALIDITY: This randomized double-blinded placebo-controlled crossover study compared 5-mg and 10-mg doses of zolmitriptan with placebo for the acute treatment of cluster headaches. Headache intensity was rated on a diary card with a 5-point severity scale (no, mild, moderate, severe, or very severe pain); only headaches of moderate to very severe intensity were treated. Subjects were required to take the study medication within 10 minutes of headache onset, were not permitted to take escape medications, such as oxygen or analgesics, within 30 minutes of taking study medications, and were not permitted to institute prophylactic treatment during the study period. Subjects whose cluster headache period ended before treatment or who had fewer than 3 headaches before the end of the study period were excluded from the analysis. Those who failed to comply with the strict requirements for medication use were noted, but were still included in the intention-to-treat analysis. This is a well-designed study, with no major threats to validity. Patients were selected from referral centers and thus may differ from cluster headache sufferers in a primary care clinic population.
OUTCOMES MEASURED: The primary outcome was headache improvement at 30 minutes, defined as a reduction in headache intensity of 2 or more points on the 5-point scale. Secondary treatment outcomes included the proportion of subjects experiencing any headache relief at 15 and 30 minutes, experiencing headache relief at any time, using escape medication 30 to 180 minutes after treatment, having mild or no pain 30 minutes after treatment, and obtaining relief of associated symptoms. Subjects in each study arm were also asked to indicate their preferred treatment.
RESULTS: Different treatment responses were found for episodic and chronic cluster headache subgroups (the latter patients had attacks for more than a year without remission). Chronic cluster headache subjects showed no statistically significant treatment response to zolmitriptan. Compared with placebo, a greater proportion of episodic cluster headache sufferers experienced a 2-point reduction in headache intensity after taking 10 mg of zolmitriptan (47% vs 29%). Six patients would need to be treated with this dose for 1 patient to improve this much (number needed to treat [NNT]=6). Use of 10 mg zolmitriptan was also associated with statistically significant improvement in all of the secondary outcomes. Patients treated with 5 mg zolmitriptan had improvement in only 3 secondary outcomes: headache relief at any time (NNT=6), lower likelihood of escape medication use (NNT=5), and mild or no pain at 30 minutes (NNT=7). Zolmitriptan was associated with a significantly greater incidence of medication-related adverse effects (number needed to harm=5 for the 10-mg dose). The most frequently described adverse effects were paresthesia, heaviness, asthenia, nausea, dizziness, and (nonchest) tightness. No medication-related events led to withdrawal from the study. Forty-five percent of subjects preferred the 10-mg dose compared with 29% who preferred the 5-mg dose, and 26% the placebo.
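As a check on the arithmetic (recalculated here from the percentages quoted above rather than taken independently from the trial report): the absolute risk reduction with the 10-mg dose is 47% − 29% = 18%, and NNT = 1/0.18 ≈ 5.6, which rounds to the reported value of 6.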
Oral zolmitriptan (particularly the 10-mg dose) is efficacious in acute treatment of episodic cluster headaches. Because of its ease of administration relative to other treatment options, oral zolmitriptan may be a good choice for patients unable to use sumatriptan. However, it shares similar adverse effects with other 5-HT agonists and has a slower onset of action compared with subcutaneous sumatriptan. Head-to-head trials in primary care populations comparing oral zolmitriptan with abortive oxygen treatment and with different forms of sumatriptan are needed to better establish the role of zolmitriptan in management of cluster headaches.
Does delayed pushing reduce difficult deliveries for nulliparous women with epidural analgesia?
BACKGROUND: Epidural analgesia, though effective, can prolong second stage labor and increase midpelvic delivery and maternal and neonatal morbidity. Studies indicate delayed pushing may decrease the need for forceps delivery. The authors of this randomized trial assessed the outcomes of a delayed pushing strategy of labor management.
POPULATION STUDIED: A total of 1862 nulliparous women were enrolled at 12 sites in Canada, Switzerland, and the United States. Enrollment criteria included more than 37 weeks’ gestation, vertex singleton presentation, normal fetal heart status, and effective continuous epidural analgesia. The average age of participants was 28 years, and more than 94% were white or Asian; other risk factors were not described. The high reported episiotomy rate (41%) suggests that the settings encouraged obstetric intervention, and the lack of information about intrapartum routines and other obstetric risk factors makes assessment of comparability difficult.
STUDY DESIGN AND VALIDITY: At complete dilatation the women were randomized (allocation concealment uncertain) to a pushing or delayed pushing group. Pushing in the latter group was discouraged for 2 hours unless there was an irresistible urge to push, the fetal head was at the perineum, or there was a medical indication to hasten delivery. Oxytocin use was standardized. Analysis was by intention to treat with control for potential confounding by the Mantel-Haenszel method.
OUTCOMES MEASURED: The primary outcome was difficult delivery, defined as second-stage cesarean section, midpelvic forceps or vacuum delivery, low-pelvic forceps delivery with rotation of the fetal head of more than 45 degrees, or any operative vaginal delivery preceded by manual rotation of the head of more than 45 degrees. Secondary maternal outcomes included lacerations, blood loss, peripartum fever, and blood transfusions, as well as a postpartum survey of the mother’s sense of control during her labor and delivery. Pediatric outcomes included cord pH, Apgar scores, neonatal intensive care unit admission, and a neonatal morbidity index. Patient satisfaction and cost of care were not addressed.
RESULTS: Difficult deliveries were reduced in the delayed pushing group (relative risk=0.79; 95% confidence interval, 0.66-0.95; number needed to treat [NNT]=21). The most pronounced difference was in reduced midpelvic procedures; stratification by oxytocin use or other variables yielded no difference in difficult deliveries. The protective effect of delayed pushing on difficult delivery was greatest for women who had a fetus in a transverse or posterior position (NNT=8) or with a fetal station above +2 (NNT=17). Mothers in the delayed pushing group had a higher rate of intrapartum fever but no significant differences in antibiotic use, postpartum fevers, or any other outcome. The groups were similar in the mother’s reported sense of control. Infants in the delayed pushing group had a higher rate of cord pH <7.10, but there was no significant difference in the neonatal morbidity index scores or in specific adverse outcomes.
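For context (back-calculated from the summary statistics above, not quoted from the trial): an NNT of 21 corresponds to an absolute risk reduction of about 1/21 ≈ 4.8 percentage points; with a relative risk of 0.79 (a 21% relative reduction), this implies a difficult-delivery rate of roughly 0.048/0.21 ≈ 23% in the usual-pushing group.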
Delayed pushing for up to 2 hours after full cervical dilatation in nulliparous women receiving epidural analgesia is safe and may lower the risk of difficult deliveries. This may be especially true in settings with a high rate of routine obstetric interventions. Future studies should include more power for specific adverse pediatric outcomes and should address the generalizability of delayed pushing to patients of color and to a less interventional obstetric milieu. In the meantime, clinicians should allow patients to delay pushing, if close fetal monitoring is in place.
How accurate are the history and physical examination in diagnosing carpal tunnel syndrome (CTS)?
BACKGROUND: Approximately 3% of adults in population-based studies have symptomatic CTS confirmed by electrodiagnostic studies. Clinicians use many different historical and physical findings to diagnose CTS. This study is a systematic review of the accuracy of history and physical examination findings in diagnosing CTS using electrodiagnostic studies as the gold standard.
POPULATION STUDIED: Because this is a systematic review, patients from several different populations were studied. Details were not given on the demographics of patients in the included studies, although none were performed in the family practice setting. This is a possible limitation for family physicians wishing to apply these data to their practices.
STUDY DESIGN AND VALIDITY: The authors searched MEDLINE from January 1966 to February 2000 for relevant articles. Included studies had to meet the following criteria: patients presented to the clinician for symptoms suggestive of CTS; the physical examination maneuvers were clearly described; there was an independent comparison with 1 or more electrodiagnostic parameters; and the authors could extract the data needed to calculate sensitivity, specificity, and likelihood ratios. Twelve articles met these inclusion criteria. Likelihood ratios were pooled if the overall accuracy between studies was homogeneous (ie, studies generally reported similar results). The search could have been improved by contacting the authors of studies that had insufficient data to calculate sensitivity and specificity.
OUTCOMES MEASURED: The primary outcomes were the sensitivity, specificity, and likelihood ratios for each history and physical examination finding.
RESULTS: The flick test had the best positive likelihood ratio (LR+=21.4; 95% confidence interval [CI], 10.8-42.1) and negative likelihood ratio (LR-=0.1; 95% CI, 0.0-0.1), but was reported in only a single study. It is performed as follows: when the patient is asked, “What do you actually do with your hand(s) when the symptoms are at their worst?” the patient demonstrates a flicking movement of the wrist and hand similar to that used in shaking down a thermometer. Slightly to moderately useful tests for ruling in CTS include a decreased ability to perceive painful stimuli along the palmar aspect of the index finger compared with the ipsilateral little finger (LR+=3.1), the Katz hand diagram with classic or probable patterns (LR+=2.4), weak thumb abduction (LR+=1.8), abnormal monofilament testing (LR+=1.5), and the Phalen sign (LR+=1.3). The confidence intervals of the LR+ and LR- of the following signs and symptoms included 1.0, signifying no diagnostic utility: nocturnal paresthesias, thenar atrophy, 2-point discrimination, abnormal vibration sense, pressure provocation test, and tourniquet test. The square wrist sign (LR+=2.7) and closed fist sign (LR+=7.3) were each reported in only 1 study but show promise. Only the flick test had an LR- of less than 0.5.
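To illustrate how these likelihood ratios shift probability (using, purely for illustration, the 3% population prevalence cited in the background as the pretest probability; a patient presenting with hand symptoms would start higher): pretest odds = 0.03/0.97 ≈ 0.031. A positive flick test multiplies the odds by 21.4, giving posttest odds of about 0.66 and a posttest probability of about 40%, whereas a positive Phalen sign (LR+=1.3) raises the probability only to about 4%.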
This useful systematic review found that the flick test, a classic or probable Katz hand diagram, hypalgesia, and weak thumb abduction increase the likelihood that a patient will have a positive electrodiagnostic study result for CTS. See the figures and tables in the original article for more details on performing these tests. The Tinel and Phalen tests used by many physicians are less accurate and should be discarded in favor of the tests described. The flick, abduction, and hypalgesia tests in particular can easily be adapted to the family practice setting. Use of these findings can help physicians choose the appropriate initial therapy for their patients, select patients who need further testing, and focus the work-up on alternative diagnoses if the electrodiagnostic findings are negative.
Are high-dose inhaled steroids effective for chronic obstructive pulmonary disease (COPD)?
BACKGROUND: No pharmacologic intervention has been shown to slow the deterioration in health status or the progression of COPD. Use of inhaled steroids in moderate doses is common, but a controlled trial has shown that such treatment produces only a small benefit in forced expiratory volume in 1 second (FEV1) and minimal improvement in clinical parameters.1
POPULATION STUDIED: Patients were current or former smokers aged 40 to 75 years. All had nonasthmatic COPD, defined as an FEV1 less than 85% of predicted and an FEV1/forced vital capacity ratio less than 70%, with less than 10% improvement from inhaled β-agonists. Previous use of inhaled or oral corticosteroids was permitted. Patients were excluded if they had a life expectancy of less than 5 years from concurrent diseases or if they used β-blockers. Concurrent use of theophyllines and bronchodilators was allowed during the study.
STUDY DESIGN AND VALIDITY: This was a randomized, placebo-controlled, double-blinded study of 751 patients. There was no mention of allocation concealment. After an 8-week period of withdrawal from steroid use, patients received 14 days of oral prednisolone to determine whether a response to acute corticosteroids could predict a response to long-term inhaled corticosteroids. Patients then received either placebo or 500 μg of fluticasone twice daily via a metered-dose inhaler with a spacer. Patients were evaluated every 3 months for 3 years. Health status was measured with the St. George’s Respiratory Questionnaire; a 4-point change on this 100-point scale was judged to be clinically significant. An exacerbation was defined as a worsening of respiratory symptoms requiring treatment with oral corticosteroids or antibiotics.
OUTCOMES MEASURED: The primary end point was the annual decline in FEV1. Secondary end points were the frequencies of exacerbations, changes in health status, withdrawals because of respiratory disease, morning serum cortisol concentrations, and adverse events.
RESULTS: There was no difference in the decline of respiratory function as measured by FEV1 over the 3 years of the study in the fluticasone or placebo groups (59 mL/year vs 50 mL/year). The yearly exacerbation rate was lower in the fluticasone group than in the placebo group (0.99 vs 1.32 per year; P=.026). This resulted in 3 patients treated with high-dose fluticasone for a year (at a retail pharmacy cost in the United States of $1500 per patient) to prevent 1 exacerbation requiring steroids or antibiotics (number needed to treat=3). Health status measured by the increase in questionnaire score declined at a slower rate in the fluticasone group than in the placebo group (2.0 vs 3.2 units/year; P=.004). Although this was statistically significant, the difference is unlikely to be clinically relevant. Adverse effects were similar in each group. The response to oral prednisolone did not predict a subsequent response to inhaled corticosteroids.
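The NNT of 3 can be verified from the rates quoted above: 1.32 − 0.99 = 0.33 exacerbations prevented per patient per year, and 1/0.33 ≈ 3 patients treated for 1 year to prevent 1 exacerbation (this recalculation treats the rate difference as an absolute risk reduction).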
High-dose inhaled corticosteroid use has a minimal clinical effect in patients with COPD. It did not affect the rate of decline of lung function and did not markedly affect health status. The only clinical benefit seen in this trial was a decrease in the frequency of exacerbations requiring oral steroid or antibiotic treatment. Since a trial of oral steroids was not useful in selecting patients more likely to benefit from this intervention, the decision to use inhaled steroids should be made on other clinical grounds and monitored periodically to determine effectiveness. The dose in this study is significantly higher than most dosages of inhaled steroids prescribed. Another study2 suggests that potent inhaled steroids may decrease bone mineral density. Given this risk and the small benefit demonstrated in this study, inhaled steroids should be used infrequently in patients with COPD.
Is it always necessary to suture all lacerations after a vaginal delivery?
BACKGROUND: Some birth trauma lacerations after vaginal delivery benefit from suture repair because of size or association with ongoing bleeding. Other lacerations are trivial and can be left to heal without intervention. This study was done to compare the outcomes of suture repair of minor lacerations with the outcomes following spontaneous healing.
POPULATION STUDIED: A total of 80 women delivered by midwives in a large university hospital in Sweden were enrolled. The total number of eligible patients who did not participate was not stated. Inclusion criteria required lacerations of the labia minora, vagina, or perineum that did not bleed, whose edges fell well together, and that were less than 2 cm deep or 2 cm long. Whether some lacerations were too small for inclusion in the study was not stated.
STUDY DESIGN AND VALIDITY: Using sealed opaque envelopes to conceal allocation assignment, eligible subjects were randomized to either a spontaneous healing or suture group. Patients were enrolled between the time of delivery and time of repair. Suturing was performed with polyglycolic acid under topical or pudendal block anesthesia. Healing was evaluated at 2 to 3 days and at 8 weeks by clinical examination, and at 6 months by questionnaire. Two patients in the suture group were withdrawn from analysis because a nonstudy suture material was used. Study groups were comparable in age, labor characteristics, birth weights, labor analgesia used, oxytocin use, and birth positions. Only one woman was delivered in the lithotomy position. Outcomes assessment blinding was not possible, because the observer could easily tell whether suturing had been performed. Power was calculated to achieve a 95% confidence of detecting a 20% difference in effect between the 2 groups.
OUTCOMES MEASURED: Primary outcome measures were healing by visual inspection, discomfort, and return to sexual intercourse. Secondary outcome measures were duration of breastfeeding and patient perceived effect of the laceration on breastfeeding.
RESULTS: There were a total of 87 lacerations in the 40 patients in the nonsutured group and 74 lacerations in the 38 patients in the sutured group. At 2 to 3 days, 11 of 87 lacerations in the nonsutured group had minor problems in healing compared with 4 of 74 lacerations in the sutured group (nonsignificant difference). At 8 weeks, 8 of 87 lacerations in the nonsutured group had minor problems in healing compared with 8 of 74 lacerations in the sutured group (nonsignificant difference). More women in the sutured group reported using analgesic drugs for perineal pain. There were no differences in return to intercourse or duration of breastfeeding. Five patients in the sutured group had sutures removed because of annoyance. Sixteen percent of the women in the sutured group perceived that breastfeeding was affected by the laceration, while none of the women in the nonintervention group perceived an effect (P=.04; number needed to harm=6.3).
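The number needed to harm follows from the proportions above: an absolute risk increase of about 16 percentage points, so roughly 1/0.16 ≈ 6 women must be sutured for 1 to perceive an effect of the laceration on breastfeeding, consistent with the reported value of 6.3 (which reflects the exact trial proportions rather than the rounded percentage).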
Suturing minor nonbleeding postpartum lacerations (<2 cm long and <2 cm deep) does not improve healing rates or decrease perineal discomfort. Clinicians not in the habit of suturing small lacerations need not begin doing so. Those who suture all lacerations can safely leave minor lacerations untouched. Appropriate repair of larger lacerations was not addressed by this study.
How safe and effective are nonsteroidal anti-inflammatory drugs (NSAIDs) in the treatment of acute or chronic nonspecific low back pain (LBP)?
BACKGROUND: LBP is a major health problem that causes significant medical expense, absenteeism, and disability. The condition is usually self-limited, but the pain can be severe. NSAIDs are widely prescribed for LBP and are recommended in several back pain management guidelines. Because their effectiveness has not been definitively established, the authors of this systematic review attempted to synthesize the world literature on the subject.
POPULATION STUDIED: Fifty-one studies enrolled subjects between the ages of 18 and 65 years who had LBP with or without sciatica. Subjects with specific LBP caused by infection, neoplasm, osteoporosis, fractures, or rheumatoid arthritis were excluded. Both acute (≤12 weeks) and chronic (>12 weeks) LBP patients were included.
STUDY DESIGN AND VALIDITY: This was a systematic review in which the authors evaluated randomized double-blinded controlled trials. The studies were identified by searching MEDLINE (1966 to 1998), EMBASE (1988 to 1998), and the Cochrane Controlled Trials Register (issue 3, 1998). References in identified studies were also screened. The quality assessment was based on the presence or absence of 11 criteria, including randomization, allocation concealment, blinding, intention-to-treat analysis, and follow-up. Quantitative analysis was appropriately limited to clinically homogeneous studies. Separate analyses were performed for the primary outcome measures of pain intensity, overall improvement, functional status, and return to work. Qualitative analysis was performed if the studies were clinically heterogeneous (ie, if they were too different to combine) or if the data required for statistical pooling were lacking.
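For readers unfamiliar with what "statistical pooling" involves, the sketch below shows one standard approach, inverse-variance (fixed-effect) pooling of mean differences across trials. It illustrates the general technique only; it is not the review authors' exact method, and the trial estimates in it are hypothetical.

```python
import math

# Minimal inverse-variance (fixed-effect) pooling of mean differences.
# Each tuple is (mean_difference, standard_error) from one trial;
# the numbers below are hypothetical, for illustration only.
trials = [(-8.0, 3.0), (-5.5, 2.0), (-10.0, 4.5)]

weights = [1.0 / se**2 for _, se in trials]            # weight = 1 / variance
pooled_md = sum(w * md for (md, _), w in zip(trials, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

lo, hi = pooled_md - 1.96 * pooled_se, pooled_md + 1.96 * pooled_se
print(f"Pooled mean difference: {pooled_md:.1f} (95% CI {lo:.1f} to {hi:.1f})")
```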
OUTCOMES MEASURED: The primary outcome measures in hierarchical order were: pain intensity using a visual analog scale or a numerical rating scale, a global measure such as overall improvement or proportion of patients recovered, back pain specific functional status, and return to work.
RESULTS: In acute LBP there was clear statistical evidence of slight global short-term improvement with NSAIDs compared with placebo, without any statistically significant difference in side effects. There was not enough information to determine the effectiveness of NSAIDs in chronic LBP. NSAIDs were slightly more effective than acetaminophen for both acute and chronic LBP. NSAIDs were no better than muscle relaxants, narcotics, physiotherapy, or spinal manipulation for acute LBP; they were, however, somewhat better than bed rest. There was no difference between types of NSAIDs. There was no advantage in adding muscle relaxants to NSAIDs for acute LBP, and the addition of B vitamins to NSAIDs was supported by very limited evidence.
On the basis of the patient-oriented outcomes from this review, it is reasonable to treat acute or chronic LBP with NSAIDs. All NSAIDs are equally effective and have minimal side effects, so generic ibuprofen is probably the best choice (fewer serious side effects and lower cost). Acetaminophen is a reasonable, though slightly less effective, alternative.
How should we determine the best treatment for patients with asymptomatic carotid stenosis?
BACKGROUND: It is unclear whether patients with carotid stenosis and no symptoms benefit from endarterectomy. The authors of this study examined data from the North American Symptomatic Carotid Endarterectomy Trial (NASCET) on patients from 1988 to 1997 with unilateral symptomatic carotid stenosis and asymptomatic contralateral stenosis. The goal was to learn what happened to the asymptomatic stenoses during the follow-up period.
POPULATION STUDIED: The authors studied 1820 patients with asymptomatic carotid stenosis, of whom 1604 had less than a 60% stenosis and 216 had a stenosis between 60% and 99%. The mean age was 66 years; 68% were men. Comorbidities were common: 60% had hypertension, 22% had diabetes mellitus, 36% had a history of myocardial infarction or angina, and 24% had evidence of a clinically silent brain infarction in the territory of the asymptomatic carotid artery (but were still considered asymptomatic for this study).
STUDY DESIGN AND VALIDITY: The NASCET trial enrolled a total of 2885 patients with a recent transient ischemic attack or a nondisabling ischemic stroke and randomized them to medical care alone or medical care plus endarterectomy. All patients were appropriately evaluated at baseline with carotid angiography and computed tomography or magnetic resonance imaging of the brain. Patients received follow-up care every 4 months for 5 years. Data were centrally reviewed, and ischemic strokes were classified by underlying cause. Patients were ineligible if they had a cardiac source of embolism or a disease likely to cause death within 5 years. For this analysis, the researchers excluded patients with a history of bilateral carotid symptoms, patients who had undergone surgery on the asymptomatic carotid artery, those with no available angiogram of the asymptomatic artery, and those with either complete occlusion of or no evidence of disease in the internal carotid artery. This left a final population of 1820 patients with carotid artery stenosis and no symptoms in the distribution of that vessel. Appropriate risk analysis was performed using Kaplan-Meier curves. A secondary analysis used Cox proportional hazards regression to determine whether different risk factors were associated with the 3 causes of stroke.
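As a reminder of how the 5-year risk estimates reported below are constructed, here is a minimal Kaplan-Meier (product-limit) calculation. The follow-up times and event indicators are hypothetical and are included only to show the mechanics, not to reproduce NASCET data.

```python
# Minimal Kaplan-Meier (product-limit) estimator.
# times: years of follow-up; events: 1 = stroke occurred, 0 = censored.
# The data below are hypothetical and purely illustrative.
times  = [0.5, 1.2, 2.0, 2.0, 3.1, 4.0, 4.5, 5.0]
events = [1,   0,   1,   1,   0,   1,   0,   0]

survival = 1.0
for t in sorted(set(times)):
    at_risk = sum(1 for ti in times if ti >= t)                    # still under observation at time t
    d = sum(1 for ti, ei in zip(times, events) if ti == t and ei)  # strokes occurring at time t
    if d:
        survival *= 1 - d / at_risk
        print(f"t = {t:.1f} y: survival = {survival:.3f}, cumulative stroke risk = {1 - survival:.3f}")
```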
OUTCOMES MEASURED: The primary outcome measured was the risk of first stroke at 5 years in both the symptomatic and asymptomatic arteries, stratified by the degree of stenosis and the etiology of the stroke (large artery, embolic, or lacunar).
RESULTS: Patients with an asymptomatic stenosis had approximately half the risk of stroke of those with a symptomatic stenosis. The risk of stroke over 5 years among patients with asymptomatic stenosis was 8% for patients with less than 60% stenosis, 14.8% for those with 60% to 74% stenosis, 18.5% for those with 75% to 94% stenosis, and 14.7% for those with 95% to 99% stenosis. Approximately 80% of the first strokes were not preceded by any symptoms of a transient ischemic attack. Almost half the strokes in patients with asymptomatic carotid artery disease had causes other than large artery disease. This information was used to adjust estimates of the benefit of endarterectomy from the Asymptomatic Carotid Atherosclerosis Study. The absolute risk reduction for any stroke during 5 years of follow-up in surgically treated patients is 5.9%; the corresponding risk reduction is only 3.5% for large artery stroke. To prevent 1 large artery stroke at 5 years, 29 patients would have to undergo carotid endarterectomy. This benefit must be balanced against an operative risk of 3% in the hands of the best surgeons, compared with a 4% to 5% risk reported in less stringently designed published series.
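The number needed to treat is simply the reciprocal of the absolute risk reduction; the two-line check below reproduces the arithmetic from the risk reductions reported in this paragraph.

```python
# NNT = 1 / absolute risk reduction (ARR), using the figures reported above.
arr_any_stroke   = 0.059   # 5.9% ARR for any stroke over 5 years
arr_large_artery = 0.035   # 3.5% ARR for large artery stroke over 5 years

print(f"NNT, any stroke: {1 / arr_any_stroke:.0f}")             # ~17
print(f"NNT, large artery stroke: {1 / arr_large_artery:.0f}")  # ~29, the figure quoted above
```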
This study has several important caveats for our patients with symptomatic and asymptomatic carotid artery disease. First, the risk of stroke is substantially lower in patients with asymptomatic carotid artery disease than in those who are symptomatic. Also, approximately 50% of strokes that occur in the territory of the asymptomatic carotid artery do not originate in the large artery. The greatest risk of large artery stroke appears to be among patients with a high degree of stenosis, a history of diabetes mellitus, and a history of silent brain infarct in the territory of the asymptomatic lesion. Finally, the high risk of perioperative death or stroke makes it important to focus our efforts on screening for risk factors and properly treating the important ones once they are identified.
Is there a simple and accurate algorithm that clinicians can use to more effectively select women for bone densitometry testing?
BACKGROUND: North American women currently have a lifetime risk for osteoporotic fracture of nearly 50%. The mortality rate for hip fracture, the most common site, is nearly 10%. The development of new medications that can prevent loss of bone mineral density (BMD) and reduce the risk of fracture in selected groups of high-risk women has increased interest in the diagnosis of osteoporosis. The dual-energy x-ray absorptiometry (DEXA) scan is the most widely used test and costs between $100 and $200. The purpose of this study was to develop and validate a clinical prediction rule that would identify patients at the greatest risk of osteoporosis.
POPULATION STUDIED: The authors recruited 1376 Canadian women who underwent a DEXA scan of both the lumbar spine and the femoral neck as part of the Ontario Multicentre Osteoporosis Study. The mean age was 63 years, 94.9% were white, and 2.9% were Asian.
STUDY DESIGN AND VALIDITY: Patients were randomly divided into development (n = 926) and validation (n = 450) groups. Logistic regression analysis was used in the development group to identify independent risk factors for low BMD in the femoral neck and lumbar spine. This information was used to create several models with different permutations of the type and number of risk factors included. The most successful combination of variables, designated the Osteoporosis Risk Assessment Instrument (ORAI), was then tested on the validation group. The major threats to validity are the homogeneous nature of the population and the fact that these women were part of a clinical trial.
OUTCOMES MEASURED: The primary outcome was the sensitivity and specificity of the clinical prediction rule for the detection of women with a BMD T score of 2 or more standard deviations below the norm.
RESULTS: An algorithm based on the 3 risk factors of age, weight, and current estrogen use was 93.3% sensitive and 46.4% specific. The corresponding positive and negative likelihood ratios were 1.7 and 0.2, respectively. Adding other variables such as race, medical history, and lifestyle risks to the algorithm did not significantly increase the accuracy of the tool. The ORAI selects the following women as candidates for DEXA scanning: all women older than 45 years weighing less than 60 kg (132 lb), all women aged 55 to 64 years weighing less than 70 kg (155 lb) and not taking supplemental estrogen, and all women aged 65 years and older regardless of weight or current estrogen use.
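To make the decision rule concrete, the sketch below encodes the ORAI selection criteria exactly as they are summarized above and reproduces the likelihood-ratio arithmetic from the reported sensitivity and specificity. Treat it as an illustrative sketch of the rule as described here, not a validated implementation of the published instrument.

```python
def orai_recommends_dexa(age_years: float, weight_kg: float, on_estrogen: bool) -> bool:
    """ORAI selection criteria as summarized above (illustrative sketch only)."""
    if age_years >= 65:
        return True                                        # 65 years and older, regardless of weight
    if 55 <= age_years <= 64 and weight_kg < 70 and not on_estrogen:
        return True                                        # 55-64 years, <70 kg, no supplemental estrogen
    if age_years > 45 and weight_kg < 60:
        return True                                        # older than 45 years and <60 kg
    return False

# Likelihood ratios from the reported sensitivity (93.3%) and specificity (46.4%).
sens, spec = 0.933, 0.464
print(f"LR+ = {sens / (1 - spec):.1f}")    # 1.7, as reported
print(f"LR- = {(1 - sens) / spec:.2f}")    # 0.14 (rounded to 0.2 in the abstract)

print(orai_recommends_dexa(58, 65, on_estrogen=False))     # True: meets the 55-64 year criterion
```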
The ORAI is helpful for identifying women at risk for low BMD. It is a well-designed and well-validated rule that is easy to apply. However, several important issues must be better understood. For example, we do not yet know the best course of action once osteoporosis is diagnosed. With growing concerns about its cardiac effects and a clear association with breast cancer, the role of hormone replacement therapy is uncertain. The bisphosphonates and drugs like raloxifene are promising treatments for osteoporosis but are expensive and lack long-term efficacy and safety data. In addition, BMD is a disease-oriented outcome. Until we have data from patient-oriented studies of the outcomes of population-based screening, interventions aimed at lifestyle modification (eg, diet, exercise, and smoking cessation) and reduction of the risk of falling are cheaper than any of the medications, have no side effects, and are effective in reducing fractures.
Is there a clinical difference in outcomes when β-agonist therapy is delivered through a metered-dose inhaler (MDI) with a spacing device compared with standard nebulizer treatments in acutely wheezing children?
BACKGROUND: Asthma remains a leading cause of hospitalization in children. MDI therapy has been shown to be as effective as nebulized wet aerosol therapy for the treatment of acute asthma in adults, and it may work even better in children older than 2 years.1 The authors of this study investigated whether the same relationship holds true in children between the ages of 10 months and 4 years.
POPULATION STUDIED: The investigators enrolled 42 children aged 10 months to 4 years presenting to the emergency department of a large hospital in Israel. Children were not included if they had a history of cardiac disease or chronic respiratory disease (other than asthma), had an altered level of consciousness, or were in respiratory failure. Most subjects were referred from their primary care physicians to the emergency department because of the severity of their presentation.
STUDY DESIGN AND VALIDITY: This study was a randomized controlled double-blind double-dummy clinical trial. Subjects were randomly assigned to 2 groups, and randomization assignment was concealed. The first group received a standard dose of salbutamol (2.5 mg in 1.5 mL of normal saline) by nebulized aerosol therapy along with 4 puffs of placebo by MDI with a spacing device and facemask. The second group received 4 puffs of salbutamol (400 μg) by MDI with spacer and facemask along with 2 mL of normal saline by nebulized aerosol. Clinical scores (respiratory rate, pulse rate, pulse oximetry, wheezing, breath sounds, and retractions) were calculated at baseline and 15 minutes after the conclusion of each respiratory treatment. Each patient received a total of 3 treatments delivered at 20-minute intervals. The study is well designed. The authors do not mention whether any treatments were given by the referring physicians before arrival in the emergency department; antecedent β-agonist therapy could have affected the outcomes. The study was large enough to find a difference in the major outcomes (if one exists) but not to determine whether MDI therapy changes the rate of hospitalization.
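To see why a trial of 42 children cannot settle the hospitalization question, the sketch below runs a standard two-proportion sample-size calculation. The 31% baseline admission rate comes from the results; the hypothesized reduction to 15% is an assumption chosen only to illustrate the arithmetic.

```python
import math

def n_per_group(p1: float, p2: float, z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """Approximate sample size per arm for comparing two proportions
    (z_alpha = 1.96 for two-sided alpha 0.05; z_beta = 0.84 for 80% power)."""
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar)) +
                 z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Hypothetical scenario: detect a drop in hospitalization from 31% to 15%.
print(n_per_group(0.31, 0.15))   # ~108 children per arm, far more than the ~21 enrolled per arm
```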
OUTCOMES MEASURED: The 2 major outcomes were respiratory rate and the patient’s clinical score. Minor outcomes included pulse rate and room air pulse oximetry. Hospitalization rates between the groups were also compared.
RESULTS: The study groups were similar at baseline. The reduction in respiratory rate and the improvement in patients’ clinical scores were similar between groups. Side effect rates were similar in the 2 groups. A total of 31% required hospitalization, but there was no difference in the rate of hospitalization between groups.
The use of an MDI with spacer and facemask is clinically equivalent to the use of nebulized aerosol for the delivery of β-agonist therapy in acutely wheezing infants between the ages of 10 months and 4 years. Symptoms resolve similarly with the 2 methods. This study was not large enough to determine whether one administration method is superior with regard to hospitalization rate, although a recent meta-analysis1 involving studies of older children demonstrated shorter stays in MDI-treated children. Education regarding the proper use of the MDI-spacer-facemask combination (ie, the facemask should cover the mouth and nose) in infants and children is a key component of ensuring therapeutic success.
Is losartan superior to captopril in reducing all-cause mortality in elderly patients with symptomatic heart failure?
BACKGROUND: Because of their beneficial effects on mortality risk and functional status, angiotensin-converting-enzyme (ACE) inhibitors should be prescribed for all patients with heart failure and systolic left ventricular dysfunction unless specific contraindications exist. However, some physicians do not prescribe them because of fear of adverse effects. Angiotensin II type 1 receptor blockers (AT1RBs) may be better tolerated than ACE inhibitors. A secondary analysis of 49 deaths in the original Evaluation of Losartan in the Elderly (ELITE) study, in which the primary end point was the effect of treatment on renal function, showed an unexpected survival benefit for the AT1RB losartan over captopril, an ACE inhibitor. ELITE II was a larger trial designed to confirm whether losartan is superior to captopril in reducing all-cause mortality in elderly heart failure patients.
POPULATION STUDIED: The study included 3152 patients aged 60 years or older with New York Heart Association class II to IV heart failure and an ejection fraction of 40% or less. Most of the patients recruited from the 289 outpatient centers in 46 countries were white men older than 65 years who had never received an ACE inhibitor or an AT1RB. Exclusion criteria included previous intolerance of or contraindication to either study drug, systolic blood pressure less than 90 mm Hg, uncontrolled hypertension, obstructive valvular heart disease, recent cardiac procedure or event, anticipated cardiac surgery, or recent cerebrovascular event.
STUDY DESIGN AND VALIDITY: This study was a prospective randomized double-blind trial funded by the manufacturer of losartan. Designed as an event-driven superiority trial, the study had 90% power to detect a 25% relative difference in all-cause mortality between treatments. At each study center, randomization was stratified by use of β-blockers. After a single-blind run-in period of 1 to 28 days, 1578 patients were allocated to losartan (12.5 mg titrated to a maximum of 50 mg once daily) and 1574 to captopril (12.5 mg titrated to a maximum of 50 mg 3 times daily). Clinical assessments were done weekly during dose titration and then every 4 months. Periodic laboratory assessments were also performed. The appropriate study design and intention-to-treat analysis were used for this efficacy trial. The patients, the outcome assessors, and the drug safety monitoring committee were all unaware of treatment status. Concealed allocation to treatment group at each study site was assured through central block randomization. The results are applicable only to elderly patients.
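"Event-driven" means the trial continues until a prespecified number of deaths has accrued rather than for a fixed duration. The sketch below uses the standard Schoenfeld approximation to show roughly how many deaths 90% power to detect a 25% relative mortality reduction implies; this is a back-of-the-envelope calculation, not a figure reported by the investigators.

```python
import math

def schoenfeld_events(hazard_ratio: float, z_alpha: float = 1.96, z_beta: float = 1.28) -> int:
    """Approximate number of events needed for a 1:1 randomized survival comparison
    (Schoenfeld formula; z_alpha = 1.96 for two-sided alpha 0.05, z_beta = 1.28 for 90% power)."""
    return math.ceil(4 * (z_alpha + z_beta) ** 2 / math.log(hazard_ratio) ** 2)

# A 25% relative reduction in mortality corresponds to a hazard ratio of 0.75.
print(schoenfeld_events(0.75))   # ~508 deaths under these assumptions; the trial observed 530
```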
OUTCOMES MEASURED: The primary outcome was all-cause mortality, and the secondary end point was the composite of sudden cardiac death or resuscitated cardiac arrest.
RESULTS: Treatment groups were well matched demographically and for confounding variables that might affect response to treatment. Only 1 patient from each group was lost to follow-up during a median follow-up period of 18 months. A total of 280 deaths (17.5%) occurred in the losartan group compared with 250 (15.9%) in the captopril group, with an annual mortality rate of 11.7% and 10.4%, respectively (hazard ratio=1.13; 95.7% confidence interval, 0.95-1.35; P=.16). Neither of these differences was statistically significant, but power may have been an issue. Similarly, there was no significant difference in the composite of sudden death or resuscitated arrests (9.0% vs 7.3%). Fewer patients in the losartan group (excluding those who died) discontinued treatment because of side effects (9.7% vs 14.7%, P <.001).
Clinicians should continue to prescribe ACE inhibitors as initial treatment for elderly patients with symptomatic heart failure. Losartan was clearly not superior (or even equivalent) to captopril in reducing all-cause mortality and should not be used as first-line therapy for these patients.