Vitamin E Does Not Prevent Cardiovascular Events in High-Risk Patients
CLINICAL QUESTION: Does vitamin E supplementation play a role in the secondary prevention of cardiovascular events in high-risk patients?
BACKGROUND: The role of the antioxidant vitamin E in primary and secondary prevention of cardiovascular disease is unclear. Observational studies suggest that vitamin E may slow the development and progression of atherosclerosis. Four published randomized controlled trials reached differing conclusions that may be partly attributed to different study designs. This investigation, as part of the Heart Outcomes Prevention Evaluation (HOPE) study, evaluates the utility of vitamin E in secondary prevention in high-risk cardiovascular patients.
POPULATION STUDIED: This study enrolled 10,576 individuals aged 55 years and older from more than 200 outpatient centers on 3 continents. Enrolled patients had known coronary artery disease, stroke, or peripheral vascular disease. Patients were also enrolled if they had diabetes and one of the following cardiovascular risk factors: hypertension, hypercholesterolemia, tobacco use, or microalbuminuria. Exclusion criteria included a known ejection fraction of less than 40%, currently taking an angiotensin-converting enzyme (ACE) inhibitor or vitamin E, having uncontrolled hypertension or overt nephropathy, or a history of myocardial infarction or stroke within 4 weeks of enrollment. Approximately 25% of the patients were women.
STUDY DESIGN AND VALIDITY: This study was designed to test the effect of vitamin E and ramipril on cardiovascular events in high-risk patients. The authors of this study report only the results for vitamin E. Before randomization, an initial run-in period with a low dose of ramipril eliminated more than 1000 participants because of noncompliance, medication side effects, or abnormal serum creatinine or potassium levels. In an adequately concealed fashion, 9541 patients were randomized in a double-blinded fashion to either 400 IU of vitamin E from natural sources or placebo. They were followed for a mean of 4.5 years with evaluation every 6 months. The vitamin E group had 89% compliance at 5 years. The study was discontinued early because of the significant benefit demonstrated in the ramipril arm (see next POEM). The study had adequate power to detect a clinically relevant reduction in the primary outcome. The diverse population characteristics of enrollees will tend to make this study’s findings applicable to many primary care patients.
OUTCOMES MEASURED: The primary outcome was the combined end point of myocardial infarction, stroke, or cardiovascular death while secondary outcomes were death from any cause, worsening congestive heart failure, unstable angina, complications associated with diabetes, or any of the individual primary end points.
RESULTS: There were no significant differences in primary or secondary outcomes between the vitamin E group and the placebo group. No differences in primary and secondary outcomes were revealed when analyzing patient subgroups defined by sex, age, previous cardiovascular disease, medication use, diabetes, tobacco use, or ramipril versus placebo use. There was also no increase in the rate of adverse effects with the vitamin E group, in particular with respect to hemorrhagic stroke.
This well-designed study provides convincing evidence that high-dose vitamin E from natural sources during a 4- to 6-year period does not reduce the incidence of cardiovascular events in high-risk patients. Observational studies have associated vitamin consumption, particularly vitamin E, with reduced incidence of coronary artery disease. In those studies it is difficult to distinguish whether the vitamin or other lifestyle factors, such as exercise or other facets of diet, contribute to this finding. This study does not address the role of vitamin E in primary prevention or comment on any benefits of longer-term use. In many centers, this study is continuing to evaluate the benefit of vitamin E for the prevention of cancer.
Culture Confirmation of Negative Rapid Strep Test Results
CLINICAL QUESTION: Does the use of a high-sensitivity rapid strep test without throat culture result in more complications than a culture-only strategy?
BACKGROUND: Because the sensitivity of currently used rapid strep tests is relatively low, many physicians order a throat culture to confirm the negative results of a test. Recently a high-sensitivity rapid antigen test has become available. However, there are concerns that the use of the rapid test without culture confirmation of negative results can lead to increased clinical complications as a result of missed or delayed diagnosis. The authors report the clinical experience of a health system that changed its diagnostic strategy from culture-only to exclusive use of the rapid strep test.
POPULATION STUDIED: The authors identified 30,036 patient encounters with a diagnosis of pharyngitis during a 4-year period. All patients were seen at one of 2 satellite clinics of the Lahey Clinic, a multispecialty integrated delivery system in Massachusetts.
STUDY DESIGN AND VALIDITY: This is a quasiexperimental study that documented the effect of a systemwide change in diagnostic testing strategy for streptococcal pharyngitis. During the first 2 years of the study, patients with pharyngitis were tested using bacterial culture alone. During the subsequent 2 years, patients were tested using the new high-sensitivity antigen test (STREP A OIA, Biostar, Inc). The authors compared the rates of complications from pharyngitis in these 2 groups of patients. Suppurative complications included peritonsillar or retropharyngeal abscess. Nonsuppurative complications included acute rheumatic fever or poststreptococcal glomerulonephritis. All patients were identified using International Classification of Diseases-ninth revision (ICD-9) diagnostic codes. As the authors correctly point out, retrospective case series are not randomized and do not control for a possible, though unlikely, change in streptococcal virulence during the study period. In addition, ICD-9 codes can be inaccurate, although the authors did attempt to confirm diagnoses through chart review. Of greater concern, however, is the sample size in this study. On the basis of the complication rates that are subsequently reported (approximately 0.25%), each arm of the study would have required more than 500,000 patients to find a 10% difference in complications. With approximately 15,000 patients in each group, the study had the power to detect an increase in the rate of complications from 0.25% to 0.42%. Using only the suppurative complications in patients who received culture or rapid antigen test before the visit, the study would have the power to detect an increase of 0.08% to 0.19%. Finally, the primary author’s financial relationship with the manufacturer of the rapid strep test is clearly and appropriately disclosed.
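The power argument above can be sketched with a standard normal-approximation formula for comparing two proportions. The complication rates (0.25% vs 0.42%) and group size (roughly 15,000 per period) are taken from the review; the function itself is a generic textbook calculation, not the authors' actual method:

```python
import math


def normal_cdf(x: float) -> float:
    """Standard normal cumulative distribution via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))


def power_two_proportions(p1: float, p2: float, n: float) -> float:
    """Approximate power to detect a difference between rates p1 and p2
    with n patients per group, two-sided alpha = 0.05."""
    z_alpha = 1.959964  # two-sided 5% critical value
    p_bar = (p1 + p2) / 2.0
    se_null = math.sqrt(2.0 * p_bar * (1.0 - p_bar) / n)            # SE under H0
    se_alt = math.sqrt(p1 * (1 - p1) / n + p2 * (1 - p2) / n)       # SE under H1
    return normal_cdf((abs(p2 - p1) - z_alpha * se_null) / se_alt)


# Detecting 0.25% vs 0.42% with ~15,000 patients per group gives
# power in the conventional range, consistent with the review's claim.
print(power_two_proportions(0.0025, 0.0042, 15_000))
```

Running the same formula with a 10% relative increase (0.25% vs 0.275%) shows why each arm would need hundreds of thousands of patients, as the reviewers note.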
OUTCOMES MEASURED: The authors reported the number of suppurative and nonsuppurative complications during the study period. Through chart review, the authors identified and excluded patients who initially presented with the complication, because those patients had no antecedent diagnostic testing.
RESULTS: During the initial culture-only period, 15,399 patients were identified with pharyngitis, and 65% received throat culture. Thirty-seven patients (0.24%) suffered a subsequent suppurative complication, and there were no nonsuppurative complications. During the second period 14,637 patients presented with pharyngitis, and 51% received the rapid antigen test. Thirty-six patients (0.25%) had a suppurative complication, and one had a nonsuppurative complication. Of the 71 patients with suppurative complications whose charts were available, 40 had been seen before the complication, and 23 had received a throat culture (12) or rapid antigen test (11). However, only 3 throat cultures and 3 rapid antigen tests were positive. There were no statistically significant differences in the rates of complications between the 2 study periods.
Although there was no difference in complication rate between patients who were tested with the high-sensitivity rapid strep test and those who were tested with throat culture only, the sample size in this study is only large enough to detect a doubling in the complication rate. Use of the new rapid test instead of throat culture appears to be safe, but the decision to alter diagnostic testing for strep pharyngitis should not be based on this study alone.
Screening Mammography May Not Be Effective at Any Age
CLINICAL QUESTION: Does screening mammography reduce breast cancer mortality?
BACKGROUND: Randomized controlled trials have found that mammography screening for breast cancer reduces mortality. Overall, though, this effect is very small and could have been influenced by small imbalances between the study groups. The 8 randomized studies evaluated more than 450,000 women, fewer than 1% of whom died of breast cancer. In this large group, the difference in breast cancer deaths between the screened and unscreened groups was only 65 women (837 breast cancer deaths in the screened groups, 902 in the unscreened groups).
Since this difference between the 2 groups is so small, it is crucial that screened and unscreened groups have identical characteristics, so that the initial risk of breast cancer is the same. This study reviewed the methodologic quality of past trials to determine whether methodologic issues could have affected the results of these studies.
POPULATION STUDIED: The authors identified 8 randomized controlled clinical trials of screening mammography. To evaluate the studies for the meta-analysis, they carefully scrutinized the methodology of these studies. The reviewers focused primarily on how the researchers concealed the assignment to the groups so that no one knew in advance whether the next woman to be entered in the study would be randomized. If this concealment occurred properly, the randomized groups should have similar characteristics.
OUTCOME MEASURED: The primary focus of this analysis was whether the methodology of the studies, rather than a beneficial effect of mammography screening, could have accounted for the difference in mortality. The analysis compared risk of mortality—both due to any cause and due to breast cancer—in trials with and without appropriate randomization methods.
RESULTS: The authors concluded that 6 of the 8 trials used a process of randomization that failed to produce similar groups. One trial enrolled women in pairs but somehow ended up with unequal numbers of women in the 2 groups. In another trial, approximately twice as many women in the screened group were in the highest socioeconomic stratum, an imbalance that should not have occurred if the enrollment was truly random. One trial enrolled significantly fewer women in the screened group who had a preexisting breast lump. In one trial, women who were not screened were an average of 6 months older than those who received screening, a statistically significant and important difference when the outcome being considered is mortality rate. All of these imbalances suggest that the 2 groups being compared were not truly comparable.
In addition, 4 of the 6 trials failed to account consistently for the patients enrolled in their study. Patients initially enrolled in the study were not included in the final analysis, presumably because of administrative problems with managing the patient database.
These 6 flawed studies are the ones that support the usefulness of mammography screening. Breast cancer-related deaths were significantly lower in the screened group (relative risk [RR]=0.75; 95% confidence interval [CI], 0.67-0.83).
Two trials used adequate randomization and accounted for all of the enrolled women. Those 2 trials also used masked assessment of the cause of death, eliminating another source of potential bias. The combined data from those 2 trials showed no effect of screening on breast cancer mortality (RR=1.04; 95% CI, 0.84-1.27) or on total mortality (RR=0.99; 95% CI, 0.94-1.05).
Three trials evaluated the effect of mammography on overall mortality rates. All-cause mortality was not significantly affected by mammography screening. Two trials evaluated morbidity, finding surgery and radiotherapy to be performed more frequently in the screened patients. Also, benign findings in biopsy samples were reported 2 to 4 times more frequently in the screened patients.
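The relative risks and confidence intervals quoted above follow the usual log-scale (Katz) method. As a rough illustration, the pooled death counts from the background (837 screened vs 902 unscreened) can be run through it, assuming, purely for this sketch, roughly equal arms of about 225,000 women each (half of the 450,000 cited); the actual trials had unequal arm sizes, so the numbers below are illustrative only:

```python
import math


def relative_risk_ci(a: int, n1: int, b: int, n2: int) -> tuple:
    """Relative risk of group 1 vs group 2 with a 95% CI
    computed on the log scale (Katz method)."""
    rr = (a / n1) / (b / n2)
    # Standard error of ln(RR)
    se_log = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
    z = 1.959964  # two-sided 95% critical value
    lo = math.exp(math.log(rr) - z * se_log)
    hi = math.exp(math.log(rr) + z * se_log)
    return rr, lo, hi


# Illustrative: pooled deaths from the background paragraph, with
# assumed (hypothetical) equal arm sizes of 225,000 women each.
rr, lo, hi = relative_risk_ci(837, 225_000, 902, 225_000)
print(f"RR = {rr:.2f} (95% CI, {lo:.2f}-{hi:.2f})")
```

Note that with these pooled counts the confidence interval crosses 1.0, mirroring the reviewers' point that the aggregate mortality difference is fragile.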
This study casts serious doubt on the methodologic quality of studies purporting to show a benefit of screening mammography. Studies that started with truly equal groups show no benefit to screening. However, prevailing politics, patients’ preconceptions, and the fear of litigation are likely to counterbalance the results of this study. The best approach to offering mammograms to women of any age will be to give them the current facts regarding mammography screening: (1) one of every thousand women screened by mammography may be prevented from dying of breast cancer, although there may not be a benefit at all; (2) mammography screening has never been shown to help women live longer; and (3) half of the women who receive yearly mammograms for 10 years will have a false-positive result, and 19% will be subjected to biopsy.
CLINICAL QUESTION: Does screening mammography reduce breast cancer mortality?
BACKGROUND: Randomized controlled trials have found that mammography screening for breast cancer reduces mortality. Overall, though, this effect is very small and could have been influenced by very small changes. The 8 randomized studies have evaluated more than 450,000 women, with less than 1% of them dying of breast cancer. In this large group, the difference in mortality in the the screened versus the unscreened groups was only 65 women (837 breast cancer deaths in the screened groups, 902 in the unscreened groups).
Since this difference between the 2 groups is so small, it is crucial that screened and unscreened groups have identical characteristics, so that the initial risk of breast cancer is the same. This study reviewed the methodologic quality of past trials to determine whether methodologic issues could have affected the RESULTS: of these studies.
POPULATION STUDIED: The authors identified 8 randomized controlled clinical trials of screening mammography. To evaluate the studies for the meta-analysis, they carefully scrutinized the methodology of these studies. The reviewers focused primarily on how the researchers concealed the assignment to the groups so that no one knew in advance whether the next woman to be entered in the study would be randomized. If this concealment occurred properly, the randomized groups should have similar characteristics.
OUTCOME MEASURED: The primary focus of this analysis was whether the methodology of the studies, rather than a beneficial effect of mammography screening, could have accounted for the difference in mortality. The analysis compared risk of mortality—both due to any cause and due to breast cancer—in trials with and without appropriate randomization methods.
RESULTS: The authors concluded that 6 of the 8 trials used a process of randomization that failed to produce similar groups. One trial enrolled women in pairs but somehow ended up with unequal numbers of women in the 2 groups. In another trial, approximately twice as many women in the screened group were in the highest socioeconomic stratum, an imbalance that should not have occurred if the enrollment was truly random. One trial enrolled significantly fewer women in the screened group who had a preexisting breast lump. All of these imbalances suggest that the 2 groups being compared were not truly comparable. In one trial, women who were not screened were an average of 6 months older than those who received screening, a statistically significant and important difference when the outcome being considered is mortality rate.
In addition, 4 of the 6 trials failed to account consistently for the patients enrolled in their study. Patients initially enrolled in the study were not included in the final analysis, presumably because of administrative problems with managing the patient database.
These 6 flawed studies are the ones that support the usefulness of mammography screening. Breast cancer-related deaths were significantly lower in the screened group (relative risk [RR]=0.75; 95% confidence interval [CI], 0.67-0.83).
Two trials used adequate randomization and accounted for all of the enrolled women. Those 2 trials also used masked assessment of the cause of death, eliminating another source of potential bias. The combined data from those 2 trials showed no effect of screening on breast cancer mortality (RR=1.04; 95% CI, 0.84-1.27) or on total mortality (RR=0.99; 95% CI, 0.94-1.05).
Three trials evaluated the effect of mammography on overall mortality rates. All-cause mortality was not significantly affected by mammography screening. Two trials evaluated morbidity, finding surgery and radiotherapy to be performed more frequently in the screened patients. Also, benign findings in biopsy samples were reported 2 to 4 times more frequently in the screened patients.
This study casts an important doubt on the methodologic quality of studies purporting to show a benefit of screening mammography. Studies that started with truly equal groups show no benefit to screening. However, prevailing politics, patients’ preconceptions, and the fear of litigation are likely to counterbalance the results of this study. The best approach to offering mammograms to women of any age will be to give them the current facts regarding mammography screening: (1) one of every thousand women screened by mammography may be prevented from dying of breast cancer, although there may not be a benefit at all; (2) mammography screening has never been shown to help women to live longer; and, (3) half of the women who receive yearly mammograms for 10 years will have a false-positive result, and 19% will be subjected to biopsy.
Estrogen-Progestin Increases Breast Cancer Risk
CLINICAL QUESTION: For postmenopausal women taking hormone replacement therapy (HRT), does the addition of progestin to estrogen increase the risk of breast cancer above the risk associated with estrogen replacement therapy (ERT) alone?
BACKGROUND: It is clear that postmenopausal HRT is associated with an increase in the risk of a diagnosis of breast cancer. This risk is related to the duration and type of HRT used. ERT and combination estrogen-progestin hormone therapy (CHRT) are the most commonly prescribed regimens. This study examines the impact of CHRT on breast cancer risk.
POPULATION STUDIED: This study is a follow-up to the Breast Cancer Detection Demonstration Project (BCDDP) originally conducted from 1973 to 1980. The original sample included 59,907 patients. Subsequent phone interviews and mailed questionnaires conducted between 1980 and 1995 tracked participants and their behaviors related to breast health. Specifically, individual risk factors for breast cancer, the use of breast cancer screening practices (particularly mammography), the use of hormone replacement therapy (type and duration), and the rate of breast-related procedures were assessed. Participants were predominantly white (86%). For this study, subjects were excluded if they had a prophylactic mastectomy or if they had used hormone replacement in shots, patches, or creams.
STUDY DESIGN AND VALIDITY: This cohort study examined follow-up BCDDP data collected between 1980 and 1995. A total of 46,355 subjects were available for analysis. Cases of breast cancer were identified in study participants, and regression analyses were used to calculate the relative risk (RR) of breast cancer associated with different patterns of HRT use. The weaknesses of this study included the ethnic homogeneity of the sample, the use of 10-year-old data to calculate body mass index (BMI), and the lack of differentiation between continuous and sequential CHRT.
OUTCOMES MEASURED: The primary outcome measured was the incidence of breast cancer relative to the type and duration of HRT.
RESULTS: A total of 2082 cases of breast cancer were identified during 473,687 person-years of accumulated follow-up (4.4% of the women). Increases in risk of breast cancer with estrogen only (RR=1.2; 95% confidence interval [CI], 1.0-1.4; number needed to harm [NNH]=1100) and estrogen-progestin (RR=1.4; 95% CI, 1.1-1.8; NNH=641) were found only with use within the previous 4 years. Current use of CHRT was also associated with an increase in breast cancer risk. Lean women (BMI <24.4 kg/m2) who had been using CHRT for at least 4 years had the highest risk of breast cancer, and there was no statistically significant increased risk of cancer in heavier women.
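The number-needed-to-harm figures above can be approximately reconstructed from the person-year data using the standard formula NNH = 1 / (baseline rate x (RR - 1)). This is a sketch only; the published NNHs presumably rest on adjusted baseline rates, so the reconstruction will not match them exactly.

```python
# Rough reconstruction of the NNH figures quoted above (illustrative only).
cases, person_years = 2082, 473_687
baseline_rate = cases / person_years  # ~0.0044 breast cancers per person-year

def nnh(relative_risk: float) -> float:
    """Number needed to harm per person-year of use: 1 / (baseline * (RR - 1))."""
    return 1 / (baseline_rate * (relative_risk - 1))

print(f"Estrogen only (RR=1.2):      NNH ~ {nnh(1.2):.0f}")  # quoted: 1100
print(f"Estrogen-progestin (RR=1.4): NNH ~ {nnh(1.4):.0f}")  # quoted: 641
```

The crude estimates (about 1138 and 569) land in the same range as the quoted 1100 and 641, which is what a per-person-year NNH based on the unadjusted event rate would predict.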
The combination of estrogen and progestin slightly increases the risk of breast cancer beyond that associated with estrogen alone in lean women only. The risk of breast cancer with postmenopausal HRT is most apparent in current or recent users of HRT and is related to duration of use (>4 years). An increase in the diagnosis of breast cancer in women taking postmenopausal HRT does not necessarily translate to an associated increase in breast cancer mortality.1,2 For many women the benefits of HRT may outweigh the risks. The following should be considered when counseling postmenopausal women about HRT: (1) postmenopausal HRT use leads to an annual increase in the risk of breast cancer equivalent to an extra year of remaining premenopausal; (2) the increase in breast cancer risk associated with postmenopausal HRT is particularly apparent for lean white women with current or recent HRT use; (3) postmenopausal women without a uterus who are considering HRT should take estrogen alone; and (4) preventive counseling to promote favorable diet, exercise, and lifestyle behaviors is the cornerstone of healthy aging. The use of medication to manage menopause should not be viewed as the de facto clinical standard of care.
Intubation Ineffective in Vigorous Meconium-Stained Infants
CLINICAL QUESTION: Does the apparently vigorous newborn infant need to be intubated and undergo intratracheal suctioning after delivery through meconium-stained amniotic fluid (MSAF)?
BACKGROUND: Approximately 13% of all newborns are delivered through MSAF. Based on reports from the 1970s, newborns born through MSAF were believed to have a lower risk of developing meconium aspiration syndrome (MAS) if they were electively intubated and had intratracheal suctioning immediately after delivery, regardless of their clinical appearance or Apgar score. Other investigators have proposed that clinically vigorous infants may not need to be intubated and can be managed expectantly.
POPULATION STUDIED: The study population included 2094 newborn infants born to mothers from 12 participating birth centers in the United States and South America, from both university and predominantly clinical centers. Inclusion criteria included birth through MSAF, ≥37 weeks’ gestation, and apparent vigor of the child 10 to 15 seconds after birth as defined by a heart rate >100 beats per minute, reasonable tone, and spontaneous respirations. Study subjects represented a diverse population in regard to ethnicity, sex, maternal age, gravidity, and consistency of meconium fluid. Mode of delivery was mostly vaginal (78%).
STUDY DESIGN AND VALIDITY: Using computer-generated random numbers, infants were assigned to intubation and intratracheal suction (INT, n=1051) or to expectant management only (EXP, n=1043). Group assignment was concealed by using sealed opaque envelopes opened immediately before deliveries complicated by meconium staining. The policy at all birth sites was to suction the oropharynx of each meconium-stained neonate with either a catheter or bulb syringe before delivery of the infant’s shoulders or trunk. The INT group subjects were significantly more likely to have lower 1-minute Apgar scores (P <.0018). There were no other significant differences between the 2 groups. Study personnel responsible for assessing outcomes were blind to the treatment group assignment. All of the investigators remained blind to the results until the completion of the trial. Data analysis was by intention to treat.
OUTCOMES MEASURED: The major outcome studied was the development of MAS or other respiratory disorders. The time period of observation for development of these complications was not quantified.
RESULTS: Only 149 (7%) of all infants had respiratory distress, 62 (3%) of whom had MAS. There was no significant difference between the INT and EXP groups in the incidence of MAS (3.2% vs 4.5%, respectively) or in the incidence of other respiratory disorders (3.8% vs 4.5%, respectively). There was a low rate of complications from intubations (3.8%), which were generally mild and short-lived. The development of MAS was associated with cesarean birth, less than 5 maternal prenatal visits, birth through thick meconium versus thin, and not having oropharyngeal suctioning before the delivery of the shoulders. However, even in the presence of the thickest consistency MSAF, intratracheal suctioning was no better than expectant management at preventing respiratory complications. Some crossover between treatment groups did occur: 17 of the 1051 infants randomized to INT were not intubated, mostly because of difficulty with intubation. None of these infants developed MAS. A total of 64 of the 1043 infants in the EXP group were intubated after their clinical status deteriorated, and either MAS or another respiratory disorder developed in 11 of these infants.
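The "no significant difference" claim for MAS can be sanity-checked with a two-proportion z-test. This sketch reconstructs approximate event counts from the rounded percentages (3.2% of 1051 INT vs 4.5% of 1043 EXP); the reconstructed counts will not reconcile exactly with the 62 total MAS cases reported, so treat it as an illustration of the test, not a reanalysis.

```python
from math import sqrt, erf

# ASSUMPTION: counts reconstructed from the rounded percentages in the text.
n_int, n_exp = 1051, 1043
x_int = round(0.032 * n_int)  # ~34 MAS cases in the intubation group
x_exp = round(0.045 * n_exp)  # ~47 MAS cases in the expectant group

p_int, p_exp = x_int / n_int, x_exp / n_exp
p_pool = (x_int + x_exp) / (n_int + n_exp)
se = sqrt(p_pool * (1 - p_pool) * (1 / n_int + 1 / n_exp))
z = (p_exp - p_int) / se
# Two-sided p-value from the normal approximation
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

print(f"z ~ {z:.2f}, p ~ {p_value:.2f}")
```

With these reconstructed counts, z is about 1.5 and p about 0.13, comfortably above the conventional 0.05 threshold and consistent with the authors' conclusion of no significant difference.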
Immediate intubation with intratracheal suctioning was no better than expectant management in preventing respiratory complications in apparently vigorous meconium-stained newborn infants. This study provides good evidence for withholding the insertion of the endotracheal tube for vigorous newborns, regardless of how much meconium is present. Close observation appears to be okay, so do not just do something—wait. This study also provides additional support for the simple but effective procedure of bulb or catheter suctioning at the perineum before delivery of the shoulders and trunk.
Effects of Influenza Vaccination of Health Care Workers on Mortality of Elderly People in Long-Term Care: A Randomized Controlled Trial
CLINICAL QUESTION: Does vaccination of health care providers working in long-term care facilities lower mortality and rates of influenza infection in patients?
BACKGROUND: The Centers for Disease Control and Prevention (CDC) recommend influenza vaccination of all patients in long-term care facilities and of health care workers employed there. Several studies have demonstrated the effectiveness of vaccinating elderly patients, and other studies have shown decreased infection rates in vaccinated health care workers.1,2 The effectiveness of vaccinating health care workers for preventing the spread of infection from worker to patient is not as well documented. The authors of this study evaluated whether vaccinating the health care workers at long-term care facilities reduced the nosocomial infection rate and the mortality of the patients in the facilities.
POPULATION STUDIED: A total of 1217 health care workers from 20 long-term care geriatric facilities in Scotland and the 1437 patients for whom they cared during a 6-month period participated in the study. Patients’ age, sex, and degree of disability based on a modified Barthel index were recorded.
STUDY DESIGN AND VALIDITY: Long-term care facilities were matched according to the number of beds and the vaccination policy. Employees randomly selected from half of these facilities were offered an influenza vaccination. Approximately half of the health care workers who were offered a vaccination received it, compared with less than 5% of workers in the control group. A random sample of 50% of the patients at each facility underwent prospective influenza monitoring by nasal and throat swab. Because patient demographics were not well defined, it is difficult to determine if the patients and long-term care facilities in the study are similar to those in other countries. Vaccinations are not routine for the elderly population of the United Kingdom. Consequently, vaccinating a transmission source such as health care workers could be more beneficial in the United Kingdom than in the United States. Also, all-cause mortality rates were very high (13.6%-22.4%) during the 6 months of the study, denoting a higher-risk population than that encountered in many other facilities.
OUTCOMES MEASURED: The outcomes measured included the mortality rate of patients during the winter months and the number of confirmed cases of influenza A and B.
RESULTS: Influenza rates were similar (5.4% vs 6.7%). Overall, the vaccination program was associated with lower mortality (13.6% vs 22.4%, P=.014) among residents. This benefit remained even after adjusting for the higher vaccination rate of residents in the facilities in which the health care workers were not vaccinated. However, after accounting for differences in age, sex, vaccination rate, and disability between the 2 groups, the reduction in the adjusted mortality rate was not statistically significant (adjusted odds ratio=0.6; 95% confidence interval, 0.36-1.04; P=.09).
Vaccination of health care providers working in geriatric inpatient facilities was associated with a decreased mortality among residents, despite equal rates of influenza infection. However, after adjusting for the baseline health of the patients, this benefit disappeared. Practitioners should continue to strive to meet CDC guidelines for vaccination of elderly adults and health care workers, but this study provides only a small impetus to do so.
Routine Preoperative Testing Before Cataract Surgery
CLINICAL QUESTION: Does routine medical testing before cataract surgery reduce the rate of perioperative complications?
BACKGROUND: Although routine medical tests (serum chemistries, complete blood counts, and electrocardiograms) are commonly ordered before elective surgery, their value is questionable. This study prospectively analyzes the usefulness of routine medical testing before cataract surgery.
POPULATION STUDIED: Patients older than 50 years presenting to one of 9 clinical centers for elective cataract surgery were enrolled (18,189 patients and 19,557 cataract surgeries). Study centers included private, academic, and community-based hospitals. Exclusion criteria were minimal; only patients who did not speak English or Spanish, who were scheduled for general anesthesia, who had a history of a myocardial infarction in the preceding 3 months, or who had undergone routine preoperative medical testing within 28 days before enrollment were excluded.
STUDY DESIGN AND VALIDITY: This is a randomized prospective trial. Patients were given a letter to take to their primary care physician that explained the study and whether the physician was to order routine tests. One group had a battery of routine preoperative tests, and the other did not. All patients had a complete history taken and a physical examination performed before surgery. Adverse medical events were recorded on the day of surgery (intraoperative) and for 1 week following surgery (postoperative). Each adverse event was independently documented by 2 investigators and confirmed in a blinded fashion. This study is elegant in design and execution. Patient representativeness and generalizability were addressed in the study design, and crossovers were appropriately analyzed by intention to treat. The authors were specific in their aim to maximize the generalizability of their results.
OUTCOMES MEASURED: The primary outcomes measured were recorded as adverse events. These included myocardial infarction/ischemia, congestive heart failure, arrhythmia, hypertension, hypotension, cerebral infarct/ischemia, bronchospasm, respiratory failure, hypoglycemia, diabetic ketoacidosis, and oxygen desaturation.
RESULTS: Only 5.9% of the patients crossed over from one group to another, and most of those were from routine testing to no testing. There was no difference between the rates of intraoperative events in the routine testing (19.2 events per 1000 operations) and the no-testing (19.7 events per 1000 operations) groups. There was also no significant difference between the rates of postoperative events in the routine testing (12.6 events per 1000 operations) and no-testing (12.1 events per 1000 operations) groups. Subgroup analyses revealed no benefit of routine testing among groups stratified according to age, ethnicity, sex, or health status.
Routine preoperative medical testing before elective cataract surgery does not improve patient outcomes but does increase the cost of care. It contributes nothing that cannot be elicited by a careful medical history and physical examination. This confirms the age-old dictum: Do not order a test unless you know what you are going to do with the result. In this situation, physicians should rely less on technology and more on clinical skills.
Vitamin C Prevents Reflex Sympathetic Dystrophy
CLINICAL QUESTION: Can vitamin C prevent reflex sympathetic dystrophy following a wrist fracture?
BACKGROUND: Reflex sympathetic dystrophy (RSD) may result in increased morbidity, health care costs, and time lost from work. It is not known whether prevention is possible. There is some evidence that oxygen radicals are involved in the pathogenesis of RSD. Antioxidants such as vitamin C have been shown to reduce morbidity in burn injuries. This led the investigators to test vitamin C for RSD prevention.
POPULATION STUDIED: The authors of this study enrolled 115 adults aged 24 to 88 years who were evaluated in an emergency department for a fracture of at least one wrist. Seventy-nine percent of the fractures occurred in women. All patients were treated conservatively with immobilization. Patients were excluded if the reduction was unacceptable, if a secondary dislocation occurred, or if they would be unavailable for follow-up.
STUDY DESIGN AND VALIDITY: This was a double-blind placebo-controlled trial. Subjects were randomized to receive 500 mg of vitamin C or placebo daily for 50 days following immobilization. Patients were assessed in person at 1, 4, 6, 12, and 26 weeks after the fracture and interviewed by telephone after 1 year. RSD was diagnosed when 4 of 6 symptoms were present throughout an area larger than the wrist: unexplained diffuse pain, difference in skin temperature relative to the other arm, difference in skin color relative to the other arm, diffuse edema, limited active range of motion, and occurrence or increase of these signs and symptoms after activity.
In general, this was a well-designed study. Researchers were prevented from knowing to which group the patient would be assigned in the trial (concealed allocation), which prevented selective enrollment of patients. Treatment and control groups were similar, and follow-up was adequate. Menopausal status of patients was not determined.
OUTCOMES MEASURED: The primary outcome measurement was the development of RSD.
RESULTS: RSD occurred after 4 (7%) of the fractures in the vitamin C group and 14 (22%) in the placebo group (relative risk [RR]=2.91; 95% confidence interval [CI], 1.02-8.32). RSD occurred significantly more often in older patients (P=.008). There was no association between the occurrence of RSD and the side of the fracture, hand dominance, or the need for reduction. RSD occurred significantly more often in type B and C fractures (AO classification) (RR=0.37; 95% CI, 0.16-0.89). Early complaints while wearing the plaster cast were highly predictive of the occurrence of RSD (RR=0.17; 95% CI, 0.07-0.41).
Vitamin C 500 mg per day for 50 days following a wrist fracture is effective for preventing RSD. Although this dose is 10-fold higher than the recommended daily allowance, it is still well below toxic levels. This inexpensive and relatively easy treatment seems especially prudent for older patients and for those with early complaints while wearing the plaster cast. The high incidence of RSD in the control group (22%) has been found in other studies and suggests that clinicians should pay close attention to the development of this complication in patients with wrist fractures.
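As a quick arithmetic check, the effect sizes above can be recomputed from the rounded percentages quoted in the RESULTS section. This sketch is illustrative only: it uses the reported 7% and 22% rates rather than the trial's raw counts, so the values only approximate the published figures.

```python
# Sketch: recompute effect sizes for the vitamin C trial from the
# rounded percentages quoted above (approximate; not the raw counts).

def relative_risk(risk_treated: float, risk_control: float) -> float:
    """Risk of the outcome in the treated group relative to control."""
    return risk_treated / risk_control

def number_needed_to_treat(risk_treated: float, risk_control: float) -> float:
    """Reciprocal of the absolute risk reduction."""
    return 1.0 / (risk_control - risk_treated)

rsd_vitamin_c = 0.07   # RSD incidence with vitamin C (7%)
rsd_placebo = 0.22     # RSD incidence with placebo (22%)

print(relative_risk(rsd_vitamin_c, rsd_placebo))           # ~0.32 (vitamin C vs placebo)
print(number_needed_to_treat(rsd_vitamin_c, rsd_placebo))  # ~6.7: treat about 7 wrists
```

Note that the RR of 2.91 reported above expresses the same comparison in the opposite direction (placebo relative to vitamin C, roughly the reciprocal of 0.32); the small discrepancy comes from using rounded percentages here instead of the raw counts.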
Intra-Arterial Prourokinase Effective for Acute Stroke Therapy
CLINICAL QUESTION: How effective and safe is intra-arterial prourokinase (proUK) in patients with acute stroke of less than 6 hours’ duration caused by middle cerebral artery occlusion?
BACKGROUND: Intravenous tissue-type plasminogen activator (tPA) benefits acute ischemic stroke patients if given within 3 hours. Most stroke patients present after 3 hours of symptoms and are not eligible for tPA. The authors of this study investigated whether this therapeutic window can be extended to 6 hours with the use of intra-arterial proUK in patients with middle cerebral artery ischemic strokes. The investigators chose middle cerebral artery stroke because it is the most frequent site of arterial occlusion in patients with a severe stroke of less than 6 hours’ duration.
POPULATION STUDIED: Fifty-four North American centers screened 12,323 patients who presented with symptoms consistent with acute stroke. Only 180 patients (2%) were eligible for randomization. Major inclusion criteria included symptoms of less than 6 hours’ duration, symptoms consistent with middle cerebral artery stroke, head computed tomography scan negative for hemorrhage or major infarction, and angiographically proven occlusion of the middle cerebral artery. The most common cause of exclusion was failure to make the 6-hour cutoff (4053 patients, 33% of those screened). The average age of participants was 64 years. Women made up 41% of the study, and 80% of participants were white.
STUDY DESIGN AND VALIDITY: This was a randomized trial comparing 9 mg of intra-arterial proUK plus heparin with a control group receiving heparin only. The proUK was infused into the middle cerebral artery through a microcatheter over a 2-hour period. Head computed tomography scans were done at baseline, at 24 hours, and at 7 to 10 days to evaluate for parenchymal hemorrhage, and an angiogram was performed on study and control patients at 2 hours to assess final vessel patency. A neurologist blinded to treatment assignment assessed patients clinically at 7 to 10 days, 30 days, and 90 days.
Although a small sample was randomized, the study and control groups were similar. Researchers were prevented from knowing to which group patients would be assigned before entering them in the trial (concealed allocation), which prevented selective enrollment of patients. A single-blinded study was chosen for ethical concerns about infusing a placebo into the middle cerebral artery with little chance of benefit. Intention-to-treat analysis was applied.
OUTCOMES MEASURED: The primary outcomes measured were patients achieving a modified Rankin stroke score of 2 or less (slight or no disability) at 90 days and hemorrhagic transformation causing neurologic deterioration within 24 hours of treatment. Secondary outcomes included intracranial hemorrhage with neurologic deterioration at 10 days, the 90-day mortality rate, the middle cerebral artery recanalization rate, and the procedural complication rates.
RESULTS: At 90 days, 40% of the proUK patients and 25% of the control patients had no or minimal disability (P=.04; number needed to treat [NNT]=7). Intracranial hemorrhage with neurologic deterioration within 24 hours occurred in 10% of proUK patients and 2% of control patients (P=.06; NNH=12). By day 10, intracranial hemorrhage with neurologic deterioration remained at 10% in proUK patients and rose to 4% in control patients (P=.22; NNH=17). This probably signifies delayed recanalization with hemorrhagic transformation in the control group. There was no difference in the 90-day mortality rate. Procedural complications consisted of one proUK patient who had worsening of neurologic symptoms during treatment (1/121) and one patient in the proUK group who had anaphylaxis (1/121).
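The NNT/NNH figures above follow directly from the reported event rates: each is the reciprocal of the absolute risk difference between the two groups. A minimal sketch (rates taken from the text; exact rounding conventions vary by author):

```python
# Number needed to treat (benefit) or harm: reciprocal of the
# absolute risk difference between two event rates.
def number_needed(rate_a: float, rate_b: float) -> float:
    return 1 / abs(rate_a - rate_b)

# Benefit: no or minimal disability at 90 days, 40% proUK vs 25% control
print(number_needed(0.40, 0.25))   # ~6.7, reported as NNT = 7

# Harm: hemorrhage with neurologic deterioration within 24 hours, 10% vs 2%
print(number_needed(0.10, 0.02))   # ~12.5, reported as NNH = 12

# Harm by day 10: 10% vs 4%
print(number_needed(0.10, 0.04))   # ~16.7, reported as NNH = 17
```

Reading the two together: roughly 7 patients must be treated for one additional patient to achieve minimal or no disability, while roughly 1 in 12 treated patients suffers an early symptomatic hemorrhage.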
Patients with an acute ischemic stroke of less than 6 hours’ duration with angiographically proven middle cerebral artery occlusion derive some benefit from intra-arterial proUK. Thus, the therapeutic window of 3 hours required for tPA administration may potentially be lengthened to 6 hours with the use of intra-arterial proUK. This clinical benefit is countered by an increased risk of immediate deterioration due to intracranial hemorrhage. Applying the results of this study to a clinical setting will be challenging, requiring a rapid-response stroke team. Fewer than 2% of the patients presenting with stroke symptoms in this study were eligible for treatment with proUK.
Antihistamines for Atopic Dermatitis
CLINICAL QUESTION: Do antihistamines relieve itching in atopic dermatitis?
BACKGROUND: Atopic dermatitis is a common malady characterized by intense pruritus. The itch-scratch cycle exacerbates the problem and is frequently treated with antihistamines, although there is little evidence for their effectiveness.
POPULATION STUDIED: Three European trials enrolled patients ranging in age from 11 to 67 years who met inclusion criteria for atopic dermatitis.
STUDY DESIGN AND VALIDITY: This was a systematic review in which the authors evaluated randomized controlled trials examining the effect of antihistamines on pruritus in atopic dermatitis. Sixteen studies were initially identified by searching MEDLINE (1966 to 1999), The Cochrane Database of Systematic Reviews, and Best Evidence databases. The authors used a modified version of Sackett’s criteria for clinical evidence to assess quality. Thirteen trials that received a grade of C by the Sackett criteria were excluded because of lack of randomization, placebo control, blinding, or a sample size of less than 20 people. The remaining 3 studies were randomized double-blind placebo-controlled trials. However, all had small sample sizes and therefore received a grade of B. There were no grade-A studies.
Terfenadine, clemastine fumarate, and cetirizine hydrochloride were evaluated in these 3 studies. All 3 trials permitted the use of emollients and topical steroids. The first 2 studies enrolled fewer than 30 subjects but used a crossover design; this design reduces the needed sample size, since each patient serves as his or her own control, which reduces variance. The cetirizine trial enrolled 178 patients who were divided into 4 parallel comparison groups.
The 3 studies suffered from small sample sizes, raising the possibility that a true benefit of antihistamines was missed. Two of the studies excluded dropouts from their analyses, a potential threat to validity. The authors of the systematic review apparently did not contact the authors of the original studies to see whether additional data were available. Since all 3 studies used similar visual analog scales for rating pruritus and enrolled similar patient groups, the data could have been pooled in a meta-analysis. Such a meta-analysis would not have been specific to any one of the studied antihistamines but would have helped overcome concerns about sample size.
OUTCOMES MEASURED: All 3 trials used visual analog scores to report the severity of pruritus. Investigator assessments (severity of excoriation or visual analog scores) were used in 2 of the studies. A computerized method of recording symptoms by patients was used in the third.
RESULTS: No improvement in pruritus was found in the 2 smaller crossover trials of terfenadine or clemastine fumarate. In the third trial, the 13 patients taking higher doses (40 mg) of cetirizine who experienced sedation had an improved mean pruritus score. However, the range of scores overlapped considerably with that of patients who did not experience sedation, suggesting that the difference may not be significant. Unfortunately, no statistical analysis of the effect size was done.
Although the existing studies are limited in quality and quantity, this systematic review finds no evidence that second-generation antihistamines are helpful in relieving pruritus due to atopic dermatitis. Clinical trials of better quality, employing larger sample sizes and evaluating both first- and second-generation antihistamines, would be particularly helpful.