Screening Mammography in Women Aged 70 to 79 Years


CLINICAL QUESTION: Is screening mammography in women aged 70 to 79 years beneficial?

BACKGROUND: There is limited direct evidence for or against screening mammography in elderly women. This analysis had 2 purposes: to estimate the effects of continued screening in women aged 70 to 79 years, and to predict whether it may be more cost-effective to screen only women with higher bone mineral density (BMD) because of their greater risk of developing breast cancer.

POPULATION STUDIED: The authors included a hypothetical cohort of 10,000 healthy women, all of whom had BMD testing at age 65 and biennial screening mammography until age 69.

STUDY DESIGN AND VALIDITY: This decision and cost-effectiveness analysis compared 3 strategies: (1) discontinue screening mammography after age 69; (2) continue biennial screening until age 79 only for women whose distal radial BMD is in the top 3 quartiles (the check BMD strategy); and (3) continue biennial screening for all women until age 79. The primary analysis included the costs of screening mammography ($116), working up abnormal mammograms, and treating invasive breast cancer and ductal carcinoma in situ, but not the cost of the BMD test. Probabilities included age-adjusted breast cancer incidence and 10-year mortality rates, the all-cause mortality rate, the percentage mortality reduction from screening (27%), the abnormal mammogram rate, and the breast cancer risk associated with different BMD quartiles. Costs and health benefits were discounted at 3% per year in the primary analysis. One-way sensitivity analyses were conducted for quality-adjusted life after a diagnosis of breast cancer, discount rates, BMD test cost, mortality reduction from mammography, the 10-year breast cancer mortality rate, and the breast cancer risk reduction associated with low BMD.
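The 3% discounting used in the primary analysis is standard present-value discounting of future costs and benefits. A minimal sketch of the idea (the $116 mammogram cost and 3% rate come from the study; the 10-year horizon here is purely illustrative, not a figure from the authors' model):

```python
def present_value(amount, rate=0.03, years=0):
    """Discount a future cost or health benefit back to its value today."""
    return amount / (1 + rate) ** years

# Illustration: a $116 mammogram cost incurred 10 years in the future,
# discounted at the study's 3% annual rate, is worth about $86 today.
pv = present_value(116, rate=0.03, years=10)
```

Higher discount rates (the sensitivity analysis tested up to 15%) shrink future benefits much faster, which is why the cost per year of life saved rises so sharply for women who heavily discount the future.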

An appropriately comprehensive spectrum of direct costs and effects was included, based on actual data when possible.1 The effect of screening mammography on breast cancer mortality reduction was taken from a meta-analysis of women aged 50 to 74 years. Neither indirect costs nor the disutility of having a mammogram was included, and sensitivity analyses were not performed for costs other than the BMD test. The analysis did not include other strategies, such as annual mammography or using other clinical information to stratify women who might benefit more (eg, those with other risk factors for breast cancer) or less (eg, those with comorbidities) from screening.

OUTCOMES MEASURED: The authors measured the number of deaths due to breast cancer averted, average increase in overall and quality-adjusted life expectancy, and cost per year of life saved (YLS) and quality-adjusted life year (QALY) saved.

RESULTS: Compared with discontinuing mammography at age 69 years, continued biennial screening in women with BMD in the top 3 quartiles would prevent 9.4 deaths (number needed to screen [NNS]=1064) and add an average 2.1 days to life expectancy at an incremental cost of $67,000 per year of life saved. Compared with the check BMD strategy, continued biennial mammography in all 10,000 women would prevent an additional 1.4 deaths (NNS=7143) and add only 0.3 days of life expectancy at an incremental cost of $118,000 per year of life saved. If a woman’s life utility is 0.8 after being diagnosed with treatable breast cancer, the cost per QALY saved in the check BMD strategy is $1,200,000, and the strategy of screening all women is more harmful because it leads to an incremental decrease in average life expectancy of 0.2 days. The analysis was also sensitive to discount rates (eg, for a discount rate of 15% the cost per YLS in the check BMD strategy is $313,000). Finally, if the cost of the BMD test ($50) is included, the strategies of check BMD and screen all women are equally cost-effective ($75,000 per YLS).
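The numbers needed to screen reported above follow directly from the cohort size and the deaths averted; a quick sketch of that arithmetic, using the study's figures:

```python
def number_needed_to_screen(cohort_size, deaths_prevented):
    """Women screened per breast cancer death averted (NNS)."""
    return cohort_size / deaths_prevented

# Check BMD strategy vs stopping at age 69: 9.4 deaths averted per 10,000 women
nns_check_bmd = round(number_needed_to_screen(10_000, 9.4))   # 1064

# Screening all women vs the check BMD strategy: 1.4 additional deaths averted
nns_screen_all = round(number_needed_to_screen(10_000, 1.4))  # 7143
```

The sevenfold jump in NNS for the incremental step of screening the lowest BMD quartile is what drives its unfavorable cost per year of life saved.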

RECOMMENDATIONS FOR CLINICAL PRACTICE

Continuing biennial screening mammography is of borderline cost-effectiveness in healthy women aged 70 to 79 years whose BMD is in the highest 3 quartiles (interventions that cost <$50,000 per YLS are generally considered cost-effective). It is not cost-effective, and may even be harmful, in women with lower BMD, unless they have other risk factors for breast cancer (which may include estrogen replacement therapy). It is also not cost-effective in elderly women who value the present much more than the future (ie, who have higher discount rates) or who would have a considerably lower quality of life if diagnosed with treatable breast cancer.

Author and Disclosure Information

Winnie Xu, MD, MS
Pamela Vnenchak, MD
John Smucny, MD
Lafayette Family Medicine Residency, New York. E-mail: [email protected]

Issue
The Journal of Family Practice - 49(03)
Page Number
266-267

Stool Antigen Immunoassay for Detection of H Pylori Infection


CLINICAL QUESTION: How does a noninvasive stool antigen immunoassay for detection of Helicobacter pylori infection compare with standard invasive diagnostic methods?

BACKGROUND: The reference standard for the diagnosis of H pylori infection is endoscopic biopsy and histopathologic confirmation of the organism, which is often impractical and cost-prohibitive for patients presenting with dyspepsia. Noninvasive tests include serum antibody titers, the carbon-13 urea breath test, and stool testing with either polymerase chain reaction or antigen enzyme immunoassay. Stool testing may represent an alternative noninvasive diagnostic approach that is both inexpensive and acceptable to patients.

POPULATION STUDIED: The investigators enrolled 104 consecutive patients undergoing upper endoscopy from a university gastroenterology practice. Sex distribution slightly favored men (57%).

STUDY DESIGN AND VALIDITY: All of the patients underwent a rapid urease test, histology, and culture. The reference standard for H pylori infection was a positive result for at least 2 of the 3 tests. The first stool following endoscopy was tested for the presence of H pylori antigen using the Premier Platinum HpSA Immunoassay (Meridian Diagnostics; Cincinnati, Ohio).

The study design was straightforward and readily reproducible. The investigators avoided workup bias by performing all 4 diagnostic tests on all study participants. The authors do not explicitly state whether the tests were conducted in a blinded manner; thus, it is unclear whether expectation bias was avoided. They do detail, however, which investigators performed the various study tasks, and from that information we can presume that the tests were performed in a blinded fashion. Because of possible referral bias, a higher prevalence of endoscopic disease would be expected in the study population than in a typical family practice setting.

OUTCOMES MEASURED: The investigators determined the endoscopic diagnoses for all patients. They also calculated the sensitivity, specificity, positive predictive value, and negative predictive value for the stool test.

RESULTS: The endoscopic diagnoses were as follows: nonulcer dyspepsia, 48%; active ulcer disease, 32%; gastric or duodenal erosions, 16%; and gastric polyps, 4%. The sensitivity and specificity of the stool test were 96% (95% confidence interval [CI], 90.6%-100%) and 93% (95% CI, 85.1%-99.5%), respectively. The positive and negative predictive values were 92% and 96%, respectively. The investigators did not calculate likelihood ratios (LRs); however, the data supplied allow the reader to do so: the positive LR was 12.5, and the negative LR was 0.04. Given a prevalence of H pylori infection of 40% (typical for the primary care setting), a positive test result would increase the likelihood of infection to 89%, and a negative result would decrease it to 3%.
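The post-test probabilities quoted above come from the standard odds-likelihood form of Bayes' theorem: convert the pretest probability to odds, multiply by the LR, and convert back. A minimal sketch using the review's reported figures (40% pretest prevalence; LRs of 12.5 and 0.04):

```python
def post_test_probability(pretest_prob, likelihood_ratio):
    """Update a pretest probability with a likelihood ratio via odds."""
    pretest_odds = pretest_prob / (1 - pretest_prob)
    post_odds = pretest_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Positive stool antigen result: probability of infection rises to ~89%
p_pos = post_test_probability(0.40, 12.5)

# Negative result: probability falls to ~3%
p_neg = post_test_probability(0.40, 0.04)
```

The same function applies to any test once its LRs are known, which is why LRs travel across settings better than predictive values (which depend on the study's prevalence).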

RECOMMENDATIONS FOR CLINICAL PRACTICE

This study describes an accurate, inexpensive, noninvasive stool antigen immunoassay for the detection of H pylori infection. The cost per assay is approximately $27.1 The test compares favorably with the current reference standard and represents an improvement over other noninvasive diagnostic methods, such as the carbon-13 urea breath test (it is less costly and more convenient) and serologic antibody titers (it is at least as accurate and faster than laboratory assays). Although confirmation of this study's findings in a primary care setting would be helpful, the advantages of this test make it an immediately attractive alternative to other available methods.

Author and Disclosure Information

Michele A. Spirko, MD
Virginia Commonwealth University, Richmond. E-mail: [email protected]

Issue
The Journal of Family Practice - 49(03)
Page Number
265

Finger-stick or Laboratory Serological Testing for H Pylori Antibody?


CLINICAL QUESTION: How does a new whole-blood Helicobacter pylori antibody test compare with quantitative laboratory serology?

BACKGROUND: Baseline screening for H pylori infection with an antibody test is widely used, is a reasonably accurate marker of infection, and may be cost-effective.1 Serologic tests require venipuncture, followed by a delay while the serum is sent to a reference laboratory or the sample is prepared for an in-office test. Whole-blood finger-stick antibody tests offer a simple in-office alternative with more rapid results, but the first generation of these tests was less accurate than laboratory serology. A new whole-blood finger-stick antibody test is now available (StatSimple; Saliva Diagnostic Systems; Vancouver, Washington) that may be more desirable because of its ease of use, low cost, and exemption from Clinical Laboratory Improvement Amendments (CLIA) certification.

POPULATION STUDIED: Study participants were scheduled to undergo endoscopy for clinical indications. Patients were excluded for being younger than 18 years; having a history of previous treatment for H pylori; or having used antibiotics, bismuth-containing medications, omeprazole, or lansoprazole in the previous 4 weeks. A total of 201 patients met the inclusion criteria.

STUDY DESIGN AND VALIDITY: All patients had one antral biopsy taken for a rapid urease test and 2 for histologic examination. A finger-stick was performed to obtain 100 mg of blood for the whole-blood antibody test, and venipuncture was performed to obtain serum for a quantitative enzyme-linked immunosorbent assay serologic test. Given the lack of a clear reference standard for a diagnosis of H pylori, each antibody test was measured against 2 reference standards. Reference standard 1, the more sensitive one, consisted of having either a positive rapid urease test result or a positive histologic examination result. Reference standard 2 was more specific but less sensitive and required that both biopsy results were positive. Researchers performing each test were blinded to the results of all other tests.

OUTCOMES MEASURED: The primary outcomes were the sensitivity and specificity of the whole-blood and quantitative serologic antibody tests.

RESULTS: The sensitivities of the whole-blood test and quantitative serology were not significantly different using reference standard 1 (86% vs 92%; P=.19) or reference standard 2 (90% vs 94%; P=.41). The whole-blood test had similar or slightly greater specificity than quantitative serology using reference standard 1 (88% vs 77%; P=.052) and reference standard 2 (79% vs 67%; P=.048). The positive and negative likelihood ratios for the whole-blood test using reference standard 2 were 4.3 and 0.1, respectively. Given a prevalence of H pylori infection of 40% (typical of the primary care setting in the United States), a negative test result reduced the likelihood of infection to 6%, and a positive result increased it to 73%.
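The likelihood ratios above can be recovered from the whole-blood test's sensitivity and specificity against reference standard 2 (90% and 79%). A sketch of that derivation; note that the exact figures depend on the unrounded study data, so the negative LR computed from the rounded values (~0.13) is slightly higher than the 0.1 the review reports:

```python
def likelihood_ratios(sensitivity, specificity):
    """LR+ = sens / (1 - spec); LR- = (1 - sens) / spec."""
    lr_pos = sensitivity / (1 - specificity)
    lr_neg = (1 - sensitivity) / specificity
    return lr_pos, lr_neg

# Whole-blood test vs reference standard 2: sens 90%, spec 79%
lr_pos, lr_neg = likelihood_ratios(0.90, 0.79)
# lr_pos ≈ 4.3; lr_neg ≈ 0.13 (the review rounds this to 0.1)
```

An LR+ of about 4 is a moderate rule-in test; compare the stool antigen assay's LR+ of 12.5 in the accompanying review, which shifts probability considerably further.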

RECOMMENDATIONS FOR CLINICAL PRACTICE

New-generation finger-stick whole-blood H pylori antibody testing can be simple, inexpensive, and CLIA-exempt, with sensitivity and specificity comparable with those of quantitative serologic tests. This test is not useful for following the status of treated patients. Although the authors note that the finger-stick method is low cost, they did not compare it with other available diagnostic tests. Other minimally invasive tests are also available; for example, a recent study of a stool immunoassay for H pylori antigen showed excellent sensitivity and specificity (>90%), with costs similar to those of serologic tests.2 Stool antigen testing could become the preferred modality because of its combination of low cost, high sensitivity and specificity, minimal invasiveness, and potential for rapid evaluation of treatment efficacy.

Author and Disclosure Information

Jennifer Bates, MD
Barry Saver, MD, MPH
University of Washington, Seattle. E-mail: [email protected]

Issue
The Journal of Family Practice - 49(03)
Page Number
205,265
Display Headline
Finger-stick or Laboratory Serological Testing for H Pylori Antibody?

Caffeine Consumption and the Risk of Spontaneous Abortions


CLINICAL QUESTION: Is maternal consumption of caffeine associated with an increased risk of spontaneous abortion?

BACKGROUND: Some previous studies have reported a doubling of the risk of fetal loss with caffeine use during pregnancy, some reported a risk only with large amounts of caffeine, and others have not shown any increased risk. Those studies were limited because of small sample size, suboptimal study design, and reliance on self-reporting for the quantity of caffeine consumed. The authors of this study measured the primary metabolite of caffeine, paraxanthine, to see if it is associated with the risk of spontaneous abortions.

POPULATION STUDIED: This study used data from the Collaborative Perinatal Project, a prospective study of pregnancy, labor, and child development at 12 sites in the United States. More than 42,000 women were enrolled, and 55,000 births were tracked between 1959 and 1966. The authors of this study identified 591 women in the cohort who experienced a spontaneous abortion before 140 days’ gestation. Each woman with a spontaneous abortion was matched with at least 4 control patients who were from the same site and had serum obtained on the same day of gestation.

STUDY DESIGN AND VALIDITY: This was a nested case-control study in which serum paraxanthine levels were measured in 591 women who experienced spontaneous abortions at less than 140 days’ gestation and in 2558 matched control patients who gave birth to live infants at 28 weeks’ gestation or later. Laboratory personnel were blinded to the outcome of the pregnancy. The women who had spontaneous abortions differed from the control group in several ways: they were older, smoked more, and were less likely to have vomited or to have used medications containing caffeine. Serum paraxanthine levels were higher in women who were older, white, or smokers, and in those who did not vomit during pregnancy; the odds ratios were therefore adjusted for smoking status, age, and race. Because this was a retrospective case-control study, there is potential for bias in the selection of the controls; however, the design is acceptable for answering the question.

OUTCOMES MEASURED: The primary outcome was the risk of an early spontaneous abortion (as estimated by the odds ratio) for different serum paraxanthine levels. Patients were divided into 3 groups on the basis of their paraxanthine levels: less than 50 ng/mL, 50 to 1845 ng/mL, and greater than 1845 ng/mL.

RESULTS: The women who experienced an early spontaneous abortion had higher mean serum paraxanthine concentrations than the control group (752 ng/mL vs 583 ng/mL, P <.001). However, the odds ratio for spontaneous abortion was not significantly elevated at paraxanthine levels of 1845 ng/mL or lower. When the lowest level group (<50 ng/mL) was compared with the highest level group (>1845 ng/mL), the odds ratio indicated a higher risk for the latter group (odds ratio=1.9; 95% confidence interval, 1.2-2.8). This paraxanthine level corresponds to a caffeine intake of 1100 mg, or roughly 11 cups of coffee per day in a smoker and 6 cups of coffee per day in a nonsmoker.
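For readers unfamiliar with how an odds ratio and its confidence interval are derived from a 2×2 table, the sketch below uses the standard Woolf (log) method. The cell counts are hypothetical, chosen only to yield an odds ratio near the study's reported 1.9; the study's raw counts are not reported in this summary.

```python
import math

# Hypothetical 2x2 counts for illustration only -- not the study's data.
# Rows: cases (spontaneous abortion) vs controls; columns: high paraxanthine
# (>1845 ng/mL) vs low (<50 ng/mL).
a, b = 40, 551    # cases: high exposure, low exposure
c, d = 90, 2468   # controls: high exposure, low exposure

odds_ratio = (a * d) / (b * c)

# Woolf (log) method: the standard error of ln(OR) is the root of the
# summed reciprocal cell counts; exponentiate the log-scale interval.
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```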

RECOMMENDATIONS FOR CLINICAL PRACTICE

The authors of this study used a metabolite of caffeine to evaluate the risk of spontaneous abortion from caffeine intake during pregnancy. The study has an adequate sample size and does not rely on self-reported caffeine use, which gives it an advantage over previous studies. It does not definitively establish a causal relationship between caffeine and spontaneous abortion, but it does suggest that if such a relationship exists, only very high levels of caffeine pose a threat. With these new data, physicians can more confidently discourage high levels of caffeine intake during pregnancy and may tolerate occasional caffeine use in some pregnant patients.

Author and Disclosure Information

Julie Hendrich, MD
Bruce M. LeClair, MD, MPH
Medical College of Georgia Augusta E-mail: [email protected]

Issue
The Journal of Family Practice - 49(03)
Page Number
204-205

Bisoprolol Prevents Mortality and Myocardial Infarction After Vascular Surgery


CLINICAL QUESTION: Does the perioperative administration of bisoprolol prevent nonfatal myocardial infarction (MI) or cardiovascular mortality in high-risk patients undergoing vascular surgery?

BACKGROUND: Many patients suffer cardiovascular complications following major vascular surgery, and although various interventions have been proposed to reduce the associated cardiovascular risk, none has been found to be efficacious. Since β-blockers demonstrate a major benefit in acute MI and other cardiovascular diseases, investigators studied the benefit of bisoprolol given perioperatively to high-risk patients undergoing vascular surgery. Bisoprolol is a β1-selective hydrophilic β-blocker without intrinsic sympathomimetic activity.

POPULATION STUDIED: All of the included patients were undergoing elective abdominal aortic or infrainguinal arterial reconstruction at study sites in Canada, the Netherlands, and Italy. Patients with cardiac risk factors—older than 70 years, previous MI, treatment for congestive heart failure, ventricular arrhythmias, diabetes mellitus, or reduced capacity to perform activities of daily living—underwent dobutamine stress echocardiography. Those with a positive result were considered to be high risk and were included in the study. Patients were excluded if they had significant heart wall motion abnormalities, asthma, or evidence of left main or triple coronary vessel disease during stress testing. A total of 1351 patients were screened, and 846 were found to have cardiac risk factors. Positive stress tests were documented in 173 patients, and 112 underwent randomization (59 bisoprolol plus standard care, 53 standard care alone). More than 80% of the patients were men.

STUDY DESIGN AND VALIDITY: This was a multicenter randomized controlled trial in which patients received standard perioperative care or standard care plus bisoprolol. Bisoprolol 5 mg daily was initiated at least 1 week before surgery (mean = 37 days before surgery; range = 7 to 89 days), and continued for 30 days postoperatively. Approximately 1 week after initiation, patients were reassessed and the dosage could be increased to a maximum of 10 mg daily if the heart rate remained at more than 60 beats per minute (bpm). Administration by nasogastric tube or intravenous metoprolol was substituted at times when patients were unable to take the medication orally. Bisoprolol was withheld if the heart rate dropped below 50 bpm or if the systolic blood pressure was less than 100 mm Hg.

Overall, the methods of this study were appropriate to answer the clinical question. Although physicians and patients were not blinded during the study, a monitoring committee evaluated outcomes in a masked fashion. The authors did not describe the methods used to prevent researchers from knowing to which group the patient would be assigned (concealed allocation). This lack of concealment could introduce selective enrollment of patients. The patient population studied was elderly; therefore, the impact of bisoprolol in younger or lower-risk patients undergoing vascular surgery may be less dramatic.

OUTCOMES MEASURED: The primary end points were death from cardiac causes or nonfatal myocardial infarction during the perioperative period.

RESULTS: The bisoprolol-treated group experienced fewer deaths (3.4% vs 17%, P=.02; number needed to treat [NNT]=7.3) and fewer nonfatal MIs (0% vs 17%, P <.001; NNT=5.9) than those receiving standard care alone. For the combined end point of death from cardiac causes or nonfatal MI, the overall rate in the bisoprolol group was 3.4% vs 34% in the standard care group (P <.001; NNT=3.3). The study was stopped after interim analysis because of the significant difference between groups. Investigators did not report side effects associated with the administration of bisoprolol.
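The reported numbers needed to treat follow directly from the event rates: NNT is the reciprocal of the absolute risk reduction. A minimal sketch using the rounded percentages above (the death NNT computes to ~7.4 here rather than the reported 7.3, presumably because the authors used unrounded counts):

```python
# Number needed to treat from event rates: NNT = 1 / ARR, where the
# absolute risk reduction (ARR) is control rate minus treated rate.
def nnt(control_rate, treated_rate):
    return 1 / (control_rate - treated_rate)

print(f"Cardiac death:      NNT = {nnt(0.17, 0.034):.1f}")  # ~7.4 (reported 7.3)
print(f"Nonfatal MI:        NNT = {nnt(0.17, 0.0):.1f}")    # ~5.9
print(f"Combined end point: NNT = {nnt(0.34, 0.034):.1f}")  # ~3.3
```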

RECOMMENDATIONS FOR CLINICAL PRACTICE

This well-designed clinical trial demonstrates the major impact of perioperative administration of the β-blocker bisoprolol in high-risk patients undergoing vascular surgery. For every 3 high-risk patients treated, one death or nonfatal MI is prevented. It is not known whether the benefits associated with bisoprolol will be realized with other β-blockers.

Author and Disclosure Information

Lori M. Dickerson, PharmD
Peter J. Carek, MD, MS
Medical University of South Carolina Charleston E-mail: [email protected]

Issue
The Journal of Family Practice - 49(03)
Page Number
203-204
Sections
Author and Disclosure Information

Lori M. Dickerson, PharmD
Peter J. Carek, MD, MS
Medical University of South Carolina Charleston E-mail: [email protected]

Author and Disclosure Information

Lori M. Dickerson, PharmD
Peter J. Carek, MD, MS
Medical University of South Carolina Charleston E-mail: [email protected]

CLINICAL QUESTION: Does the perioperative administration of bisoprolol prevent nonfatal myocardial infarction (MI) or cardiovascular mortality in high-risk patients undergoing vascular surgery?

BACKGROUND: Many patients suffer from cardiovascular complications following major vascular surgery, and although various interventions have been proposed to improve the associated cardiovascular risks, none has been found to be efficacious. Since b-blockers demonstrate a major benefit in acute MI and other cardiovascular diseases, investigators studied the benefit of bisoprolol when given perioperatively to high-risk patients undergoing vascular surgery. Bisoprolol is a b-1 selective hydrophilic b-blocker without intrinsic sympathomimetic activity.

POPULATION STUDIED: All of the included patients were undergoing elective abdominal aortic or infrainguinal arterial reconstruction at study sites in Canada, the Netherlands, and Italy. Patients with cardiac risk factors—older than 70 years, previous MI, treatment for congestive heart failure, ventricular arrhythmias, diabetes mellitus, or reduced capacity to perform activities of daily living—underwent dobutamine stress echocardiography. Those with a positive result were considered to be high risk and were included in the study. Patients were excluded if they had significant heart wall motion abnormalities, asthma, or evidence of left main or triple coronary vessel disease during stress testing. A total of 1351 patients were screened, and 846 were found to have cardiac risk factors. Positive stress tests were documented in 173 patients, and 112 underwent randomization (59 bisoprolol plus standard care, 53 standard care alone). More than 80% of the patients were men.

STUDY DESIGN AND VALIDITY: This was a multicenter randomized controlled trial in which patients received standard perioperative care or standard care plus bisoprolol. Bisoprolol 5 mg daily was initiated at least 1 week before surgery (mean = 37 days before surgery; range = 7 to 89 days), and continued for 30 days postoperatively. Approximately 1 week after initiation, patients were reassessed and the dosage could be increased to a maximum of 10 mg daily if the heart rate remained at more than 60 beats per minute (bpm). Administration by nasogastric tube or intravenous metoprolol was substituted at times when patients were unable to take the medication orally. Bisoprolol was withheld if the heart rate dropped below 50 bpm or if the systolic blood pressure was less than 100 mm Hg.

Overall, the methods of this study were appropriate to answer the clinical question. Although physicians and patients were not blinded during the study, a monitoring committee evaluated outcomes in a masked fashion. The authors did not describe the methods used to prevent researchers from knowing to which group the patient would be assigned (concealed allocation). This lack of concealment could introduce selective enrollment of patients. The patient population studied was elderly; therefore, the impact of bisoprolol in younger or lower-risk patients undergoing vascular surgery may be less dramatic.

OUTCOMES MEASURED: The primary end points were death from cardiac causes or nonfatal myocardial infarction during the perioperative period.

RESULTS: The bisoprolol-treated group experienced fewer deaths (3.4% vs 17%, P=.02; number needed to treat [NNT]=7.3) and fewer nonfatal MIs (0% vs 17%, P <.001; NNT=5.9) than those receiving standard care alone. For the combined end point of death from cardiac causes or nonfatal MI, the overall rate in the bisoprolol group was 3.4% vs 34% in the standard care group (P <.001; NNT=3.3). The study was stopped after interim analysis because of the significant difference between groups. Investigators did not report side effects associated with the administration of bisoprolol.

RECOMMENDATIONS FOR CLINICAL PRACTICE

This well-designed clinical trial demonstrates the major impact of perioperative administration of the b-blocker bisoprolol in high-risk patients undergoing vascular surgery. For every 3 high-risk patients treated, one death or nonfatal MI is prevented. It is not known if the benefits associated with bisoprolol will be realized with the use of other b-blockers.

CLINICAL QUESTION: Does the perioperative administration of bisoprolol prevent nonfatal myocardial infarction (MI) or cardiovascular mortality in high-risk patients undergoing vascular surgery?

BACKGROUND: Many patients suffer from cardiovascular complications following major vascular surgery, and although various interventions have been proposed to improve the associated cardiovascular risks, none has been found to be efficacious. Since b-blockers demonstrate a major benefit in acute MI and other cardiovascular diseases, investigators studied the benefit of bisoprolol when given perioperatively to high-risk patients undergoing vascular surgery. Bisoprolol is a b-1 selective hydrophilic b-blocker without intrinsic sympathomimetic activity.

POPULATION STUDIED: All of the included patients were undergoing elective abdominal aortic or infrainguinal arterial reconstruction at study sites in Canada, the Netherlands, and Italy. Patients with cardiac risk factors—older than 70 years, previous MI, treatment for congestive heart failure, ventricular arrhythmias, diabetes mellitus, or reduced capacity to perform activities of daily living—underwent dobutamine stress echocardiography. Those with a positive result were considered to be high risk and were included in the study. Patients were excluded if they had significant heart wall motion abnormalities, asthma, or evidence of left main or triple coronary vessel disease during stress testing. A total of 1351 patients were screened, and 846 were found to have cardiac risk factors. Positive stress tests were documented in 173 patients, and 112 underwent randomization (59 bisoprolol plus standard care, 53 standard care alone). More than 80% of the patients were men.

STUDY DESIGN AND VALIDITY: This was a multicenter randomized controlled trial in which patients received standard perioperative care or standard care plus bisoprolol. Bisoprolol 5 mg daily was initiated at least 1 week before surgery (mean = 37 days before surgery; range = 7 to 89 days), and continued for 30 days postoperatively. Approximately 1 week after initiation, patients were reassessed and the dosage could be increased to a maximum of 10 mg daily if the heart rate remained at more than 60 beats per minute (bpm). Administration by nasogastric tube or intravenous metoprolol was substituted at times when patients were unable to take the medication orally. Bisoprolol was withheld if the heart rate dropped below 50 bpm or if the systolic blood pressure was less than 100 mm Hg.

Overall, the methods of this study were appropriate to answer the clinical question. Although physicians and patients were not blinded during the study, a monitoring committee evaluated outcomes in a masked fashion. The authors did not describe the methods used to prevent researchers from knowing to which group the patient would be assigned (concealed allocation). This lack of concealment could introduce selective enrollment of patients. The patient population studied was elderly; therefore, the impact of bisoprolol in younger or lower-risk patients undergoing vascular surgery may be less dramatic.

OUTCOMES MEASURED: The primary end points were death from cardiac causes or nonfatal myocardial infarction during the perioperative period.

RESULTS: The bisoprolol-treated group experienced fewer deaths (3.4% vs 17%, P=.02; number needed to treat [NNT]=7.3) and fewer nonfatal MIs (0% vs 17%, P <.001; NNT=5.9) than those receiving standard care alone. For the combined end point of death from cardiac causes or nonfatal MI, the overall rate in the bisoprolol group was 3.4% vs 34% in the standard care group (P <.001; NNT=3.3). The study was stopped after interim analysis because of the significant difference between groups. Investigators did not report side effects associated with the administration of bisoprolol.
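The NNTs reported above follow directly from the absolute risk reduction: NNT = 1 / (control event rate - treatment event rate). A minimal sketch using the rounded event rates quoted in the results (small differences from the published NNTs reflect rounding of the percentages):

```python
def nnt(control_rate: float, treatment_rate: float) -> float:
    """Number needed to treat: 1 / absolute risk reduction."""
    return 1 / (control_rate - treatment_rate)

# Event rates as reported: standard care vs bisoprolol
print(f"cardiac death:      NNT = {nnt(0.17, 0.034):.1f}")
print(f"nonfatal MI:        NNT = {nnt(0.17, 0.0):.1f}")
print(f"combined end point: NNT = {nnt(0.34, 0.034):.1f}")
```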

RECOMMENDATIONS FOR CLINICAL PRACTICE

This well-designed clinical trial demonstrates the major impact of perioperative administration of the β-blocker bisoprolol in high-risk patients undergoing vascular surgery. For every 3 high-risk patients treated, one death or nonfatal MI is prevented. It is not known if the benefits associated with bisoprolol will be realized with the use of other β-blockers.

Issue
The Journal of Family Practice - 49(03)
Page Number
203-204
Display Headline
Bisoprolol Prevents Mortality and Myocardial Infarction After Vascular Surgery

The Use of Tocolytics in Preterm Labor

Article Type
Changed
Mon, 01/14/2019 - 11:09
Display Headline
The Use of Tocolytics in Preterm Labor

CLINICAL QUESTION: How effective are tocolytics for the treatment of preterm labor?

BACKGROUND: Tocolytic drugs have not been shown to reduce the number of preterm deliveries but may prolong pregnancy for up to 48 hours. Postponing delivery may allow the administration of corticosteroid drugs to enhance pulmonary maturity and reduce the severity of respiratory distress syndrome.1

POPULATION STUDIED: The patients included were women in a referred care setting presenting with preterm labor. Seventeen studies published from 1966 to 1997 involving 2785 patients met the final criteria. Five studies used ritodrine as the tocolytic; 4 used magnesium sulfate; 3 used indomethacin; 2 used terbutaline; 2 used atosiban; 1 used isoxsuprine; and 1 used ethanol.

STUDY DESIGN AND VALIDITY: The authors conducted a comprehensive search of MEDLINE and the Cochrane Controlled Trials Register to identify articles for this review. A study was included if it was a placebo-controlled randomized trial evaluating the effect of a tocolytic on women in preterm labor and reported perinatal, neonatal, or maternal outcomes.

OUTCOMES MEASURED: The primary outcomes measured were duration of prolonged pregnancy, perinatal/neonatal outcomes, and maternal adverse effects. Meta-analyses were done for each outcome for all trials and for specific types of tocolytic therapy when possible.

RESULTS: Tocolytics decreased the likelihood of delivery within 24 hours (odds ratio [OR] = 0.47; 95% confidence interval [CI], 0.29-0.77), 48 hours (OR = 0.57; 95% CI, 0.38-0.83), and 7 days (OR = 0.60; 95% CI, 0.38-0.95). Specifically, betamimetics, indomethacin, atosiban, and ethanol (but not magnesium sulfate) were associated with significant prolongation of pregnancy. Tocolytics were not associated with a significant reduction in births before 30, 32, or 37 weeks’ gestation. Tocolytics were also not associated with significantly reduced rates of perinatal death, respiratory distress syndrome, intraventricular hemorrhage, necrotizing enterocolitis, patent ductus arteriosus, neonatal sepsis, seizures, or hypoglycemia, nor with an increased rate of birth weight greater than 2500 g. Maternal adverse effects significantly associated with tocolytic use were palpitations, nausea, tremor, chorioamnionitis, hyperglycemia, hypokalemia, and the need to discontinue treatment. Betamimetics in particular were associated with increases in most maternal side effects.

RECOMMENDATIONS FOR CLINICAL PRACTICE

Tocolytic drugs can prolong pregnancy for at least 48 hours and possibly up to 7 days, but there is no convincing evidence of reduction in preterm delivery or perinatal morbidity or mortality. The delay in delivery afforded by a tocolytic agent, however, allows time for the administration of corticosteroids, which is associated with a significant reduction in neonatal morbidity and mortality.1,2 The neonatal benefits of tocolysis combined with steroid administration may outweigh the maternal risks, particularly in the setting of extreme prematurity.

Author and Disclosure Information

Christine Hsieh, MD
Thomas Jefferson University Hospital, Philadelphia, Pennsylvania E-mail: [email protected]

Issue
The Journal of Family Practice - 49(02)
Page Number
186

Best Treatment for Single-Vessel Coronary Artery Disease

Article Type
Changed
Mon, 01/14/2019 - 11:09
Display Headline
Best Treatment for Single-Vessel Coronary Artery Disease

CLINICAL QUESTION: What is the best treatment for left anterior descending artery stenosis in patients with stable angina?

BACKGROUND: Long-term studies comparing surgical bypass, angioplasty (PTCA), and medical therapy in the treatment of patients with stable angina and left anterior descending (LAD) artery stenosis are not available. LAD lesions are thought to have a worse prognosis than other single-vessel lesions; therefore, more aggressive strategies have been promoted.

POPULATION STUDIED: Consecutive patients at a single institution in Brazil were selected from 1988 to 1991. Patients had stable angina, a proximal LAD stenosis, no previous myocardial infarction, and normal left ventricular function. Of 313 patients, approximately 15% did not meet the medical or angiographic criteria, and 15% refused to participate. Baseline characteristics were similar in the 3 groups, except mean total cholesterol levels, which were 240 in the medical group, 213 in the PTCA group, and 230 in the bypass group (no test of significance was performed).

STUDY DESIGN AND VALIDITY: This was a randomized controlled trial in which participants were randomly assigned to one of 3 treatment groups (approximately 70 patients in each). Crossover from one group to another, according to symptoms, was allowed at any time. Medical therapy could include β-blockers, nitrates, calcium antagonists, and antiplatelet agents. Angiograms were repeated with the occurrence of a new ischemic event and after 5 years of follow-up. Analysis was by intention to treat. Advantages of the study design include careful clinical and angiographic case definitions, care at a single institution, and long-term, complete follow-up. Limitations include the limited generalizability of the study (most of the patients were Brazilian men) and the lack of blinding of the physicians caring for the patients. The report does not state whether allocation was concealed; researchers who knew a patient's upcoming assignment before enrollment could have introduced selective randomization.

OUTCOMES MEASURED: The primary end point was the occurrence of cardiac-related death, acute myocardial infarction, or refractory angina requiring revascularization.

RESULTS: Patients treated either with bypass or medical therapy were significantly more likely than patients treated with PTCA to be event-free at the end of 5 years: 91% in the bypass group, 76% in the medical group, and 60% in the PTCA group (P = .001 for PTCA vs the other 2 treatments). Cardiac-related deaths were similar in the 3 groups. Of 72 medically treated patients, 8 required surgery, and 4 were treated with PTCA. Of 72 PTCA patients, 30% received repeat PTCA, and 8 underwent surgery. After 5 years, significantly fewer patients treated medically were free of angina: 26% of the medical group compared with 65% of the PTCA group and 73% of the surgery group (P <.001). No study patients had refractory angina. During follow-up, 50% of all patients developed new stenoses (>50%), with no differences among groups. Rates of return to regular employment were similar among the groups.
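The authors did not report number-needed-to-treat comparisons, but the 5-year event-free rates above imply them directly. As an illustrative (hypothetical) calculation, not one made in the trial report:

```python
def patients_per_extra_event_free(rate_a: float, rate_b: float) -> float:
    """Patients treated with strategy A rather than B for one
    additional event-free patient at 5 years (1 / risk difference)."""
    return 1 / (rate_a - rate_b)

# 5-year event-free rates as reported: bypass 91%, medical 76%, PTCA 60%
print(f"bypass vs PTCA:  {patients_per_extra_event_free(0.91, 0.60):.1f}")
print(f"medical vs PTCA: {patients_per_extra_event_free(0.76, 0.60):.1f}")
```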

RECOMMENDATIONS FOR CLINICAL PRACTICE

In this trial comparing treatments for LAD disease in patients with stable angina and normal ventricular function, the PTCA group had a significantly increased risk of events during 5 years of follow-up, while the medical group had significantly increased risk of angina. Rates of refractory angina, cardiac deaths, and return to employment were similar in all groups, suggesting that outcome differences were not clinically significant. Since this trial was completed, coronary stenting has become more popular, which may lead to better outcomes than PTCA in LAD lesions,1 and the benefit of cholesterol reduction in preventing recurrent events in CAD patients has been shown to be substantial. More recent randomized controlled trials on treatment of patients with a variety of coronary syndromes support the concept of symptom-guided therapy rather than the routine use of interventional procedures.2

Author and Disclosure Information

Eric Henley, MD, MPH
Department of Family and Community Medicine Rockford, Illinois E-mail: [email protected]

Issue
The Journal of Family Practice - 49(02)
Page Number
185-186

Screening for Intracranial Aneurysms in High-Risk Relatives

Article Type
Changed
Mon, 01/14/2019 - 11:09
Display Headline
Screening for Intracranial Aneurysms in High-Risk Relatives

CLINICAL QUESTION: Should we screen for intracranial aneurysms in first-degree relatives of patients with subarachnoid hemorrhage?

BACKGROUND: A subarachnoid hemorrhage from a ruptured intracranial aneurysm is often a devastating event, with approximately 70% of affected patients dying or becoming functionally dependent. A family history of subarachnoid hemorrhage is a significant risk factor, associated with a 3- to 7-fold higher incidence. Previous studies have reported that intracranial aneurysms are found in 8% of persons with 2 relatives who have hemorrhaged. The investigators studied the risks and benefits of screening first-degree relatives (parents, siblings, or children) for aneurysms with magnetic resonance angiography (MRA).

POPULATION STUDIED: Index patients admitted for subarachnoid hemorrhage were consecutively identified at one of 2 Dutch academic health centers. The mean age of the index patients was 52 years, and 69% were women. One hundred seventy-two of 193 had living first-degree relatives and agreed to participate. Six hundred twenty-six of the 980 known relatives were screened; they were aged 20 to 70 years (mean = 41 years), and none had contraindications to MRA or surgery. The vast majority of screened relatives were either siblings or children of the index patients, and 52% were women.

STUDY DESIGN AND VALIDITY: If a definite aneurysm was seen with MRA, conventional angiography with neurosurgical consultation was recommended. Possible aneurysms were screened again with MRA 6 to 12 months later. The investigators reported the outcome of screening and surgical intervention in those who underwent surgery. No follow-up assessments were performed in relatives with normal findings or in those with aneurysms who did not undergo surgery, which complicates the interpretation of the results. On the basis of previous studies, the authors also attempted to estimate the risk of rupture, disability, and mortality if the aneurysms had not been detected. There was no control group that did not undergo screening; this also weakens the interpretation of the reported functional changes.

OUTCOMES MEASURED: The authors report the prevalence of intracranial aneurysms and, for those who underwent surgery, the intervention performed, neurologic disability 6 months postoperatively, the estimated risk of hemorrhage without surgery, and the estimated life expectancy with and without surgery.

RESULTS: Among screened relatives, 25 of 626 (4.0%) had unruptured aneurysms, and 18 of these 25 underwent conventional angiography and surgery. Surgery was not indicated in 4 relatives, and the other 3 refused intervention. In 11 of the 18 subjects who underwent conventional angiography and surgery, disability was higher 6 months postoperatively than before angiography. One of these 11 had severe complications from conventional angiography. Four patients had specific postoperative sequelae of partial hemianopia, unilateral visual loss, or anosmia. The remaining 6 had nonspecific symptoms such as headache, fatigue, impaired concentration, or emotional problems.

RECOMMENDATIONS FOR CLINICAL PRACTICE

The authors of this study do not support a general MRA screening policy for all first-degree relatives of patients with subarachnoid hemorrhage. The 2.5 years of added life expectancy for those who undergo surgery (approximately 4 weeks per person screened) often comes at the price of prolonged neurologic impairment. This study provides concrete information for discussion between doctors and relatives of patients with subarachnoid hemorrhage. For every 1000 patients screened, 40 would have an aneurysm, 30 would have surgery, 10 to 20 would have neurologic sequelae from screening and intervention, and 7 would avoid a subarachnoid hemorrhage.
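The per-1000 figures above can be reproduced, allowing for the authors' rounding, from the raw counts in the results (25 aneurysms, 18 operations, and 11 patients with worse disability among 626 relatives screened); the 7 avoided hemorrhages come from the authors' modeling rather than from these counts. A brief sketch:

```python
screened = 626  # relatives screened with MRA

# Counts reported in the results section
outcomes = {
    "aneurysm found": 25,
    "angiography and surgery": 18,
    "worse disability at 6 months": 11,
}

for label, n in outcomes.items():
    print(f"{label}: {1000 * n / screened:.0f} per 1000 screened")
```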

Author and Disclosure Information

Erik J. Lindbloom, MD, MSPH
University of Missouri–Columbia E-mail: [email protected]

Issue
The Journal of Family Practice - 49(02)
Page Number
184-185

STUDY DESIGN AND VALIDITY: If a definite aneurysm was seen with MRA, conventional angiography with neurosurgical consultation was recommended. Possible aneurysms were rescreened with MRA 6 to 12 months later. The investigators reported the outcomes of screening and of surgical intervention in those who underwent surgery. No follow-up assessments were performed in relatives with normal findings or in those with aneurysms who did not undergo surgery, which complicates the interpretation of the results. On the basis of previous studies, the authors also attempted to estimate the risk of rupture, disability, and mortality if the aneurysms had not been detected. There was no control group that did not undergo screening, which further weakens the interpretation of the reported functional changes.

OUTCOMES MEASURED: The authors report the prevalence of intracranial aneurysms and, for those who underwent surgery, the intervention performed, neurologic disability 6 months postoperatively, the estimated risk of hemorrhage without surgery, and the estimated life expectancy with and without surgery.

RESULTS: Among screened relatives, 25 of 626 (4.0%) had unruptured aneurysms, and 18 of these 25 underwent conventional angiography and surgery. Surgery was not indicated in 4 relatives, and the other 3 refused intervention. In 11 of the 18 subjects who underwent conventional angiography and surgery, disability was higher 6 months postoperatively than before angiography. One of these 11 had severe complications from conventional angiography. Four patients had specific postoperative sequelae of partial hemianopia, unilateral visual loss, or anosmia. The remaining 6 had nonspecific symptoms such as headache, fatigue, impaired concentration, or emotional problems.

RECOMMENDATIONS FOR CLINICAL PRACTICE

The findings of this study do not support a general MRA screening policy for all first-degree relatives of patients with subarachnoid hemorrhage. The 2.5 years of added life expectancy for those who undergo surgery (or approximately 4 weeks per person screened) often comes at the price of prolonged neurologic impairment. This study provides concrete information for discussion between physicians and relatives of patients with subarachnoid hemorrhage. For every 1000 relatives screened, 40 would have an aneurysm, 30 would have surgery, 10 to 20 would have neurologic sequelae from screening and intervention, and 7 would avoid a subarachnoid hemorrhage.
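The per-1000 figures quoted above can be rechecked from the raw counts in the Results section. The sketch below (illustrative only) simply scales the observed proportions among the 626 screened relatives up to 1000:

```python
# Back-of-the-envelope check of the per-1000-screened figures,
# using the raw counts reported in the Results section.
SCREENED = 626

def per_thousand(n: int, screened: int = SCREENED) -> int:
    """Scale an observed count to a rate per 1000 screened."""
    return round(1000 * n / screened)

print(per_thousand(25))  # unruptured aneurysms found -> 40
print(per_thousand(18))  # proceeded to angiography and surgery -> 29
print(per_thousand(11))  # worse disability 6 months postoperatively -> 18
```

The 7 avoided hemorrhages per 1000 cannot be rederived this way; it comes from the authors' modeled rupture-risk estimates.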

Issue
The Journal of Family Practice - 49(02)
Page Number
184-185
Display Headline
Screening for Intracranial Aneurysms in High-Risk Relatives

Oseltamivir for Flu Prevention

Article Type
Changed
Mon, 01/14/2019 - 11:09
Display Headline
Oseltamivir for Flu Prevention

CLINICAL QUESTION: Can oseltamivir be used to prevent influenza infection?

BACKGROUND: Oseltamivir reduces the replication of influenza A and B by inhibiting influenza neuraminidase, an essential enzyme involved in the final packaging of new viral particles. Rimantadine and amantadine have been used for prophylaxis but are only effective against influenza A; adverse effects and cost limit their use. Oseltamivir is currently indicated to treat acute influenza infection if given within the first 2 days of symptoms. The present study evaluated oseltamivir for the prevention of influenza in healthy persons.

POPULATION STUDIED: The researchers studied 1562 healthy subjects with a mean age of 35 years (range = 18 to 65 years); 63% were women. Key exclusion criteria included influenza vaccination in the previous year or an acute respiratory illness accompanied by fever in the week before drug administration. Patients with any indication for influenza immunization according to 1998 Centers for Disease Control and Prevention (CDC) guidelines were also excluded.

STUDY DESIGN AND VALIDITY: Two identical randomized placebo-controlled double-blinded multicenter studies were conducted. Because of a low incidence of influenza infection (38/1559, 2.4%), the investigators combined the data before unblinding. Subjects were started on oseltamivir or placebo following an increase in influenzavirus activity at the clinical site. They received either oseltamivir 75 mg once daily, 75 mg twice daily, or placebo for 6 weeks. Return visits occurred at weeks 3, 6, and 8. Participants were instructed to return to the clinic for influenzalike symptoms. Nasal and pharyngeal swabs were collected for influenzavirus culture when patients presented with illness. Influenza antibody testing was done at baseline and 8 weeks.

OUTCOMES MEASURED: The primary end point was laboratory-confirmed influenzalike illness during the 6-week period of drug administration. Influenzalike illness was defined as an oral temperature of ≥37.2°C with at least one respiratory symptom (cough, sore throat, or nasal congestion) and at least one constitutional symptom (aches, fatigue, headache, or chills or sweats). Laboratory confirmation was defined as culture of influenzavirus or a 4-fold increase in antibody titer.

RESULTS: The rates of laboratory-confirmed clinical influenza were significantly lower in the oseltamivir once-daily (6/520, 1.2%) and twice-daily (7/520, 1.3%) groups than in the placebo group (25/519, 4.8%). These differences were statistically significant (P <.001 and P = .001, respectively). The relative risk reductions for influenza with once- and twice-daily dosing were 76% (95% confidence interval [CI], 46-91) and 72% (95% CI, 40-89), respectively. The number needed to treat (NNT) to prevent one case of influenza was 29. Dropout rates were low and did not differ between the groups (3.1% to 4.0%), suggesting that the medication was well tolerated. Compliance did not differ between the groups.
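As a check, the effect sizes can be recomputed from the raw event counts. The published NNT of 29 appears consistent with pooling both oseltamivir arms; that pooling is our assumption, not a method stated in the article:

```python
import math

# Event rates reported in the Results section
placebo_risk = 25 / 519        # 4.8% laboratory-confirmed influenza
once_daily_risk = 6 / 520      # 1.2%
twice_daily_risk = 7 / 520     # 1.3%

# Relative risk reductions versus placebo
rrr_once = 1 - once_daily_risk / placebo_risk    # ~0.76 (76%)
rrr_twice = 1 - twice_daily_risk / placebo_risk  # ~0.72 (72%)

# NNT, pooling both active arms (our assumption) and rounding up
pooled_risk = (6 + 7) / (520 + 520)
nnt = math.ceil(1 / (placebo_risk - pooled_risk))
print(nnt)  # 29
```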

RECOMMENDATIONS FOR CLINICAL PRACTICE

Oseltamivir can prevent influenza infection, but the real-world utility of this agent for prophylaxis is limited. Using an estimate of $225 per 6-week course of 75 mg once daily and an NNT of 29, the cost of preventing one case of influenza with oseltamivir is $6525. In a similar unvaccinated healthy population, the average cost per person attributed to respiratory illness (including influenza) has been estimated at $152.18 (1994 dollars) per flu season.1 This estimate includes time lost from work and physician visits. The same study found that vaccination of healthy working adults saved $46.85 per person vaccinated. In a low-risk population, oseltamivir is unlikely to be cost-effective. Recent CDC guidelines state that all persons interested in decreasing their likelihood of becoming ill with influenza may be vaccinated.2 Whether oseltamivir is the agent of choice for the rare instances requiring prophylaxis will require comparison with rimantadine. In an unvaccinated population at high risk during an influenza B outbreak, oseltamivir may offer protection; however, this study documented only one case of influenza B among the 1559 study participants.
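The cost-per-case-prevented figure is straightforward arithmetic on the NNT: every treated person pays for a course, but only 1 case in every NNT courses is averted. A minimal check:

```python
# Cost of preventing one influenza case = cost per course x NNT
course_cost = 225  # estimated cost of a 6-week, 75 mg once-daily course
nnt = 29           # number needed to treat to prevent one case

cost_per_case_prevented = course_cost * nnt
print(cost_per_case_prevented)  # 6525
```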

Author and Disclosure Information

Julie M. Johnson, PharmD
Rex W. Force, PharmD
Idaho State University, Pocatello E-mail: [email protected]

Issue
The Journal of Family Practice - 49(02)
Page Number
183-184



Outcomes for New Anti-hypertensives in the Elderly

Article Type
Changed
Mon, 01/14/2019 - 11:09
Display Headline
Outcomes for New Anti-hypertensives in the Elderly

CLINICAL QUESTION: Is there a difference in efficacy between older and newer antihypertensive medications in preventing cardiovascular morbidity and mortality?

BACKGROUND: It is well known that β-blockers and diuretics decrease cardiovascular morbidity and mortality.1 However, the efficacy of newer classes of antihypertensive drugs, such as angiotensin-converting enzyme (ACE) inhibitors and calcium antagonists, has not been established.

POPULATION STUDIED: Subjects included 6628 hypertensive men and women aged 70 to 84 years from 312 health centers in Sweden. Hypertension was defined as a reading of >179 mm Hg systolic, >104 mm Hg diastolic, or both.

STUDY DESIGN AND VALIDITY: This was a prospective randomized trial. Patients were randomly assigned to 1 of 3 categories of medications: conventional antihypertensive drugs, ACE inhibitors, or calcium antagonists. Conventional drugs used were oral atenolol 50 mg, metoprolol 100 mg, pindolol 5 mg, or fixed-ratio hydrochlorothiazide 25 mg plus amiloride 2.5 mg, all given once daily. The ACE inhibitors were enalapril 10 mg or lisinopril 10 mg given once daily, and the calcium antagonists were felodipine 2.5 mg or isradipine 2.5 mg given once daily. If the target blood pressure of 160/95 mm Hg had not been reached by 2 months, combination therapy was instituted. Patients on β-blockers or ACE inhibitors were given a diuretic, while those on diuretics or calcium antagonists were given a β-blocker. After the initial dose-titration periods, patients were seen twice each year. At each visit heart rate and blood pressure were measured. Adverse events were evaluated from the patient's history. Laboratory tests and electrocardiograms were done annually and on an as-needed basis.
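The trial's step-up rule amounts to a small lookup: a patient not at the 160/95 mm Hg target after 2 months on monotherapy received a second agent chosen by the class of the first. The sketch below is illustrative only; the class labels are ours, not identifiers from the study:

```python
# Add-on agent by initial drug class, per the combination rule described above
ADD_ON = {
    "beta-blocker": "diuretic",
    "ACE inhibitor": "diuretic",
    "diuretic": "beta-blocker",
    "calcium antagonist": "beta-blocker",
}

def second_agent(first_class: str) -> str:
    """Return the add-on agent for a patient not at target on monotherapy."""
    return ADD_ON[first_class]

print(second_agent("diuretic"))       # beta-blocker
print(second_agent("ACE inhibitor"))  # diuretic
```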

OUTCOMES MEASURED: The primary end point was the rate of cardiovascular mortality. Secondary outcomes included the rates of fatal and nonfatal stroke, fatal and nonfatal myocardial infarction, atrial fibrillation, congestive heart failure, diabetes mellitus, and all-cause mortality. Subgroup analysis was performed on people with diabetes.

RESULTS: Most patients in this study had stage 3 hypertension (>180 mm Hg systolic or >110 mm Hg diastolic). The rates of the primary and secondary end points were similar among the 3 treatment arms. The only difference was fewer fatal and nonfatal myocardial infarctions and less congestive heart failure among patients taking ACE inhibitors than among those taking calcium antagonists. Although 46% of patients required more than one drug to control their hypertension, 61% to 66% of the patients in each group were on their original regimen at the end of the trial. Adverse events were common in all 3 groups: 25.5% of patients taking calcium antagonists had ankle edema, 30% on an ACE inhibitor had cough, and 25% to 28% in each group had dizziness.

RECOMMENDATIONS FOR CLINICAL PRACTICE

The risk of cardiovascular morbidity and mortality was similar among elderly patients taking conventional antihypertensives, ACE inhibitors, or calcium antagonists. It is reassuring that there was no increase in stroke with ACE inhibitors, as suggested by the Captopril Prevention Project study.2 Side effects were very common in all groups. Diuretics and β-blockers should still be recommended as first-line treatment on the basis of cost and efficacy. In general, ACE inhibitors are preferred to calcium antagonists because the former are more effective at preventing myocardial infarction and congestive heart failure.

Author and Disclosure Information

Kenneth H. Johnson, DO
Cathra M. Chappelle, MD
Eastern Maine Medical Center, Bangor E-mail: [email protected]

Issue
The Journal of Family Practice - 49(02)
Page Number
111,183

