Does lipid lowering increase nonillness mortality?

BACKGROUND: Though cholesterol-lowering therapy can reduce cardiovascular morbidity and mortality, earlier studies raised concerns that reducing cholesterol concentrations might increase the risk of cancer and deaths from suicides, accidents, and violence (ie, nonillness mortality).

POPULATION STUDIED: This meta-analysis included clinical trials of cholesterol-lowering treatments in which participants were randomly assigned to a cholesterol-lowering intervention group or a control group. The investigators included only trials designed to measure the effects of treatment on clinical events and mortality. Most participants were men aged between 40 and 70 years.

STUDY DESIGN AND VALIDITY: The studies were identified using an ancestry approach (locating earlier studies cited in the reference lists of already identified articles) and a MEDLINE literature search covering 1966 to March 2000. A total of 21 trials met inclusion criteria, though only 15 contained data on nonillness mortality. Some investigators provided previously unpublished data for 4 trials, yielding data from 19 of the 21 trials. The included studies investigated cholesterol lowering by drug therapy, diet modification, or both. The most common reasons for exclusion were the use of multifactorial risk interventions and study designs not intended to monitor clinical events and cause-specific mortality.

OUTCOMES MEASURED: The outcome was nonillness mortality, defined as deaths from suicides, accidents, and violence.

RESULTS: The studies that met inclusion criteria generated approximately 338,000 patient-years of randomized clinical trial data. Overall, treatments aimed at lowering total cholesterol did not affect the rate of nonillness mortality (odds ratio [OR]=1.18; 95% confidence interval [CI], 0.91-1.52). There was no effect in studies of primary prevention, secondary prevention, or studies of the “statin” drugs. Trials of diet and non-statin drugs (13 trials including 39,260 patients) also did not show a significant difference, although there was a trend toward increased nonillness mortality (OR=1.32; 95% CI, 0.98-1.77; P=.06). The absolute risk increase in these trials was 4.7 per 1000, which translates to a number needed to harm of 213. The rate of nonillness mortality was not related to the degree of cholesterol reduction.
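
The number-needed-to-harm figure follows directly from the reported absolute risk increase; it is simply the reciprocal. A minimal sketch of the arithmetic (Python used purely for illustration):

```python
# Number needed to harm (NNH) is the reciprocal of the absolute risk increase (ARI).
ari = 4.7 / 1000              # absolute risk increase reported for the diet/non-statin trials
nnh = 1 / ari                 # 212.76..., reported as 213 after rounding
print(round(nnh))             # -> 213
```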

RECOMMENDATIONS FOR CLINICAL PRACTICE

This meta-analysis did not show a statistically significant relationship between cholesterol lowering and increased risk of nonillness mortality. We would not be benefiting our patients, especially those at highest risk for cardiovascular disease, by limiting our use of lipid-lowering therapy because of this theoretical concern.

Author and Disclosure Information

Tod Sweeney, MD
Christy Odell, MD
Joel Botler, MD
Neil Korsen, MD
Maine Medical Center, Portland
E-mail: [email protected] or [email protected]

Issue
The Journal of Family Practice - 50(04)
Page Number
297

Is oral dexamethasone as effective as intramuscular dexamethasone for outpatient management of moderate croup?

BACKGROUND: Recent meta-analyses have concluded that steroids ameliorate croup, but questions remain about the effectiveness of oral dosing.

POPULATION STUDIED: A total of 277 children with moderate croup were enrolled from the pediatric emergency department of an academic medical center. Moderate croup was defined as hoarseness and barking cough associated with retractions or stridor at rest. Children with mild disease—barky cough only without retractions—or with severe croup—cyanosis, severe retractions, or altered mental status—were excluded. Other exclusions were reactive airway exacerbation, epiglottitis, pneumonia, upper airway anomalies, immunosuppression, recent steroids, or symptoms present for more than 48 hours. The mean age was 2 years; 69% were male. Eighty-five percent had been ill for more than 24 hours, and 66% had a fever. Thus, the patients seem similar to those seen in family practice offices, but more information about the referral pattern, socioeconomic status, diagnostic work-up, or clinical status would be valuable in assessing the generalizability of this trial to nonacademic emergency department settings.

STUDY DESIGN AND VALIDITY: This was a single-blinded randomized controlled study. Patients were randomized to a single dose of dexamethasone (0.6 mg/kg, maximum dose 8 mg) administered either orally or intramuscularly (IM). The oral medication was administered as a crushed tablet mixed with flavored syrup or jelly. Nurses and parents knew the treatment status; physicians assessing the child after treatment were unaware of the mode of administration. No routine follow-up appointment was given after discharge, but an investigator masked to treatment assignment telephoned caretakers 48 to 72 hours after treatment to determine unscheduled returns for treatment and the child’s clinical status. The sample size was calculated to provide a power of 0.8 to detect a 10% difference in return visits. The Student t test and chi-square tests were used to analyze the data.
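
That sample-size statement can be sanity-checked with a standard two-proportion power calculation. A minimal sketch; the 30% baseline return rate is an assumption for illustration (the abstract does not report the rate the investigators used):

```python
# Sample size per arm to detect a 10-percentage-point drop in return visits
# (30% vs 20% assumed here), two-sided alpha = 0.05, power = 0.8.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

effect = proportion_effectsize(0.30, 0.20)        # Cohen's h for the two proportions
n_per_arm = NormalIndPower().solve_power(effect, alpha=0.05, power=0.8)
print(round(n_per_arm))                           # ~146 per arm, ~292 total; 277 were enrolled
```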

OUTCOMES MEASURED: The primary outcome was parental report of return for further care after discharge. Unscheduled returns were defined as the subsequent need for additional steroids, racemic epinephrine, and/or hospitalization. A secondary outcome was the caregiver assessment of symptom improvement at 48 to 72 hours. Outcomes important for primary care providers were not measured, such as caretaker satisfaction with treatment; missed school, daycare, or work; or costs for parents or for the hospital.

RESULTS: The groups were similar at the outset. There were no statistically significant differences between patients receiving IM versus oral dexamethasone in unscheduled returns (32% vs 25%, respectively) or unscheduled return failures (8% vs 9%, respectively), and there was no difference in caretaker reports of symptomatic improvement. Only 1 of 138 children in the oral group had emesis. Patients receiving racemic epinephrine at the first visit were more likely to return, regardless of the route of dexamethasone administration.

RECOMMENDATIONS FOR CLINICAL PRACTICE

This study provides evidence that a single dose of dexamethasone (0.6 mg/kg, maximum dose 8 mg) given orally is as effective as injectable administration for the outpatient treatment of moderate croup. Oral dexamethasone given in a syrup or jelly is well tolerated. Clinicians should feel comfortable using either oral or IM dexamethasone to treat patients with moderate croup.

Author and Disclosure Information

Warren Newton, MD, MPH
University of North Carolina, Chapel Hill
E-mail: [email protected]

Issue
The Journal of Family Practice - 50(03)
Page Number
260

Does a hip protector reduce the risk of hip fracture in frail elderly patients?

BACKGROUND: Various interventions to prevent hip fractures have been tested with mediocre success. Most have treated underlying risk factors, such as osteoporosis and fall propensity. This study evaluated the effectiveness of an external hip protector to prevent hip fractures.

POPULATION STUDIED: The trial involved 1801 ambulatory but frail elderly adults (1409 women and 392 men, mean age=82 years) from 22 community-based health care centers in Finland. All patients were aged at least 70 years, were ambulatory (assisted or unassisted), and had at least one identifiable risk factor for hip fracture.

STUDY DESIGN AND VALIDITY: Patients were randomized, with adequate concealment of allocation but in an unblinded manner, to receive a hip protector or not. The hip protector (KPH Hip Protector, Respecta, Helsinki, Finland) measured 19 cm × 9 cm, had a convex shape, and covered the greater trochanter. It was designed to shunt the energy of an impact away from the greater trochanter, the most common site of hip fracture. The 2 padded protectors were worn inside pockets of a stretchy undergarment and did not limit walking or sitting. Subjects in the hip protector group were asked to wear the protector whenever they were on their feet and especially when they were at risk for falling. Many patients randomized to receive the hip protector (204 of 650) refused to participate. The dropout rate during the 18-month study period was high (657 of 1427), mostly because of death, inability to walk, or refusal to continue in the study. Subjects from a waiting list replaced the dropouts. The sample size was sufficient to identify a 50% reduction in hip fractures over 1 year. As a group, hip protector subjects had significantly more risk factors for falls; however, statistical adjustment for these baseline differences did not alter the results. The authors compensated for the high dropout rate by using an intention-to-treat analysis and including subjects in the analysis for their period of participation. Information on other factors associated with hip fractures (race, presence of osteoporosis, and use of osteoporosis medications) would have been helpful for generalizing the results to US patients.

OUTCOMES MEASURED: The primary outcome was hip fracture. Secondary outcome variables were the number and rate of falls in the hip protector group and the number of days the subjects wore the protector.

RESULTS: During follow-up, 13 subjects in the hip protector group had a hip fracture compared with 67 controls. Hip fracture risk was significantly lower in the treatment group (21.3 vs 46.0 per 1000 person-years; relative hazard 0.4; 95% confidence interval [CI], 0.2-0.8; P=.008). The risk of other fractures was similar in the 2 groups, which supports the effectiveness of the hip protector (ie, these patients were not at elevated risk for fractures in general). Subjects in the hip protector group wore the device during 48% of all days and during 74% of all falls, suggesting that it was being worn during higher-risk times. In the hip protector group, 4 subjects had a hip fracture while wearing the device and 9 had a hip fracture while not wearing it (P=.002). A total of 41 people would have to wear a hip protector for 1 year to prevent one hip fracture (95% CI, 25-115).
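
The number needed to treat quoted above follows from the person-year fracture rates; a minimal sketch of the arithmetic (NNTs are conventionally rounded up):

```python
import math

# NNT = 1 / absolute rate reduction, using the per-person-year fracture rates.
rate_control   = 46.0 / 1000   # hip fractures per person-year, control group
rate_protector = 21.3 / 1000   # hip fractures per person-year, hip protector group
nnt = 1 / (rate_control - rate_protector)   # ~40.5 person-years per fracture prevented
print(math.ceil(nnt))                       # -> 41
```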

RECOMMENDATIONS FOR CLINICAL PRACTICE

Frail elderly adults at risk for falls should be encouraged to use these simple, cost-effective devices. The price is less than $100, far less than the cost associated with a fracture.1 The hip protector is approved by the US Food and Drug Administration, is manufactured by several companies in the United States, and can be ordered on the Internet (search term “hip protector”).

Author and Disclosure Information

Tsveti Markova, MD
Kendra Schwartz, MD, MSPH
Wayne State University, Detroit, Michigan
E-mail: [email protected]

Issue
The Journal of Family Practice - 50(03)
Page Number
259

Should calcium channel blockers be used as first-line antihypertensive therapy?

BACKGROUND: Calcium channel blockers (CCBs) are more effective than placebo in lowering blood pressure and in preventing subsequent cardiovascular outcomes. However, observational studies of short-acting CCBs and controlled trials of long-acting CCBs have shown that although blood pressure is controlled, cardiovascular event rates increase. The authors performed a meta-analysis of randomized controlled trials comparing CCBs with other first-line antihypertensives regarding their effects on cardiovascular events.

POPULATION STUDIED: The analysis included 9 studies involving 27,743 patients. Mean ages across the studies ranged from 53.9 to 76.1 years. Both men and women were represented. Follow-up was 2 to 7 years, with an estimated total follow-up of 120,000 person-years.

STUDY DESIGN AND VALIDITY: This is a meta-analysis of the existing literature. Studies were identified for inclusion through a systematic search of MEDLINE. To be included, randomized trials had to have more than 100 participants, follow-up longer than 2 years, compare CCBs with other first-line agents, and evaluate the effect on cardiovascular outcomes. The studies compared CCBs with diuretics, β-blockers, angiotensin-converting enzyme inhibitors, and clonidine. Two investigators independently abstracted data. Outcome data were analyzed by intention to treat except in one study of 429 patients. Tests for heterogeneity were performed. The meta-analysis is well done, although limited by the quality of the included studies; each of the 9 studies has a limitation. Five of the studies had an open design, and in 4 studies the authors had to contact investigators to obtain information on the primary outcomes. Two studies dealt primarily with patients with diabetes. In one study randomization favored the CCB arm, while in another the non-CCB arm had a more favorable baseline. The dropout rate for the studies ranged from 7% to 60%. A variety of sensitivity analyses were done to determine whether one study, one drug type, or one type of patient profile drove the results. This did not appear to be the case, although some of the sensitivity analyses had insufficient power to answer this question confidently.

OUTCOMES MEASURED: The outcomes measured were changes in systolic and diastolic blood pressure, acute myocardial infarctions, congestive heart failure, stroke, and all-cause mortality. The authors also evaluated the effect of treatment on the combined outcome of major cardiovascular events including acute myocardial infarction, congestive heart failure, stroke, and cardiovascular mortality.

RESULTS: CCBs lowered both systolic and diastolic blood pressure comparably with first-line agents. CCBs had a higher risk of acute myocardial infarction (odds ratio [OR]=1.26; 95% confidence interval [CI], 1.11-1.43), congestive heart failure (OR=1.25; 95% CI, 1.07-1.46), and major cardiovascular events (OR=1.10; 95% CI, 1.02-1.18). CCBs were comparable with other agents for reducing the risk of stroke (OR=0.90; 95% CI, 0.80-1.02) and all-cause mortality (OR=1.03; 95% CI, 0.94-1.13).

RECOMMENDATIONS FOR CLINICAL PRACTICE

CCBs should not be used as first-line antihypertensive therapy in patients at risk for coronary heart disease and heart failure. Although CCBs lower blood pressure, their effect on preventing acute myocardial infarction, congestive heart failure, and overall cardiovascular mortality is less favorable than that of first-line therapies. The risk of stroke and overall mortality is comparable with first-line therapy. In targeted populations, such as Asians or those with isolated hypertension and no risk factors for coronary artery disease, CCBs might be considered as first-line agents. This meta-analysis supports the recommendation of the Sixth Report of the Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure (JNC VI): use diuretics and β-blockers as first-line agents.

Author and Disclosure Information

Carin E. Reust, MD
University of Missouri-Columbia
E-mail: [email protected]

Issue
The Journal of Family Practice - 50(03)
Page Number
258

Is teething in infants associated with fever or other symptoms?

BACKGROUND: Parents and clinicians have traditionally attributed many symptoms to teething, such as fever, pain, irritability, diarrhea, drooling, and sleep disturbance. However, little evidence exists to support this claim. The authors investigated the relationship between tooth eruption, fever, and teething symptoms.

POPULATION STUDIED: All children aged 6 months to 2 years from 3 Australian daycare centers were eligible for the study if they attended at least 3 days a week. Twenty-one children (78% of eligible children) participated (mean age=14.4 months), and all completed the study.

STUDY DESIGN AND VALIDITY: This was a prospective cohort study. Data on symptoms for each child were collected using questionnaires completed by parents each morning and by daycare center staff each afternoon over a 7-month period. A dental therapist examined each of the children daily over the same time period for signs of tooth eruption and measured each child’s temperature using an infrared tympanic thermometer. Tooth eruptions were defined as the first day a tooth edge emerged from the oral mucosa and remained consistently visible. Toothdays were defined as the 5 days leading up to a tooth eruption. Non-toothdays were defined as days more than 28 days clear of an eruption. Data were compared by logistic regression analysis. The study was limited in several ways. The small sample size may have affected the power of the study to detect small but clinically significant differences. There was an attempt at blinding the participants and their parents to the purpose of the study, but this blinding was incomplete and was lost over time. Parents and daycare staff might have been inclined to over-report symptoms if a child was having a tooth eruption. There was no mention made in the study of parental administration of antipyretic or analgesic medication that may have been used to treat symptoms. Use of such medications might have affected temperature readings or the results of daycare staff questionnaires. Also, tympanic temperature readings have been found inaccurate for detecting fever compared with core temperature readings. Finally, a substantial portion of data was missing from the report (6% of dental therapist data, 13% of staff member data, and 17% of parent data) without adequate explanation.

OUTCOMES MEASURED: Tympanic temperature and symptom questionnaires (mood, wellness/illness, drooling, sleep, diarrhea/constipation, strong diapers, rashes, and flushing) were compared on toothdays and non-toothdays. A questionnaire was also given to parents at the end of the study that assessed their beliefs about which symptoms their child experienced.

RESULTS: Over the 7 months of the study, 2067 days of data were collected. There were 90 tooth eruptions, 236 toothdays, and 895 non-toothdays recorded. There was no statistically significant difference found in tympanic temperature when comparing toothdays to non-toothdays. Of the 32 separate analyses of symptoms that were performed, only parent-reported diarrhea was associated with tooth eruption (odds ratio=1.86; 95% confidence interval, 1.26-2.73). However, this association disappeared when looking at the 10 days leading up to an eruption or the 5 days on either side of an eruption and did not exist as reported by childcare staff. Children with fever or most other symptoms tended to be younger, suggesting that age could potentially confound any observed relationship between tooth eruptions and symptoms. In the final questionnaire, all parents retrospectively reported that their child suffered a variety of teething symptoms.
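
The caution warranted by the single positive finding is easy to quantify: with 32 significance tests at the conventional .05 level, at least one spurious association is more likely than not. A minimal sketch, assuming for simplicity that the tests were independent:

```python
# Expected chance findings when running many tests at alpha = 0.05.
n_tests, alpha = 32, 0.05
expected_false_positives = n_tests * alpha      # 1.6 positives expected by chance alone
p_at_least_one = 1 - (1 - alpha) ** n_tests     # ~0.81 if the tests were independent
print(round(expected_false_positives, 1), round(p_at_least_one, 2))   # -> 1.6 0.81
```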

RECOMMENDATION FOR CLINICAL PRACTICE

Many parents believe that teething causes a range of symptoms (fever, irritability, sleep disturbance, drooling, and so forth). This study provided no conclusive evidence that a relationship exists between the eruption of teeth and the experience of symptoms. Temperature greater than 38°C or other serious symptoms in an infant should not be regarded by clinicians as due to teething and should be evaluated appropriately.

Author and Disclosure Information

Jeremiah Frank, MD
Jonathan Drezner, MD
University of Washington Medical Center, Seattle
E-mail: [email protected]

Issue
The Journal of Family Practice - 50(03)
Page Number
257
Sections
Author and Disclosure Information

Jeremiah Frank, MD
Jonathan Drezner, MD
University of Washington Medical Center Seattle E-mail: [email protected]

Author and Disclosure Information

Jeremiah Frank, MD
Jonathan Drezner, MD
University of Washington Medical Center Seattle E-mail: [email protected]

BACKGROUND: Parents and clinicians have traditionally attributed to teething many symptoms, such as fever, pain, irritability, diarrhea, drooling, and sleep disturbance. However, little evidence exists to support this claim. The authors investigated the relationship between tooth eruption, fever, and teething symptoms.

POPULATION STUDIED: All children aged 6 months to 2 years from 3 Australian daycare centers were eligible for the study if they attended at least 3 days a week. Twenty-one children (78% of eligible children) participated (mean age=14.4 months), and all completed the study.

STUDY DESIGN AND VALIDITY: This was a prospective cohort study. Data on symptoms for each child were collected using questionnaires completed by parents each morning and by daycare center staff each afternoon over a 7-month period. A dental therapist examined each of the children daily over the same time period for signs of tooth eruption and measured each child’s temperature using an infrared tympanic thermometer. Tooth eruptions were defined as the first day a tooth edge emerged from the oral mucosa and remained consistently visible. Toothdays were defined as the 5 days leading up to a tooth eruption. Non-toothdays were defined as days more than 28 days clear of an eruption. Data were compared by logistic regression analysis. The study was limited in several ways. The small sample size may have affected the power of the study to detect small but clinically significant differences. There was an attempt at blinding the participants and their parents to the purpose of the study, but this blinding was incomplete and was lost over time. Parents and daycare staff might have been inclined to over-report symptoms if a child was having a tooth eruption. There was no mention made in the study of parental administration of antipyretic or analgesic medication that may have been used to treat symptoms. Use of such medications might have affected temperature readings or the results of daycare staff questionnaires. Also, tympanic temperature readings have been found inaccurate for detecting fever compared with core temperature readings. Finally, a substantial portion of data was missing from the report (6% of dental therapist data, 13% of staff member data, and 17% of parent data) without adequate explanation.

OUTCOMES MEASURED: Tympanic temperature and symptom questionnaires (mood, wellness/illness, drooling, sleep, diarrhea/constipation, strong diapers, rashes, and flushing) were compared on toothdays and non-toothdays. A questionnaire was also given to parents at the end of the study that assessed their beliefs about which symptoms their child experienced.

RESULTS: Over the 7 months of the study, 2067 days of data were collected. There were 90 tooth eruptions, 236 toothdays, and 895 non-toothdays recorded. There was no statistically significant difference found in tympanic temperature when comparing toothdays to non-toothdays. Of the 32 separate analyses of symptoms that were performed, only parent-reported diarrhea was associated with tooth eruption (odds ratio=1.86; 95% confidence interval, 1.26-2.73). However, this association disappeared when looking at the 10 days leading up to an eruption or the 5 days on either side of an eruption and did not exist as reported by childcare staff. Children with fever or most other symptoms tended to be younger, suggesting that age could potentially confound any observed relationship between tooth eruptions and symptoms. In the final questionnaire, all parents retrospectively reported that their child suffered a variety of teething symptoms.

RECOMMENDATION FOR CLINICAL PRACTICE

Many parents believe that teething causes a range of symptoms (fever, irritability, sleep disturbance, drooling, and so forth). This study provided no conclusive evidence that a relationship exists between the eruption of teeth and the experience of symptoms. Temperature greater than 38ÞC or other serious symptoms in an infant should not be regarded by clinicians as due to teething and should be evaluated appropriately.

Is rofecoxib safer than naproxen?

BACKGROUND: The adverse gastrointestinal effects of older nonselective nonsteroidal anti-inflammatory drugs (NSAIDs) are well documented. These effects are attributed to the drugs’ inhibition of cyclooxygenase-1 (COX-1), which is found in the gastrointestinal tract. Newer NSAIDs that selectively inhibit cyclooxygenase-2 (COX-2) cause fewer endoscopically proven gastrointestinal erosions and ulcerations than nonselective NSAIDs that inhibit both COX-1 and COX-2. Because that is disease-oriented evidence, it remains unclear whether selective NSAIDs are truly safer in the clinical setting. This study addresses the following question: Does rofecoxib, a selective COX-2 NSAID, lead to fewer clinically significant gastrointestinal events than naproxen, a nonselective NSAID?

POPULATION STUDIED: Patients were aged 50 years or older with rheumatoid arthritis (or 40 years or older with rheumatoid arthritis and taking long-term glucocorticoids) and were expected to be taking an NSAID for at least 1 year. Patients were recruited from 301 centers in 22 countries.

STUDY DESIGN AND VALIDITY: This was a randomized, double-blind, controlled trial. In general it was a well-designed study with no major flaws, although the randomization process and the steps taken to conceal treatment assignment were not described in detail. A total of 8076 patients were assigned to either 50 mg of rofecoxib once daily or 500 mg of naproxen twice daily. Patients were not allowed to take other NSAIDs but could take antacids and H2-receptor antagonists. Median follow-up was 9.0 months. Patients were followed until the study ended even if they stopped taking the study medications, although only 71.1% continued to take their assigned medication until the end of the study. Rates of discontinuation (29.3% for rofecoxib, 28.5% for naproxen), discontinuation for adverse events (16.4% and 16.1%, respectively), and discontinuation for lack of efficacy (6.3% and 6.5%, respectively) were similar between the groups. A committee blinded to treatment assignment assessed the end points using preset criteria.

OUTCOMES MEASURED: Two outcomes were measured: the total number of gastrointestinal events (gastroduodenal perforation or obstruction, upper gastrointestinal bleeding, and symptomatic gastroduodenal ulcer) and the number of complicated events (perforation, obstruction, and severe upper gastrointestinal bleeding). Also, the effectiveness of medication in reducing disease activity was measured using the Global Assessment of Disease Activity questionnaire. Finally, all episodes of confirmed and unconfirmed gastrointestinal bleeding were analyzed.

RESULTS: The risk of a gastrointestinal event over the study period was lower for patients receiving rofecoxib (2.1 vs 4.5 events per 100 patient-years; number needed to treat [NNT]=42). The risk of a complicated event was also lower with rofecoxib (0.6 vs 1.6 events per 100 patient-years; NNT=100). Overall mortality was similar between the 2 groups. Both drugs were equally effective in relieving the symptoms of rheumatoid arthritis. Of note, the rate of myocardial infarction was higher in the rofecoxib group (0.4%) than in the naproxen group (0.1%); however, mortality rates from cardiovascular causes were similar between the 2 groups. The difference in the rate of myocardial infarction has been attributed to the antiplatelet effect (related to COX-1) of traditional NSAIDs, which the selective NSAIDs do not possess.
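
The reported NNTs follow directly from the absolute difference in event rates. A quick check of that arithmetic, using only the figures quoted above:

```python
# Number needed to treat (NNT) from event rates expressed per
# 100 patient-years, using the rates reported above.
def nnt(rate_control: float, rate_treated: float) -> float:
    """NNT = 1 / absolute risk reduction (rates per 100 patient-years)."""
    arr = (rate_control - rate_treated) / 100
    return 1 / arr

print(round(nnt(4.5, 2.1)))  # all GI events: 42
print(round(nnt(1.6, 0.6)))  # complicated events: 100
```

In other words, roughly 42 patient-years of rofecoxib rather than naproxen are needed to prevent one gastrointestinal event, and 100 to prevent one complicated event.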

RECOMMENDATIONS FOR CLINICAL PRACTICE

The risk of gastrointestinal events is lower with rofecoxib than with naproxen in patients treated continuously for 1 year at standard doses. The absolute difference between the 2 agents is small, however, and should be weighed against the increased cost of rofecoxib. A recent study comparing celecoxib with ibuprofen and diclofenac showed similar results.1 Notably, in that study, patients taking celecoxib along with aspirin for cardiovascular prophylaxis had the same rate of adverse gastrointestinal events as those taking ibuprofen or diclofenac; even a small amount of aspirin appears to negate the advantage of the selective NSAID. A selective NSAID may be a better choice for patients at high risk for adverse gastrointestinal events who need long-term treatment, while a nonselective NSAID is probably the better choice for patients at low risk who require short-term therapy.

Author and Disclosure Information

Alan Adelman, MD, MS
Pennsylvania State University, Hershey
[email protected]

Issue
The Journal of Family Practice - 50(03)
Page Number
204

Is imipramine or buspirone treatment effective in patients wishing to discontinue long-term benzodiazepine use?

BACKGROUND: Discontinuation of benzodiazepines in patients on long-term treatment may be associated with restlessness, agitation, increased anxiety, insomnia, irritability, palpitations, and many other troublesome symptoms. Thus, patients with anxiety disorders taking long-term benzodiazepine therapy have difficulty successfully discontinuing treatment. Approaches to discontinuation include a gradual taper, cognitive-behavioral therapy, and adjunctive pharmacologic treatment.

POPULATION STUDIED: A total of 107 adult patients with generalized anxiety disorder were recruited from physician offices and by notices in the media. The patients had taken benzodiazepines for an average of 8.5 years (range=1-31 years); 91% had previously attempted discontinuation (average=3.4 attempts). Interestingly, only 24% of the patients were satisfied with their benzodiazepine therapy. The mean age was 48 years (range=22-77 years); 45% were women. All were treated at a psychopharmacology research unit and returned to the care of their family physicians at the end of the study.

STUDY DESIGN AND VALIDITY: After 2 to 4 weeks on a steady dose of their benzodiazepine, patients were assigned to double-blind treatment with imipramine 25 mg, buspirone 5 mg, or placebo. Each study drug was titrated over 2 weeks to a goal of 6 capsules daily in divided doses. After 4 weeks of concomitant treatment with the benzodiazepine and the assigned study drug, the benzodiazepine dose was tapered by approximately 25% each week for 4 to 6 weeks. The taper was followed by a 5-week benzodiazepine-free phase, during which the assigned study drug was continued for 3 weeks and then replaced by placebo for the final 2 weeks. Patients were monitored weekly for withdrawal symptoms, anxiety, and depression. The study has several limitations. There is no mention of the method of randomization or of concealed allocation. Thirty-two patients (30%) did not complete the taper phase, and these patients were not all accounted for; that is, there was no intention-to-treat analysis. The number of patients remaining in each treatment arm was small, so the study may not have been large enough to detect differences that actually exist. Finally, there was no comparison of the demographic or clinical characteristics of the 3 treatment groups.

OUTCOMES MEASURED: The main outcomes were (1) taper success, defined as a benzodiazepine-free state at 12 weeks post-taper, and (2) the severity of benzodiazepine discontinuation symptoms as rated by both physicians and patients.

RESULTS: At 3 months after benzodiazepine discontinuation, 82.5% of the imipramine-treated patients were benzodiazepine-free compared with 37.5% of the placebo-treated patients (P <.01; number needed to treat=2). Buspirone was less effective, with a success rate not statistically different from that of placebo. However, the severity of withdrawal symptoms was worse in the imipramine-treated patients than in the buspirone- and placebo-treated patients (16.6 vs 8.9 and 10.4, respectively) on the 38-item physician-rated checklist (P <.03). Similar results were obtained with the patient-rated checklist, although those data were not shown. Dry mouth occurred significantly more frequently in imipramine-treated patients than in buspirone- and placebo-treated patients (number needed to harm=2).
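
The NNT of 2 comes straight from the difference in success rates; a minimal check, using the numbers quoted above:

```python
# Absolute risk reduction (ARR) and NNT for benzodiazepine-free
# status at 3 months, from the success rates quoted above.
success_imipramine = 0.825
success_placebo = 0.375

arr = success_imipramine - success_placebo  # 0.45
nnt = 1 / arr                               # ~2.2, reported as 2
print(f"ARR = {arr:.2f}, NNT = {nnt:.1f}")
```

An absolute difference of 45 percentage points is unusually large for a drug trial, which is why the NNT is so small.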

RECOMMENDATIONS FOR CLINICAL PRACTICE

This small study indicates that imipramine is a viable adjunctive agent for promoting benzodiazepine discontinuation in motivated patients who are dissatisfied with their treatment, although it does not decrease the severity of withdrawal symptoms compared with placebo. Buspirone did not affect discontinuation rates, but the study probably lacked sufficient power to detect a benefit if one truly exists. Imipramine may be a useful therapy to help patients discontinue benzodiazepine use, though a confirmatory study would be welcome before making a firm recommendation.

Author and Disclosure Information

Daniel L. Sontheimer, MD
Adrienne Z. Ables, PharmD
Spartanburg Family Medicine Residency Program South Carolina
[email protected]

Issue
The Journal of Family Practice - 50(03)
Page Number
203

Do patients with local reactions to allergy shots require dosage reductions for subsequent injections?

BACKGROUND: Many physicians reduce the dose of allergen immunotherapy when patients have significant local reactions to their allergy shots, believing that these patients are at higher risk for systemic reactions. This practice persists despite a World Health Organization position paper on allergen immunotherapy stating that local reactions are not predictive of subsequent systemic reactions.

POPULATION STUDIED: This study was conducted at a single-site Air Force allergy clinic. During the 18-month study period 12,926 allergy shots were given. No further demographic details were provided.

STUDY DESIGN AND VALIDITY: This nonconcurrent cohort study compared reaction rates to allergy shots for the 9 months before an intervention (October 1996 to June 1997) with reaction rates for the 9 months after it (October 1997 to June 1998). The first group (8076 injections) had the allergy shot dose reduced after an immediate local reaction 20 mm or larger or any localized swelling that persisted more than 12 hours. The second group (4850 injections) had no dose reduction for local reactions unless the reaction was larger than the patient’s hand (8-10 cm in an adult) or caused significant discomfort. In most respects the study groups can be considered similar; in many instances the same subject was probably included in both groups, because most patients receive allergy shots for several years and would have been captured in both 9-month study periods. Potential differences between the groups arise from selection bias and losses to follow-up. The first 9-month period included 8076 injections, while the second included only 4850; the authors attribute this to difficulty obtaining extract during the second period, which delayed the initiation of immunotherapy for some patients. Because allergy shots can be grouped into 2 phases (build-up and maintenance), and traditional teaching holds that reactions are less common during maintenance, the second group may have had a higher proportion of the less risky maintenance injections. Follow-up of both groups was by review of clinic records, of which 74% were located for the first group and 78% for the second. Bias could be introduced if the patients lost to follow-up differed significantly from the rest.

OUTCOMES MEASURED: Systemic reaction rates during the 2 periods were determined. Among those with a systemic reaction, the number of times a local reaction immediately preceded the systemic reaction and the total number of previous local reactions were also determined.

RESULTS: Systemic reaction rates were not statistically different between the two 9-month periods (0.8% before vs 1.0% after; P=.24). The frequency with which a local reaction immediately preceded a systemic reaction was also similar (18.8% before vs 10.5% after; P=.37), as was the total local reaction rate among those with systemic reactions (7.3% before vs 4.7% after; P=.07). The calculated sensitivity of a local reaction for predicting a systemic reaction at the next dose was 15%, with a positive predictive value of 17%.
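
For readers who want the formulas behind the 15% sensitivity and 17% positive predictive value, a minimal sketch follows. The cell counts below are hypothetical placeholders chosen only to reproduce the reported values, since the study's 2x2 table is not given here.

```python
# Sensitivity and positive predictive value (PPV) from a 2x2 table.
# The TP/FN/FP counts are HYPOTHETICAL, chosen only to reproduce the
# reported 15% sensitivity and 17% PPV; the study's actual cell
# counts are not available in this summary.
def sensitivity(tp: int, fn: int) -> float:
    return tp / (tp + fn)

def ppv(tp: int, fp: int) -> float:
    return tp / (tp + fp)

tp, fn, fp = 15, 85, 73
print(f"sensitivity = {sensitivity(tp, fn):.0%}")  # 15%
print(f"PPV = {ppv(tp, fp):.0%}")                  # 17%
```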

RECOMMENDATIONS FOR CLINICAL PRACTICE

This study supports recommendations that an allergy shot dosage reduction is not needed after a local reaction to the previous dose unless the reaction is larger than 8 cm. There were no significant differences in the rate of systemic reactions between those who had their dose reduced because of a local reaction and those who did not, and a local reaction after an allergy shot is a poor predictor of subsequent systemic reactions. A no-adjustment policy should get patients to their maintenance dose more quickly and may reduce dosing errors in patients receiving 2 or more vaccines: typical dose-adjustment policies prompt reduction of just one of the vaccines, after which the patient is on dissimilar doses, with a higher potential for dosing error.

Author and Disclosure Information

Scott Kinkade, MD, CPT, MC
US Army, Darnall Family Practice Residency, Fort Hood, Texas
[email protected]

Issue
The Journal of Family Practice - 50(03)
Page Number
202

What clinical features are useful in diagnosing strep throat?

BACKGROUND: Sore throat is a common complaint whose causes include viruses and group A β-hemolytic streptococcus. Untreated pharyngitis due to the latter organism can lead to serious sequelae, such as peritonsillar abscess and rheumatic fever, so a quick and accurate diagnosis is important. Despite the relatively high prevalence of group A streptococcus among patients with pharyngitis, performing a diagnostic laboratory test in every case is both impractical and costly. Identifying clinical correlates of strep throat would therefore be useful.

POPULATION STUDIED: Adult and pediatric outpatients presenting with sore throat were studied.

STUDY DESIGN AND VALIDITY: The authors performed a thorough MEDLINE search to identify and systematically review studies of the diagnosis of group A β-hemolytic streptococcal pharyngitis in patients presenting with sore throat. Unpublished data were not sought. Initially 917 articles were identified, of which 9 met level I evidence criteria (large blinded prospective studies using throat culture as the reference standard). Pairs of authors reviewed each study, and discrepancies were resolved by discussion. The authors then pooled data to calculate the sensitivity, specificity, positive likelihood ratio (LR+), and negative likelihood ratio (LR-) of various history and physical examination elements. Only studies with 300 or more subjects were included in the pooled analysis; 8 studies (1182 patients) were excluded on the basis of this threshold.

OUTCOMES MEASURED: Primary outcomes measured included the sensitivity, specificity, LR+, and LR- of different clinical features.

RESULTS: The presence of tonsillar exudate, pharyngeal exudate, or a history of streptococcus exposure in the previous 2 weeks was most useful in predicting streptococcal pharyngitis (LR+ = 3.4, 2.1, and 1.9, respectively). The absence of tender anterior cervical lymph nodes, tonsillar enlargement, or tonsillar or pharyngeal exudate was most useful in ruling out strep throat (LR- = 0.60, 0.63, and 0.74, respectively).
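
Likelihood ratios are applied by converting probability to odds, multiplying by the LR, and converting back. The sketch below assumes a 10% pre-test probability of strep throat purely for illustration; that figure is not from the review.

```python
# Post-test probability from a likelihood ratio (LR).
# The 10% pre-test probability is an assumed example value, not a
# figure reported by the review.
def post_test_probability(pretest: float, lr: float) -> float:
    pretest_odds = pretest / (1 - pretest)
    posttest_odds = pretest_odds * lr
    return posttest_odds / (1 + posttest_odds)

pretest = 0.10
print(f"{post_test_probability(pretest, 3.4):.0%}")   # tonsillar exudate present: ~27%
print(f"{post_test_probability(pretest, 0.60):.0%}")  # tender nodes absent: ~6%
```

Even the strongest single finding (LR+ = 3.4) moves a 10% pre-test probability only to about 27%, which illustrates why no single finding rules strep throat in or out.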

RECOMMENDATIONS FOR CLINICAL PRACTICE

No single element of the history or physical examination can effectively rule in or rule out strep throat. However, clinical prediction rules that combine symptoms and signs (such as the presence of tonsillar or pharyngeal exudate and a history of exposure to streptococcus) can help guide diagnostic testing and treatment decisions when patients present with sore throat in the outpatient setting. No recommendations were made regarding the probability thresholds at which to treat.

Author and Disclosure Information

Carolyn A. Eaton, MD
UPMC-St. Margaret, Pittsburgh, Pennsylvania
[email protected]

Issue
The Journal of Family Practice - 50(03)
Page Number
201

Is cilostazol more effective than pentoxifylline in the treatment of symptoms of intermittent claudication?

BACKGROUND: Pentoxifylline and cilostazol are the only 2 prescription drugs labeled for treatment of intermittent claudication. No previous studies had compared the relative benefit of these agents. The only other therapy demonstrated to be effective is active exercise intervention, usually in the form of a walking program.

POPULATION STUDIED: The investigators enrolled 698 patients with stable moderate to severe symptoms of intermittent claudication and confirmed peripheral vascular disease. Symptoms had been present for at least 6 months without substantial change in the preceding 3 months. Peripheral vascular disease was confirmed either by a resting ankle/brachial index of 0.90 or lower plus a decrease of 10 mm Hg or more in ankle pressure measured 1 minute after walking to maximal walking distance, or by a decrease of 20 mm Hg or more in postexercise ankle pressure in the symptomatic extremity. To qualify for inclusion, patients needed a baseline pain-free walking distance between 53.5 meters (1 minute on the treadmill protocol) and 537.7 meters (10 minutes). The subjects were 76% men, with an average age of 66 years. Patients with critical lower extremity ischemia, arterial reconstruction, or sympathectomy within the previous 3 months were ineligible, as were patients whose exercise capacity was limited by conditions other than intermittent claudication. The 3 groups were similar in age, sex, race, smoking status, diabetes, hypercholesterolemia, hypertension, and baseline disease severity.

STUDY DESIGN AND VALIDITY: The study was a double-blind multicenter trial with patients randomized using concealed allocation to receive either cilostazol (100 mg orally twice daily), pentoxifylline (400 mg orally 3 times daily), or placebo for a 24-week period. No specific counseling about diet, smoking cessation, or exercise was offered to the patients during the study period. At baseline and every 4 weeks afterward, study participants underwent evaluation including medical history, physical examination, treadmill testing, Doppler limb pressure measurements, and assessment of adverse events.

OUTCOMES MEASURED: The primary study end point was a comparison of the relative effects of cilostazol and pentoxifylline on walking ability, as measured by maximal walking distance on a standardized treadmill test. Secondary end points included pain-free walking distance and resting Doppler limb pressures. Perception of functional ability and quality of life was measured with the Medical Outcomes Scale Short Form-36 (SF-36) and the Walking Impairment Questionnaire. In addition, physicians and patients were asked for their subjective assessment of benefit at the end of treatment.

RESULTS: By intention-to-treat analysis using the treadmill protocol, cilostazol increased maximal walking distance by 54% over baseline (average=107 m), compared with a 30% increase with pentoxifylline (P <.001) and a 34% increase with placebo (P <.001). The improvement with pentoxifylline was similar to that in the placebo group (average=65 m). Walking distances increased progressively in all 3 groups and did not plateau within the 24-week study. Side effects, including headache, diarrhea, and abnormal stools, occurred more commonly in the cilostazol group, yet withdrawal rates were similar between the 2 active treatment groups (16%-19%). Scores on the SF-36 and the Walking Impairment Questionnaire revealed no significant differences in general health perception or patient-reported walking distance. In subjective assessments, 51% of the cilostazol group judged their outcome to be successful, compared with 39% of the pentoxifylline group (P=.004) and 34% of the placebo group (P <.001).
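
Because the comparison groups also improved, the placebo-adjusted gain is the more informative figure. A small sketch, treating the 65 m average quoted above as the approximate gain in the pentoxifylline and placebo groups (an assumption; the report gives a single figure for that level of improvement):

```python
# Placebo-adjusted improvement in maximal walking distance (meters),
# treating the 65 m figure quoted above as the approximate average
# gain in the pentoxifylline and placebo groups (an assumption).
gain_cilostazol_m = 107
gain_comparison_m = 65

net_benefit_m = gain_cilostazol_m - gain_comparison_m
print(f"Net benefit over placebo: {net_benefit_m} m")  # ~42 m
```

A net gain on the order of 40 meters puts the statistically significant result in clinically interpretable terms.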

RECOMMENDATIONS FOR CLINICAL PRACTICE

For patients with moderate to severe intermittent claudication, cilostazol produces greater gains in walking distance than pentoxifylline. Future studies should address whether the symptomatic improvement with cilostazol continues over longer durations of therapy and whether it adds to the benefit of a directed exercise program.

Author and Disclosure Information

David Weismantel, MD
Michigan State University, East Lansing
E-mail: [email protected]

Issue
The Journal of Family Practice - 50(02)
Publications
Topics
Page Number
181
Sections
Display Headline
Is cilostazol more effective than pentoxifylline in the treatment of symptoms of intermittent claudication?