How Valid Is the “Healthy Obese” Phenotype For Older Women?

Study Overview

Objective. To determine whether having a body mass index (BMI) in the obese range (≥ 30 kg/m2) as an older adult woman is associated with changes in late-age survival and morbidity.

Design. Observational cohort study.

Setting and participants. This study relied upon data collected as part of the Women’s Health Initiative (WHI), an observational study and clinical trial focusing on the health of postmenopausal women aged 50–79 years at enrollment. For the purposes of the WHI, women were recruited from centers across the United States between 1993 and 1998 and could participate in several intervention studies (hormone replacement therapy, low-fat diet, calcium/vitamin D supplementation) or an observational study [1].

For this paper, the authors utilized data from those WHI participants who, based on their age at enrollment, could have reached age 85 years by September of 2012. The authors excluded women who did not provide follow-up health information within 18 months of their 85th birthdays or who reported mobility disabilities at their baseline data collection. This resulted in a total of 36,611 women for analysis.

There were a number of baseline measures collected on the study participants. Via written survey, participants self-reported their race and ethnicity, hormone use status, smoking status, alcohol consumption, physical activity level, depressive symptoms, and a number of demographic characteristics. Study personnel objectively measured height and weight to calculate baseline BMI and also measured waist circumference (WC, in cm).

The primary exposure measure for this study was BMI category at trial entry categorized as follows: underweight (< 18.5 kg/m2), healthy weight (18.5–24.9 kg/m2), overweight (25.0–29.9 kg/m2) or obese class I (30–34.9 kg/m2), II (35–39.9 kg/m2) or III (≥ 40 kg/m2), using standard accepted cut-points except for Asian/Pacific Islander participants, where alternative World Health Organization (WHO) cut-points were used. The WHO cut-points are slightly lower to account for usual body habitus and disease risk in that population. BMI changes over study follow-up were not included in the exposure measure for this study. WC (dichotomized around 88 cm) was also used as an exposure measure.
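
As an illustration of this exposure definition (not the authors' code), the minimal Python sketch below computes BMI from measured height and weight and assigns the standard categories listed above; the lower WHO cut-points applied to Asian/Pacific Islander participants are omitted because their exact values are not given in this summary.

```python
def bmi_category(weight_kg: float, height_m: float) -> str:
    """Assign the standard baseline BMI category used in the study.

    BMI = weight (kg) / height (m)^2. The slightly lower WHO cut-points
    used for Asian/Pacific Islander participants are not reproduced here.
    """
    bmi = weight_kg / height_m ** 2
    if bmi < 18.5:
        return "underweight"
    if bmi < 25.0:
        return "healthy weight"
    if bmi < 30.0:
        return "overweight"
    if bmi < 35.0:
        return "obese class I"
    if bmi < 40.0:
        return "obese class II"
    return "obese class III"

print(bmi_category(85.0, 1.63))  # BMI ~32 -> "obese class I"
```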

Main outcome measures. Disease-free survival status during the follow-up period. In the year in which participants would have reached their 85th birthday, they were classified according to whether they had survived. Survival status was ascertained by hospital record review, autopsy reports, death certificates and review of the National Death Index. Those who survived were sub-grouped according to type of survival into 1 of the following categories: (1) no incident disease and no mobility disability (healthy), (2) baseline disease present but no incident disease or mobility disability during follow-up (prevalent disease), (3) incident disease but no mobility disability during follow-up (incident disease), and (4) incident mobility disability with or without incident disease (disabled).
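
The five-level outcome described above amounts to a simple decision rule; the sketch below spells it out with hypothetical boolean flags for each participant (an illustration under stated assumptions, not the study's actual classification code).

```python
def survival_category(survived_to_85: bool,
                      prevalent_disease_at_baseline: bool,
                      incident_disease: bool,
                      incident_mobility_disability: bool) -> str:
    """Map follow-up status to one of the 5 outcome levels defined above."""
    if not survived_to_85:
        return "died"
    if incident_mobility_disability:
        return "disabled"            # with or without incident disease
    if incident_disease:
        return "incident disease"
    if prevalent_disease_at_baseline:
        return "prevalent disease"
    return "healthy"
```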

Diseases of interest (prevalent and incident) included coronary and cerebrovascular disease, cancer, diabetes and hip fracture—the conditions the investigators felt most increased risk of death or morbidity and mobility disability in this population of aging women. Baseline disease status was defined using self-report, but incident disease in follow-up was more rigorously defined using self-report plus medical record review, except for incident diabetes, which required only self-report of diagnosis plus report of new oral hypoglycemic or insulin use.

Because the outcome of interest (survival status) had 5 possible categories, multinomial logistic regression was used as the analytic technique, with baseline BMI category and WC categories as predictors. The authors adjusted for baseline characteristics including age, race/ethnicity, study arm (intervention or observational for WHI), educational level, marital status, smoking status, ethanol use, self-reported physical activity and depression symptoms. Because of the possibly interrelated predictors (BMI and WC), the authors built BMI models with and without WC, and when WC was the primary predictor they adjusted for a participant’s BMI in order to try to isolate the impact of central adiposity. Additionally, they performed the analyses stratified by race and ethnicity as well as by smoking status.
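
For readers who want to see what such a model looks like in practice, here is a minimal sketch of a multinomial logistic regression of the five-level outcome on the baseline exposure and covariates, using Python and statsmodels. The file name, column names, and reference-level handling are hypothetical; this is not the authors' analysis code.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical analytic file: one row per participant with the 5-level
# outcome plus the baseline exposure and covariates named in the text.
df = pd.read_csv("whi_analytic_file.csv")

# Code the 5-level outcome as integers for MNLogit.
df["outcome_code"] = df["outcome"].astype("category").cat.codes

# Expand categorical predictors into indicator columns (one level of each
# is dropped as the reference group) and append continuous covariates.
X = pd.get_dummies(
    df[["bmi_category", "race_ethnicity", "study_arm", "education",
        "marital_status", "smoking_status", "alcohol_use"]],
    drop_first=True,
).join(df[["age", "physical_activity", "depression_score"]])
X = sm.add_constant(X).astype(float)

result = sm.MNLogit(df["outcome_code"], X).fit()
print(result.summary())
```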

Results. The mean (SD) baseline age of participants was 72.4 (3) years, and the vast majority (88.5%) self-identified as non-Hispanic white. At the end of the follow-up period, of the initial 36,611 participants, 9079 (24.8%) had died, 6702 (18.3%) had become disabled, 8512 (23.2%) had developed incident disease without disability, 5366 (14.6%) had prevalent but no incident disease, and 6952 (18.9%) were categorized as healthy. There were a number of potentially confounding baseline characteristics that differed between the survival categories. Importantly, race was associated with survival status—non-Hispanic white women were more likely to be in the “healthy” category at follow-up than their counterparts from other races/ethnicities. Baseline smokers and those with less than a high school education were both more likely not to live to 85 years.

In models adjusting for baseline covariates, with BMI category as the primary predictor, women with an obese baseline BMI had significantly increased odds of not living to 85 years of age, relative to women in a healthy baseline BMI category, with increasing odds of death among those with higher baseline BMI levels (class I obesity odds ratio [OR] 1.72 [95% CI 1.55–1.92], class II obesity OR 3.28 [95% CI 2.69–4.01], class III obesity OR 3.48 [95% CI 2.52–4.80]). Amongst survivors, baseline obesity was also associated with greater odds of developing incident disease, relative to healthy weight women (class I obesity OR 1.65 [95% CI 1.48–1.84], class II obesity OR 2.44 [95% CI 2.02–2.96], class III obesity OR 1.73 [95% CI 1.21–2.46]). There was a striking relationship between baseline obesity and the odds of incident disability during follow-up (class I obesity OR 3.22 [95% CI 2.87–3.61], class II obesity OR 6.62 [95% CI 5.41–8.09], class III obesity OR 6.65 [95% CI 4.80–9.21]).
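
The odds ratios and confidence intervals quoted above are the exponentiated log-odds coefficients from such a model; a small sketch of that conversion follows. The coefficient and standard error shown are illustrative values chosen to roughly reproduce the class I obesity estimate, not numbers taken from the paper.

```python
import math

def odds_ratio_ci(beta: float, se: float, z: float = 1.96):
    """Convert a log-odds coefficient and its standard error into an
    odds ratio with a Wald-type 95% confidence interval."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# Illustrative values only: beta ~0.54 with SE ~0.055 gives roughly
# OR 1.72 (95% CI 1.55-1.91), close to the class I obesity estimate
# for death before age 85 reported above.
or_, lo, hi = odds_ratio_ci(0.542, 0.0546)
print(f"OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```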

Women who were overweight at baseline also displayed statistically significant but more modestly increased odds of incident disease, mobility disability, and death relative to their normal-weight counterparts. Importantly, even in multivariable models, being underweight at baseline was also associated with significantly increased odds of death before age 85 relative to healthy weight individuals (OR 2.09 [95% CI 1.54–2.85]) but not with increased odds of incident disease or disability.

When WC status was adjusted for in the “BMI-outcome” models, the odds of death, disability, and incident disease were attenuated for obese women but remained elevated, particularly for women with class II or III obesity. When WC was examined as a primary predictor in multivariable models (adjusted for BMI category), those women with baseline WC ≥ 88 cm experienced increased odds of incident disease (OR 1.47 [95% CI 1.33–1.62]), mobility disability (OR 1.64 [95% CI 1.49–1.84]) and death (OR 1.83 [95% CI 1.66–2.03]) compared to women with smaller baseline WC.

When participants were stratified by race/ethnicity, the relationships for increasing odds of incident disease/disability with baseline obesity persisted for non-Hispanic white and black/African-American participants. Hispanic/Latina participants who were obese at baseline, however, did not have significantly increased odds of death before 85 years relative to healthy weight counterparts, although there were far fewer of these women represented in the cohort (n = 600). Asian/Pacific Islander (API) participants (n = 781), the majority of whom were in the healthy weight range at baseline (57%), showed a somewhat different pattern. Odds ratios for incident disease and death among obese API women were not significantly elevated relative to healthy weight women (although the numbers in these groups were relatively small); however, the odds of incident disability were significantly elevated among API women who were obese at baseline (OR 4.95 [95% CI 1.51–16.23]).

Conclusion. Compared to older women with a healthy BMI, obese women and those with increased abdominal circumference had a lower chance of surviving to age 85 years. Those who did survive were more likely to develop incident disease and/or disability than their healthy weight counterparts.

Commentary

The prevalence of obesity has risen substantially over the past several decades, and few demographic groups have found themselves spared from the epidemic [2]. Although much focus is placed on obesity incidence and prevalence among children and young adults, adults over age 60, a growing segment of the US population, are heavily impacted by the rising rates of obesity as well, with 42% of women and 37% of men in this group characterized as obese in 2010 [2]. This trend has potentially major implications for policy makers who are tasked with cutting the cost of programs such as Medicare.

Obesity has only recently been recognized as a disease by the American Medical Association, and yet it has long been associated with costly and debilitating chronic conditions such as type 2 diabetes, hypertension, sleep apnea, and degenerative joint disease [3]. Despite this fact, several epidemiologic studies have suggested an “obesity paradox”—older adults who are mildly obese have mortality rates similar to normal weight adults, and those who are overweight appear to have lower mortality [4]. These papers have generated controversy among obesity researchers and epidemiologists who have grappled with the following question: How is it possible that overweight and obesity, while clearly linked to so many chronic conditions that increase mortality and morbidity, might be a good thing? Is there such a thing as a “healthy level of obesity,” or can you be “fit and fat”? In the midst of these discussions and the media storm that inevitably surrounds them, patients are confronted with confusing mixed messages, possibly making them less likely to attempt to maintain a healthy body weight. Unfortunately, as many prior authors have asserted, most of the epidemiologic studies that report this protective effect of overweight and obesity have not accounted for potentially important confounders of the “weight category–mortality” relationship, such as smoking status [5]. Among older adults, a substantial fraction of those in the normal weight category are at a so-called healthy BMI for very unhealthy reasons, such as cigarette smoking, cancer, or other chronic conditions (ie, they were heavier but lost weight due to underlying illness). Including these sick (but so-called “healthy weight”) people alongside those who are truly healthy and in a healthy BMI range muddies the picture and does not effectively isolate the impact of weight status on morbidity and mortality.

This cohort study by Rillamas-Sun et al makes an important contribution to the discussion by relying on a very large and comprehensive dataset, with an impressive follow-up period of nearly 2 decades, to more fully isolate the relationship between BMI category and survival for postmenopausal women. By adjusting for important potential confounders such as baseline smoking status, alcohol use, chronic disease status and a number of sociodemographic factors, and by separating out the chronically ill patients from the beginning, the investigators reached conclusions that seem to align better with all that we know about the increased health risks conferred by obesity. They found that postmenopausal women who were obese but without prevalent disease at baseline had increased odds of death before age 85, as well as increased odds of incident chronic disease (such as cardiovascular disease or diabetes) and increased odds of incident disability relative to postmenopausal women starting out in a healthy BMI range. Degree of obesity seemed to matter as well; those with class II and III obesity had significantly increased odds of developing mobility impairment, in particular, relative to normal weight women. This is particularly important when viewed through the lens of caring for an aging population—those who have significant mobility impairment will have a much harder time caring for themselves as they age. Furthermore, they found that overweight women also faced slightly increased odds of these outcomes relative to normal weight women. Abdominal adiposity, in particular, appeared to confer risk of death and disease, as elevated odds of mortality and incident disease or disability persisted in women with waist circumference ≥ 88 cm even after adjusting for BMI. As has been suggested by prior research on this topic, this study also supported the finding that being underweight increases one’s odds of death; however, there was no increased incidence of disease or mobility disability for underweight women (relative to healthy starting weight).

The authors of the study made a wise decision in separating women with baseline chronic illness from those who had not yet been diagnosed with diabetes, cardiovascular disease or other chronic condition at baseline. As is pointed out in an editorial accompanying this study [6], this creates a scenario where the exposure (obesity) clearly predates the outcome (chronic illness), helping to avoid contamination of risk estimates by reverse causation (ie, is chronic illness leading to increased obesity, with the downstream increase in mortality actually due to the chronic illness?).

Despite the clear strengths of the study, there are several important limitations that must be acknowledged in interpreting the results. The most obvious is that BMI status was only measured at baseline. There is no way of knowing either what a participant’s weight trajectory had been in their younger years, or what happened to the BMI during the study follow-up period, both of which could certainly impact a participant’s risk of morbidity or mortality. Given a follow-up period of nearly 20 years, it is possible that there was crossover between BMI (exposure) categories after baseline assignment. Furthermore, the study does not address the very important question of how an intervention to promote weight loss in older women might impact morbidity and mortality—it is possible that encouraging weight loss in this population may in fact worsen health outcomes for some patients [6].

The generalizability of the study may be somewhat limited. The study population itself represented a group of women who were likely relatively healthy and motivated, having self-selected to participate in the WHI; thus, they could have been healthier than groups studied in previous population-based samples. Furthermore, the study results may not generalize to men; however, other similar cohort studies with male participants have reached similar conclusions [7].

Applications for Clinical Practice

To promote longevity and maintenance of independence in our growing population of postmenopausal women, it is important that physicians continue to educate and assist their patients in maintaining a healthy weight as they age. Although the impact of intentional weight loss in obese older women is not addressed by this paper, it does support the idea that obese postmenopausal women are at higher risk of death before age 85 years and of disability. Therefore, for these patients, physicians should take particular care to reinforce healthy lifestyle choices such as good nutrition and regular physical activity.

—Kristina Lewis, MD, MPH

References

1. Design of the Women’s Health Initiative clinical trial and observational study. The Women’s Health Initiative Study Group. Control Clin Trials 1998;19:61–109.

2. Flegal KM, Carroll MD, Kit BK, Ogden CL. Prevalence of obesity and trends in the distribution of body mass index among US adults, 1999-2010. JAMA 2012;307:491–7.

3. Must A, Spadano J, Coakley EH, et al. The disease burden associated with overweight and obesity. JAMA 1999;282:1523–9.

4. Flegal KM, Kit BK, Orpana H, Graubard BI. Association of all-cause mortality with overweight and obesity using standard body mass index categories: a systematic review and meta-analysis. JAMA 2013;309:71–82.

5. Jackson CL, Stampfer MJ. Maintaining a healthy body weight is paramount. JAMA Intern Med 2014;174:23–4.

6. Dixon JB, Egger GJ, Finkelstein EA, et al. ‘Obesity Paradox’ misunderstands the biology of optimal weight throughout the life cycle. Int J Obesity 2014.

7. Reed DM, Foley DJ, White LR, et al. Predictors of healthy aging in men with high life expectancies. Am J Public Health 1998;88:1463–8.

Journal of Clinical Outcomes Management, June 2014, Vol. 21, No. 6


Long-Term Outcomes of Bariatric Surgery in Obese Adults

Study Overview

Objective. To identify the long-term outcomes of bariatric surgery in adults with severe obesity.

Design. Prospective longitudinal observational cohort study (the Longitudinal Assessment of Bariatric Surgery Consortium [LABS]). LABS was established to collect long-term data on safety and efficacy of bariatric surgeries.

Participants and setting. 2458 patients who underwent Roux-en-Y gastric bypass (RYGB) or laparoscopic adjustable gastric banding (LAGB) at 10 hospitals in 6 clinical centers in the United States. Participants were included if they had a body mass index (BMI) greater than 35 kg/m2, were over the age of 18 years, and had not undergone prior bariatric surgery. Participants were recruited between 2006 and 2009, and follow-up continued until September 2012. Data collection occurred at baseline prior to surgery and then at 6 months, 12 months, and annually until 3 years following surgery.

Main outcome measures. 3-year change in weight and resolution of diabetes, hypertension, and dyslipidemia.

Main results. Participants were between the ages of 18 and 78 years. The majority of participants were female (79%) and white (86%). Median BMI was 45.9 (interquartile range [IQR], 41.7–51.5). At baseline, 774 (33%) had diabetes, 1252 (63%) had dyslipidemia, and 1601 (68%) had hypertension. Three years after surgery, the RYGB group exhibited greater weight loss than the LAGB group (median 41 kg vs. 20 kg). Participants experienced most of their total weight loss during the first year following surgery. As for the health parameters assessed, at 3 years 67.5% of RYGB patients and 28.6% of LAGB patients had at least partial diabetes remission, 61.9% of RYGB patients and 27.1% of LAGB patients had dyslipidemia remission, and 38.2% of RYGB patients and 17.4% of LAGB patients had hypertension remission.

Conclusion. Three years following bariatric surgery, participants with severe obesity exhibited significant weight loss. There was variability in the amount of weight loss and in resolution of diabetes, hypertension and dyslipidemia observed.

Commentary

Obesity in the United States increased threefold between 1950 and 2000 [1]. Currently, more than one-third of adult Americans are obese [2]. The relationship between obesity and risk for morbidity from type 2 diabetes, hypertension, stroke, sleep apnea, osteoarthritis, and several cancers is well documented [3]. Finkelstein et al [4] estimated that health care costs related to obesity and consequent morbidity were approximately $148 billion in 2008. The use of bariatric surgery to address obesity has grown in recent years. However, there is a dearth of knowledge regarding the long-term outcomes of these procedures.

In this study of RYGB and LAGB patients, 5 weight change patterns were identified in each group for a total of 10 trajectories. Although most weight loss was observed during the first year following surgery, 76% of RYGB patients had continued weight loss for 2 years with a small weight increase the subsequent year. Only 4% of LAGB patients experienced consistent weight loss after 3 years. Overall, participants who underwent LAGB had greater variability in outcomes than RYGB patients. RYGB patients experienced greater remission of all chronic conditions examined and fewer new diagnoses of hypertension and dyslipidemia. The RYGB group experienced 3 deaths occurring within 30 days post-surgery while the LAGB group had none.

This study has several strengths, including its longitudinal design and the generalizability of study findings. Several factors contribute to the generalizability, including the large sample size (n = 2458), which includes participants from 10 hospitals in 6 clinical centers and was more diverse than prior longitudinal studies of patients following bariatric surgery. In addition, the study had clear inclusion criteria, and attrition rates were low; data were collected for 79% and 85% of the RYGB and LAGB patients, respectively. Additionally, study personnel were trained on data collection, which occurred at several time-points.

There are also a few limitations, including that researchers used several methods for collecting data on associated physical and physiologic indicators. Most weights were collected using a standardized scale; however, weights recorded on other scales and self-reported weights were collected if an in-person weight was not obtained. Similarly, different measures were used to identify chronic conditions. Diabetes was identified by 3 different measures: taking a diabetes medication, glycated hemoglobin of 6.5% or greater, and fasting plasma glucose of 126 mg/dL or greater. Hypertension was defined as taking an antihypertensive medication, an elevated systolic blood pressure (≥ 140 mm Hg), or an elevated diastolic blood pressure (≥ 90 mm Hg). Likewise, high low-density lipoprotein (≥ 160 mg/dL) and taking a lipid-lowering medication were used as indicators of hyperlipidemia. Therefore, chronic conditions were not identified or measured in a uniform manner. Accordingly, the authors observed high variability in remission rates among participants in the LAGB group, which may be directly attributed to the inconsistencies in identification of disease status. Although the sample is identified as diverse compared with similar studies, it primarily consisted of white females.
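
To illustrate how the definitions above translate into classification rules, here is a minimal sketch using only the thresholds stated in this summary; the function and field names are hypothetical and this is not the LABS investigators' code.

```python
def has_diabetes(on_diabetes_med: bool, hba1c_pct: float,
                 fasting_glucose_mgdl: float) -> bool:
    # Diabetes medication, HbA1c >= 6.5%, or fasting glucose >= 126 mg/dL
    return on_diabetes_med or hba1c_pct >= 6.5 or fasting_glucose_mgdl >= 126

def has_hypertension(on_antihypertensive: bool, systolic_mmhg: float,
                     diastolic_mmhg: float) -> bool:
    # Antihypertensive medication, systolic >= 140 mm Hg, or diastolic >= 90 mm Hg
    return on_antihypertensive or systolic_mmhg >= 140 or diastolic_mmhg >= 90

def has_hyperlipidemia(on_lipid_med: bool, ldl_mgdl: float) -> bool:
    # Lipid-lowering medication or LDL >= 160 mg/dL
    return on_lipid_med or ldl_mgdl >= 160
```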

A notable finding was that non-white and younger participants had more missing data because they were less likely to return for follow-up visits. In addition, large between-patient differences in weight loss were noted. The authors assert that both findings suggest more education and support are needed to promote lasting adherence in some subgroups of patients undergoing bariatric surgery. Further evaluation of the factors that contribute to these differences in weight loss is also needed.

Applications for Clinical Practice

This study is relevant to practitioners caring for patients with multiple chronic conditions related to severe obesity. The results indicate that bariatric surgery is associated with significant weight loss and remission of several chronic conditions. Practitioners can use this evidence to inform patients about the safety and efficacy of bariatric surgery and to discuss its longer-term outcomes. As obesity rates continue to increase, it is important to understand the long-term benefits and risks of bariatric surgery.

—Billy A. Caceres, MSN, RN, and Allison Squires, PhD, RN

References

1. Picot J, Jones J, Colquitt JL, et al. The clinical effectiveness and cost-effectiveness of bariatric (weight loss) surgery for obesity: a systematic review and economic evaluation. Health Technol Assess 2009;13:1–190, 215–357.

2. Ogden CL, Carroll MD, Kit BK, et al. Prevalence of childhood and adult obesity in the United States, 2011-2012. JAMA 2014;311:806–14.

3. National Institutes of Health. Clinical guidelines on the identification, evaluation, and treatment of overweight and obesity in adults 1998. Available at www.nhlbi.nih.gov/guidelines/obesity/ob_gdlns.pdf.

4. Finkelstein EA, Trogdon JG, Cohen JW, et al. Annual medical spending attributable to obesity: Payer-and service-specific estimates. Health Aff 2009;28:822–31.


Light Intensity Physical Activity May Reduce Risk of Disability Among Adults with or At Risk For Knee Osteoarthritis

Article Type
Changed
Tue, 03/06/2018 - 15:51
Display Headline
Light Intensity Physical Activity May Reduce Risk of Disability Among Adults with or At Risk For Knee Osteoarthritis

Study Overview

Objective. To determine if time spent in light intensity physical activity is related to incident disability and disability progression.

Design. Prospective cohort study.

Setting and participants. This study used a subcohort from the Osteoarthritis Initiative, a longitudinal study that enrolled 4796 men and women aged 45 to 79 years with, or at high risk of developing, knee osteoarthritis. Inclusion criteria for the main cohort study were: (1) presence of osteoarthritis with symptoms in at least 1 knee (with a definite tibiofemoral osteophyte) and pain, aching, or stiffness on most days for at least 1 month during the previous 12 months; or (2) presence of at least 1 of a set of established risk factors for knee osteoarthritis: knee symptoms in the previous 12 months; overweight; knee injury causing difficulty walking for at least a week; history of knee surgery; family history of a total knee replacement for osteoarthritis; Heberden’s nodes; repetitive knee bending at work or outside work; and age 70–79 years. The subcohort for the current study was drawn from the 2127 participants enrolled in an accelerometer-monitoring substudy conducted between September 2008 and December 2012 at 4 sites (Baltimore; Pittsburgh; Columbus, Ohio; and Pawtucket, Rhode Island) and included those without disability at study onset. Exclusion criteria were insufficient baseline accelerometer monitoring, incomplete outcome or covariate data, death, and loss to follow-up. A total of 1680 participants were included in the main analysis, and an additional 134 participants with baseline mild or moderate disability (for a total of 1814) were included in a secondary analysis.

Main outcome measure. Disability at the 2-year follow-up visit among those without disability at baseline. Disability was ascertained using a set of questions asking whether participants had any difficulty performing each basic or instrumental activity of daily living because of a health or memory problem. Basic activities included walking across a room, dressing, bathing, eating, using the toilet, and bed transfer. Instrumental activities of daily living included preparing hot meals, grocery shopping, making telephone calls, taking drugs, and managing money. Disability levels were defined as none, mild (instrumental activity limitations only), moderate (1–2 basic activity limitations), and severe (more than 2 basic activity limitations).
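As a brief illustration of the categorization rule described above (a minimal sketch; the study used questionnaire responses, and the counting logic here is inferred from the published definitions), the classification can be expressed as a simple function of the number of reported limitations.

def disability_level(n_basic_limitations, n_instrumental_limitations):
    # Severe: more than 2 basic (ADL) limitations
    if n_basic_limitations > 2:
        return "severe"
    # Moderate: 1-2 basic limitations
    if n_basic_limitations >= 1:
        return "moderate"
    # Mild: instrumental (IADL) limitations only
    if n_instrumental_limitations >= 1:
        return "mild"
    return "none"

print(disability_level(0, 2))  # mild
print(disability_level(3, 1))  # severe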

Statistical analysis. The main predictor variable was physical activity at baseline, monitored using accelerometers. Participants wore the accelerometer on a belt for 7 consecutive days, from arising in the morning until retiring, except during water activities. Participants also recorded in a daily log the time spent in water activities and cycling. Intensity thresholds were applied on a minute-by-minute basis to identify non-sedentary activity of light intensity and of moderate to vigorous intensity. The primary variable was accelerometer-assessed physical activity, measured as daily minutes spent in light or moderate-vigorous activity. Time spent was divided into quartiles; the quartile cut-points for light activity were 229, 277, and 331 minutes, and the cut-points for moderate-vigorous activity were 4.3, 12.2, and 28.2 average minutes per day. Other covariates were socioeconomic factors, including race and ethnicity, age, sex, education, and income; health factors, including self-reported chronic conditions, body mass index, knee-specific health factors and symptoms, and smoking; and gait speed. The main analysis of the relationship between baseline physical activity and the development of disability used survival analysis techniques and hazard ratios. A secondary analysis using the larger cohort (n = 1814) evaluated hazard ratios for disability progression, defined as progression to a more severe disability level.
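A minimal sketch of this kind of analysis is shown below, assuming hypothetical column names and the lifelines library; the authors' actual modeling approach and covariate handling may differ. It categorizes daily light-activity minutes at the reported cut-points and estimates hazard ratios for incident disability with a Cox proportional hazards model, with the lowest quartile as the reference group.

import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical analytic file, one row per participant
df = pd.read_csv("oai_accelerometer_subcohort.csv")

# Reported quartile cut-points for light-intensity activity (minutes/day)
bins = [0, 229, 277, 331, float("inf")]
df["light_q"] = pd.cut(df["light_minutes_per_day"], bins=bins, labels=["q1", "q2", "q3", "q4"])

# Dummy-code quartiles with the lowest quartile (q1) dropped as the reference
model_df = pd.get_dummies(
    df[["years_to_event", "incident_disability", "light_q", "age", "bmi", "gait_speed"]],
    columns=["light_q"], drop_first=True,
).astype(float)

# Fit a Cox proportional hazards model; exp(coef) gives the hazard ratios
cph = CoxPHFitter()
cph.fit(model_df, duration_col="years_to_event", event_col="incident_disability")
cph.print_summary()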

Main results. In the main analysis of 1680 participants without disability at baseline, 149 participants developed new disability over the 2 years of follow-up. The average age of the cohort was 65 years, the majority (85%) were white, and approximately 54% were female. The cohort averaged 302 minutes a day of non-sedentary activity, the majority of which was light-intensity activity (284 minutes). Older age was associated with lower physical activity (P < 0.001), as were male sex (P < 0.001), higher body mass index, a number of chronic medical conditions (cancer, cerebrovascular disease, congestive heart failure), lower extremity pain, and higher grade of knee osteoarthritis severity. Onset of disability was associated with daily light-intensity activity time, even after adjusting for covariates. Using the group in the lowest quartile of light-intensity activity time as the reference, groups with higher activity levels had lower hazard ratios for onset of disability—0.64, 0.51, and 0.67 for the second, third, and highest quartiles, respectively. When quartiles were defined by daily moderate to vigorous activity time, longer duration of moderate-vigorous activity was likewise associated with delayed onset of disability. In the secondary analysis of the cohort with and without disability at baseline (n = 1814), results were similar: more time spent in light-intensity activity was associated with lower risk of disability onset or progression.

Conclusion. Greater daily time spent in light intensity physical activity was associated with lower risk of onset and progression of disability among adults with knee osteoarthritis and those with risk factors for knee osteoarthritis.

Commentary

Disability, such as the inability to dress, bathe, or manage one’s medications, is prevalent among older adults in the United States [1,2]. The development of such disability among older adults is often complex and multifactorial. One significant contributor is osteoarthritis of the knee [3]. Although prior observational studies and randomized controlled trials have established that moderate to vigorous physical activity reduces disability incidence and progression [4,5], less is known about light-intensity physical activity—activity that may be more realistically introduced for adults with symptomatic knee arthritis.

The current prospective cohort study included adults with and at risk for knee osteoarthritis; the authors found that physical activity, even of light intensity, is associated with lower risk of disability onset and progression. A major strength of the study is the objective measurement of physical activity using an accelerometer rather than reliance on recall or diaries, which are more subject to bias. Another strength is the follow-up period, which allowed examination of incident disability and disability progression over 2 years. The results suggest that even light-intensity activity is associated with reduced risk of incident disability.

It is important to note that causation cannot be inferred in this study. As the authors stated, those who can do longer periods of physical activity may be at lower risk of developing incident disability because of factors other than the physical activity itself. A different study design, such as a randomized trial, is needed to demonstrate that light intensity physical activity, when introduced to adults with or at risk for knee arthritis, may lead to reduced risk of disability.

Applications for Clinical Practice

Prior studies suggest that introducing regular exercise has significant health benefits, and the exercise recommendation for adults with knee arthritis remains unchanged. Whether introducing light-intensity activity, particularly for those unable to perform more vigorous exercise, yields similar benefits will require further studies designed to determine a therapeutic effect.

—William Hung, MD, MPH

References

1. Manton KG, Gu XL, Lamb VL. Change in chronic disability from 1982 to 2004/2005 as measured by long-term changes in function and health in the U.S. elderly population. PNAS 2006;103:18374–9.

2. Hung WW, Ross JS, Boockvar KS, Siu AL. Recent trends in chronic disease, impairment and disability among older adults in the United States. BMC Geriatrics 2011;11:47.

3. Ettinger WH, Davis MA, Neuhaus JM, Mallon KP. Long-term physical functioning in persons with knee osteoarthritis from NHANES I: Effects of comorbid medical conditions. J Clin Epidemiol 1994;47:809–15.

4. Penninx BW, Messier SP, Rejesko WJ, et al. Physical exercise and the prevention of disability in activities of daily living in older persons with osteoarthritis. Arch Intern Med 2001;161:2309–16.

5. Ettinger WH, Burns R, Messier SP, et al. A randomized trial comparing aerobic exercise and resistance exercise with a health education program in older adults with knee osteoarthritis. The Fitness Arthritis and Seniors Trial (FAST). JAMA 1997;277:25–31.


Capturing the Impact of Language Barriers on Asthma Management During an Emergency Department Visit

Article Type
Changed
Tue, 03/06/2018 - 15:47
Display Headline
Capturing the Impact of Language Barriers on Asthma Management During an Emergency Department Visit

Study Overview

Objective. To compare rates of asthma action plan use between limited English proficiency (LEP) caregivers and English proficient (EP) caregivers.

Design. Cross-sectional survey.

Participants and setting. A convenience sample of 107 Latino caregivers of children with asthma at an urban academic emergency department (ED). Surveys in the preferred language of the patient (English or Spanish, with the translated version previously validated) were distributed at the time of the ED visit. Interpreters were utilized when requested.

Main outcome measure. Caregiver use of an asthma action plan.

Main results. 51 LEP caregivers and 56 EP caregivers completed the survey. Mothers completed the surveys 87% of the time, and the average age of the patients was 4 years. Among the EP caregivers, 64% reported using an asthma action plan, while only 39% of the LEP caregivers reported using one; the difference was statistically significant (P = 0.01). In both correlation and regression analyses, English proficiency was the only variable (others included health insurance status and level of caregiver education) that showed a significant effect on asthma action plan use.
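A minimal sketch of the kind of comparison behind the reported P value is shown below, using counts back-calculated from the reported percentages; the counts are approximate and the authors' actual test may have differed.

from scipy.stats import chi2_contingency

# Approximate counts: ~64% of 56 EP caregivers and ~39% of 51 LEP caregivers
# reported using an asthma action plan
ep_users, ep_total = 36, 56
lep_users, lep_total = 20, 51

table = [
    [ep_users, ep_total - ep_users],      # EP: users, non-users
    [lep_users, lep_total - lep_users],   # LEP: users, non-users
]
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}")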

Conclusions. Children whose caregiver had LEP were significantly less likely to have and use an asthma action plan. Asthma education in the language of choice of the patient may help improve asthma care.

Commentary

With 20% of US households now speaking a language other than English at home [1], language barriers between providers and patients present multiple challenges to health services delivery and can significantly contribute to immigrant health disparities. Despite US laws and multiple federal agency policies requiring the use of interpreters during health care encounters, organizations continue to fall short of providing interpreter services and often lack adequate or equivalent materials for patient education. Too often, providers overestimate their language skills [2,3], use colleagues as ad hoc interpreters out of convenience [4], or rely on family members for interpretation [4]—a practice that is universally discouraged.

Recent research does suggest that the timing of interpreter use is critical. In planned encounters such as primary care visits, interpreters can and should be scheduled when a language-concordant provider is not available. During hospitalizations, including ED visits, interpreters are most effective when used on admission, during patient teaching, and at discharge, and the timing of interpreter use at these points has been shown to affect length of stay and readmission rates [5,6].

This study magnifies the consequences of failing to provide language-concordant services to patients and their caregivers. It also helps to identify one of the sources of pediatric asthma health disparities in Latino populations. The emphasis on the role of the caregiver in action plan utilization is a unique aspect of this study and it is one of the first to examine the issue in this way. It highlights the importance of caregivers in health system transitions and illustrates how a language barrier can potentially impact transitions.

The authors’ explicit use of a power analysis to calculate their sample size is a strength of the study. Furthermore, the authors differentiated their respondents by country of origin, something that rarely occurs in studies of Latinos [7] and that allows the reader to examine differences at a more granular level within this population. The presentation of Spanish-language quotes with their translations in the manuscript provides transparency, allowing bilingual readers to verify the accuracy of the authors’ translation.

There are, however, a number of methodological issues that should be noted. The authors acknowledge that they did not account for asthma severity in the survey or control for it in the analysis, did not assess health literacy, and did not differentiate their results by country of origin. The latter point is important because the immigration experience and demographic profiles of Latinos differ significantly by country of origin and could factor into action plan use. The description of the survey translation process also did not explain how it accounted for the well-established linguistic variation that occurs in the Spanish language. Additionally, US census data show that the main countries of origin of Latinos in the study’s service area are Puerto Rico, Ecuador, and Mexico [1], yet the survey offered Ecuador only as a write-in while listing Dominican origin as a response option—a combination that reflects the Latino demographic composition of the nearest large urban area. Thus, when collecting country-of-origin data on immigrant patients, country choices should reflect local demographics rather than national trends for maximum precision.

Another concern is that Spanish-language literacy was not assessed. Many Latino immigrants may have limited reading ability in Spanish. For Mexican immigrants in particular, Spanish may be a second language after an indigenous language; the same is true for some South American immigrants from the Andean region. Many Latino immigrants arrive in the United States with less than an 8th-grade education, often from educational systems of poor quality, which affects their Spanish-language reading and writing skills [8]. Assessing education level based on US equivalents is not an accurate way to gauge literacy. Thus, assessing reading literacy in Spanish before surveying patients would have been a useful step that could have further refined the results. These factors have implications for action plan utilization and implementation in any chronic disease.

Providers often think that language barriers are an obvious factor in health disparities and service delivery, but few studies have actually captured or quantified the effects of language barriers on health outcomes; most identify language barriers only as an access issue. This study provides a good illustration of the impact of a language barrier on a known and effective intervention for pediatric asthma management. Practitioners can readily extrapolate from the consequences illustrated here to the contribution of language barriers to health disparities on a broader scale.

Applications for Clinical Practice

Practitioners caring for patients in EDs where the patient or caregiver has a language barrier should make every effort to use appropriate interpreter services when patient teaching occurs. Assessing not only health literacy but also reading ability in the LEP patient or caregiver is important, since it will affect the dyad’s ability to implement self-care measures recommended in patient teaching sessions or to carry out an action plan. Asking patients their country of origin, regardless of legal status, will help practitioners refine patient teaching and the language they (and the interpreter, when appropriate) use to explain what needs to be done to manage the condition.

—Allison Squires, PhD, RN

References

1. Ryan C. Language use in the United States: 2011. Migration Policy Institute: Washington, DC; 2013.

2. Diamond LC, Luft HS, Chung S, Jacobs EA. “Does this doctor speak my language?” Improving the characterization of physician non-English language skills. Health Serv Res 2012;47(1 Pt 2):556–69.

3. Jacobs EA. Patient centeredness in medical encounters requiring an interpreter. Am J Med 2000;109:515.

4. Hsieh E. Understanding medical interpreters: reconceptualizing bilingual health communication. Health Commun 2006;20:177–86.

5. Karliner LS, Kim SE, Meltzer DO, Auerbach AD. Influence of language barriers on outcomes of hospital care for general medicine inpatients. J Hosp Med 2010;5:276–82.

6. Lindholm M, Hargraves JL, Ferguson WJ, Reed G. Professional language interpretation and inpatient length of stay and readmission rates. J Gen Intern Med 2012;27:1294–9.

7. Gerchow L, Tagliaferro B, Squires A, et al. Latina food patterns in the United States: a qualitative metasynthesis. Nurs Res 2014;63:182–93.

8. Sudore RL, Landefeld CS, Pérez-Stable EJ, et al. Unraveling the relationship between literacy, language proficiency, and patient-physician communication. Patient Educ Couns 2009;75:398–402.

Issue
Journal of Clinical Outcomes Management - June 2014, VOL. 21, NO. 6
Publications
Topics
Sections

Study Overview

Objective. To compare rates of asthma action plan use in limited English proficiency (LEP) caregivers compared with English proficient (EP) caregivers.

Design. Cross-sectional survey.

Participants and setting. A convenience sample of 107 Latino caregivers of children with asthma at an urban academic emergency department (ED). Surveys in the preferred language of the patient (English or Spanish, with the translated version previously validated) were distributed at the time of the ED visit. Interpreters were utilized when requested.

Main outcome measure. Caregiver use of an asthma action plan.

Main results. 51 LEP caregivers and 56 EP caregivers completed the survey. Mothers completed the surveys 87% of the time and the average age of patients was 4 years.  Among the EP caregivers, 64% reported using an asthma action plan, while only 39% of the LEP caregivers reported using one. The difference was statistally significant (P = 0.01). Through both correlations and regressions, English proficiency was the only variable (others included health insurance status and level of caregiver education) that showed a significant effect on asthma action plan use.

Conclusions. Children whose caregiver had LEP were significantly less likely to have and use an asthma action plan. Asthma education in the language of choice of the patient may help improve asthma care.

Commentary

With 20% of US households now speaking a language other than English at home [1], language barriers between providers and patients present multiple challenges to health services delivery and can significantly contribute to immigrant health disparities. Despite US laws and multiple federal agency policies requiring the use of interpreters during health care encounters, organizations continue to fall short of providing interpreter services and often lack adequate or equivalent materials for patient education. Too often, providers overestimate their language skills [2,3], use colleagues as ad hoc interpreters out of convenience [4], or rely on family members for interpretation [4]—a practice that is universally discouraged.

Recent research does suggest that the timing of interpreter use is critical. In planned encounters such as primary care visits, interpreters can and should be scheduled for visits when a language-concordant provider is not available. During hospitalizations, including ED visits, interpreters are most effective when used on admission, during patient teaching, and upon discharge, and the timing of these visits has been shown to affect length of stay and readmission rates [5,6].

This study magnifies the consequences of failing to provide language-concordant services to patients and their caregivers. It also helps to identify one of the sources of pediatric asthma health disparities in Latino populations. The emphasis on the role of the caregiver in action plan utilization is a unique aspect of this study and it is one of the first to examine the issue in this way. It highlights the importance of caregivers in health system transitions and illustrates how a language barrier can potentially impact transitions.

The authors’ explicit use of a power analysis to calculate their sample size is a strength of the study. Furthermore, the authors differentiated their respondents by country of origin, something that rarely occurs in studies of Latinos [7], and allows the reader to differentiate the impact of the intervention at a micro level within this population. The presentation of Spanish language quotes with their translations within the manuscript provides transparency for bilingual readers to verify the accuracy of the authors’ translation.

There are, however, a number of methodological issues to note. The authors acknowledge that they did not capture asthma severity in the survey or control for it in the analysis, did not assess health literacy, and did not stratify their results by country of origin. The latter point is important because the immigration experience and demographic profiles of Latinos differ significantly by country of origin and could factor into action plan use. The description of the survey translation process also did not explain how it accounted for the well-established linguistic variation within the Spanish language. Additionally, US census data show that the main countries of origin of Latinos in the study's service area are Puerto Rico, Ecuador, and Mexico [1], yet the survey offered Dominican as a response option while Ecuador could only be written in; the options presented appear to reflect the Latino demographic composition of the nearest large urban area. When collecting country of origin data on immigrant patients, country choices should reflect local demographics rather than national trends for maximum precision.

Another concern is that Spanish-language literacy was not assessed. Many Latino immigrants have limited reading ability in Spanish; for Mexican immigrants in particular, Spanish may be a second language after an indigenous language, as is also true for some South American immigrants from the Andean region. Many Latino immigrants arrive in the United States with less than an 8th-grade education, often from educational systems of poor quality, which affects their Spanish reading and writing skills [8]. Assessing education level based on US equivalents is therefore not an accurate way to gauge literacy, and assessing Spanish reading literacy before surveying patients would have been a useful step that could have further refined the results. These factors have implications for action plan utilization and implementation in any chronic disease.

Providers often assume that language barriers are an obvious factor in health disparities and service delivery, but few studies have actually captured or quantified the effects of language barriers on health outcomes; most identify language barriers only as an access issue. This study provides a good illustration of the impact of a language barrier on a known and effective intervention for pediatric asthma management, and practitioners can readily extrapolate the consequences illustrated here to health disparities on a broader scale.

Applications for Clinical Practice

Practitioners caring for patients in EDs where the patient or caregiver has a language barrier should make every effort to use appropriate interpreter services when patient teaching occurs. Assessing not only health literacy but also reading ability in the LEP patient or caregiver is important, since both affect the dyad's ability to implement the self-care measures recommended in patient teaching sessions or in an action plan. Asking patients their country of origin, regardless of legal status, will help practitioners refine patient teaching and the language they (and the interpreter, when appropriate) use to explain what needs to be done to manage the condition.

—Allison Squires, PhD, RN

References

1. Ryan C. Language use in the United States: 2011. Migration Policy Institute: Washington, DC; 2013.

2. Diamond LC, Luft HS, Chung S, Jacobs EA. “Does this doctor speak my language?” Improving the characterization of physician non-English language skills. Health Serv Res 2012;47(1 Pt 2):556–69.

3. Jacobs EA. Patient centeredness in medical encounters requiring an interpreter. Am J Med 2000;109:515.

4. Hsieh E. Understanding medical interpreters: reconceptualizing bilingual health communication. Health Commun 2006;20:177–86.

5. Karliner LS, Kim SE, Meltzer DO, Auerbach AD. Influence of language barriers on outcomes of hospital care for general medicine inpatients. J Hosp Med 2010;5:276–82.

6. Lindholm M, Hargraves JL, Ferguson WJ, Reed G. Professional language interpretation and inpatient length of stay and readmission rates. J Gen Intern Med 2012;27:1294–9.

7. Gerchow L, Tagliaferro B, Squires A, et al. Latina food patterns in the United States: a qualitative metasynthesis. Nurs Res 2014;63:182–93.

8. Sudore RL, Landefeld CS, Pérez-Stable EJ, et al. Unraveling the relationship between literacy, language proficiency, and patient-physician communication. Patient Educ Couns 2009;75:398–402.

Another Win for Veggies

Article Type
Changed
Thu, 10/19/2017 - 14:29
Display Headline
Another Win for Veggies

 

Study Overview

Objective. To determine the association between a vegetarian diet and blood pressure (BP).

Design. Systematic review and meta-analysis of controlled clinical trials and observational studies.

Setting and participants. MEDLINE was searched for English-language articles published from 1946 through October 2013 and Web of Science from 1900 through November 2013. Inclusion criteria were age > 20 years and a vegetarian diet, defined to include a vegan diet (omitting all animal products), an ovo/lacto/pesco vegetarian diet (including eggs/dairy/fish), or a semi-vegetarian diet (meat or fish rarely). Exclusion criteria included twin studies, multipronged interventions, reporting of only categorical BP, and case series. A total of 258 records were identified; 7 clinical trials and 32 observational studies met inclusion criteria. The 7 clinical trials encompassed 311 participants (median, 38; range, 11–113) with a mean age of 44.5 years (range, 38.0–54.3). All were open-label, 6 were randomized, and 6 provided food to participants. The 32 observational studies included 21,604 participants (median, 152; range, 20–9242) with a mean age of 46.6 years (range, 28.8–68.4 years). Fifteen of these studies included mixed diet types (vegan, lacto, ovolacto, pesco, and/or semivegetarian).

Main outcome measures. The primary outcome was BP. Differences in systolic BP (SBP) and diastolic BP (DBP) between groups consuming vegetarian versus comparison diets were pooled using a random-effects model; clinical trials and observational studies were analyzed separately. Funnel plots, the Egger test, and the trim-and-fill method were used to assess and correct for publication bias.
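For readers unfamiliar with the pooling step, a random-effects meta-analysis weights each study by the inverse of its within-study variance plus an estimated between-study variance. The following minimal sketch illustrates DerSimonian-Laird pooling of mean SBP differences; the effect estimates and standard errors are hypothetical placeholders, not data from the included trials.

```python
# Minimal sketch of DerSimonian-Laird random-effects pooling of mean SBP
# differences (vegetarian minus comparison diet). The effects and standard
# errors below are hypothetical placeholders, not the trial data.
from math import sqrt

effects = [-5.0, -3.2, -7.1, -4.4, -6.0]   # per-study mean differences, mm Hg
ses     = [ 2.0,  1.5,  3.0,  2.2,  2.5]   # per-study standard errors

w_fixed = [1 / se**2 for se in ses]                          # inverse-variance weights
mu_fixed = sum(w * e for w, e in zip(w_fixed, effects)) / sum(w_fixed)

# Between-study variance (tau^2) estimated from Cochran's Q
q = sum(w * (e - mu_fixed)**2 for w, e in zip(w_fixed, effects))
c = sum(w_fixed) - sum(w**2 for w in w_fixed) / sum(w_fixed)
tau2 = max(0.0, (q - (len(effects) - 1)) / c)

w_rand = [1 / (se**2 + tau2) for se in ses]                  # random-effects weights
mu_rand = sum(w * e for w, e in zip(w_rand, effects)) / sum(w_rand)
se_rand = sqrt(1 / sum(w_rand))

print(f"pooled difference = {mu_rand:.1f} mm Hg "
      f"(95% CI {mu_rand - 1.96*se_rand:.1f} to {mu_rand + 1.96*se_rand:.1f}), "
      f"tau^2 = {tau2:.2f}")
```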

Results. In clinical trials, vegetarian diets were associated with a mean SBP reduction of −4.8 mm Hg (95% confidence interval [CI], −6.6 to −3.1; P < 0.001; I2 = 0; P = 0.45 for heterogeneity) and a mean DBP reduction of −2.2 mm Hg (95% CI, −3.5 to −1.0; P < 0.001; I2 = 0; P = 0.43 for heterogeneity) compared with omnivorous diets. Observational studies showed larger reductions but significant heterogeneity: SBP −6.9 mm Hg (95% CI, −9.1 to −4.7; P < 0.001; I2 = 91.4; P < 0.001 for heterogeneity) and DBP −4.7 mm Hg (95% CI, −6.3 to −3.1; P < 0.001; I2 = 92.6; P < 0.001 for heterogeneity). This heterogeneity was best explained by proportion of men (β = −0.03; P < 0.001), baseline SBP (β = −0.13; P = 0.003), baseline DBP (β = −0.30; P < 0.001), sample size (β = 0.001; P < 0.001), and BMI (β = −0.46; P = 0.02), suggesting that vegetarian diets and lower BP are more strongly associated in men and in those with higher baseline BP and BMI.

Subgroup analyses included stratification by age, gender, BMI, diet type, sample size, diet duration, BP medication use, baseline BP, and geographic region. In subgroup analyses of the clinical trials, there were no statistically significant between-group differences and no significant heterogeneity. In the observational studies, subgroup analysis reduced heterogeneity and often effect size; for example, lower SBP was most evident in subgroups with a majority of male participants (mean SBP/DBP: −18.5/−10.1 mm Hg).

Publication bias was present for both clinical trials and observational studies. According to trim-and-fill methodology, 3 smaller clinical trials with larger BP reductions were likely missing (Egger P = 0.04); their addition shifted the mean SBP reduction from −4.8 mm Hg (−6.6 to −3.1) to −5.2 mm Hg (−6.9 to −3.5). Observational studies lacked medium-sized negative studies and were overrepresented by larger positive studies (Egger P < 0.001), although this was not confirmed by trim-and-fill (a method that performs less well under heterogeneous conditions) [1].
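The Egger test referenced here regresses each study's standardized effect on its precision; an intercept far from zero indicates funnel-plot asymmetry consistent with small-study or publication bias. The sketch below illustrates the calculation using the same hypothetical inputs as above; a formal test would also report a t statistic and P value for the intercept.

```python
# Sketch of the Egger regression test for funnel-plot asymmetry: regress each
# study's standardized effect (effect / SE) on its precision (1 / SE); a
# non-zero intercept suggests small-study (publication) bias. Inputs are the
# same hypothetical placeholders used above, not the studies in the review.
effects = [-5.0, -3.2, -7.1, -4.4, -6.0]
ses     = [ 2.0,  1.5,  3.0,  2.2,  2.5]

y = [e / s for e, s in zip(effects, ses)]   # standardized effects
x = [1 / s for s in ses]                    # precisions

n = len(x)
x_bar, y_bar = sum(x) / n, sum(y) / n
slope = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) / \
        sum((xi - x_bar) ** 2 for xi in x)
intercept = y_bar - slope * x_bar           # Egger intercept (asymmetry measure)

print(f"Egger intercept = {intercept:.2f}; values far from 0 suggest asymmetry")
```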

Conclusion. Vegetarian diets, when compared with omnivorous diets, are associated with reductions in BP.

 

Commentary

Several studies show that dietary modifications are effective in preventing and managing hypertension [2,3]. Landmark randomized trials, including the DASH [4], DASH-sodium [5], and OmniHeart [6] diets, all of which emphasize fruit and vegetable intake but are not vegetarian, have produced SBP reductions of 5.5 to 9.5 mm Hg and DBP reductions of 3.0 to 5.2 mm Hg. However, the impact of a vegetarian diet remains debated, particularly given disparate findings among randomized controlled trials (RCTs). For example, small RCTs of ovolactovegetarian diets (which include eggs and dairy products but no animal flesh) in the early and mid-1980s suggested reductions similar to the pooled SBP reduction of −4.8 mm Hg that Yokoyama et al report [7,8]. In contrast, one RCT comparing an ovolactovegetarian diet with a lean-meat diet failed to show a BP benefit [9]. Striking is the dearth of RCTs in the last 20 years to help better estimate this effect, particularly given the diet's continued recommendation in the scientific [10] and lay [11] communities. To the authors’ credit, this is the first meta-analysis and second systematic review of this important relationship [12].

A vegetarian diet likely supports BP reductions through a variety of mechanisms, most notably an abundance of potassium [13]. Potassium likely promotes vasodilation, which facilitates glomerular filtration, allowing decreased renal sodium reabsorption and decreased platelet aggregation. Other, more controversial hypotheses include decreased energy density leading to reduced BMI [14], decreased sodium intake [15], reduced blood viscosity [16], and a high polyunsaturated and low saturated fat content [17].

Strengths of this analysis include the large observational sample size, the separate analysis of clinical trials and observational studies, the lengthy search time frame, the subgroup analyses, and the adjustment for publication bias. Although the overall association was robust throughout, we agree with the authors that the large heterogeneity among observational studies, the small clinical trial sample sizes, and the variation in what “vegetarian” means throughout the world and across individual studies are all limitations. Participants in many of the observational studies could have eaten meat with unclear and undefined frequency, which may explain the heterogeneity of these studies. Conversely, the lack of heterogeneity observed in the clinical trials may reflect the fact that participants were provided meals in 6 of the 7 studies.

It was surprising that only 7 clinical trials were found. The authors searched 2 databases, but searching additional databases such as EMBASE or CINAHL might have yielded other pertinent studies. The authors also did not use a bias assessment tool, such as that proposed by the Cochrane bias methods group, which could have better discriminated high- from low-quality trials and supported useful subgroup analyses [1]. Similarly, reporting on attrition and adherence could have helped decrease heterogeneity in subgroup analyses and distinguish high-quality from low-quality studies. For example, adherence in Ferdowsian et al (vegan diet) was assessed by unannounced dietitian phone calls, which found that only 57% of participants abstained from animal products. This may have been secondary to the study’s design, in which “providing meals” meant simply making them available at the company cafeteria rather than requiring consumption of a study-specific vegetarian meal [18].

 

Applications for Clinical Practice

In this meta-analysis, vegetarian diets were associated with SBP and DBP reductions of −4.8 and −2.2 mm Hg, respectively, suggesting that providers can recommend a vegetarian diet as on par with other lifestyle changes, including a low-sodium diet, weight loss, and exercise. A vegetarian diet may also be comparable to pharmacologic therapy in the magnitude of BP change: short- and long-term pharmacologic therapy is associated with SBP/DBP reductions of −8.3/−3.8 and −5.4/−2.3 mm Hg, respectively, not altogether different from the reductions seen with vegetarian or vegetable-heavy diets [19].

Although there are barriers to a vegetarian diet, including provider attitudes [20], cost [21], poor culinary skill [22], palatability, and adherence, pharmacologic BP treatment also presents barriers: adherence to BP medications is estimated to be 50% to 70% [23], and harm due to side effects can preclude use. Thus, providers can present a vegetarian diet as a potentially effective option, depending on patient preference and ability to adhere.

—David M. Levine, MD, MA, New York University
School of Medicine, and Melanie Jay, MD, MS

References

1. Higgins JPT, Green S, editors. Cochrane handbook for systematic reviews of interventions version 5.1.0 [updated March 2011]. The Cochrane Collaboration, 2011. Available at http://handbook.cochrane.org/chapter_10/10_4_4_2_trim_and_fill.htm.

2. Eckel RH, Jakicic JM, Ard JD, et al. 2013 AHA/ACC guideline on lifestyle management to reduce cardiovascular risk: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines. Circulation 2013 Nov 12. [Epub ahead of print]

3. James PA, Oparil S, Carter BL, et al. 2014 evidence-based guideline for the management of high blood pressure in adults: report from the panel members appointed to the Eighth Joint National Committee (JNC 8). JAMA 2014;311:507–20.

4. Appel LJ, Moore TJ, Obarzanek E, et al. A clinical trial of the effects of dietary patterns on blood pressure. N Engl J Med 1997;336:1117–24.

5. Sacks FM, Svetkey LP, Vollmer WM, et al. Effects on blood pressure of reduced dietary sodium and the dietary approaches to stop hypertension (DASH) diet. N Engl J Med 2001;344:3–10.

6. Appel LJ, Sacks FM, Carey VJ, et al. Effects of protein, monounsaturated fat, and carbohydrate intake on blood pressure and serum lipids. JAMA 2005;294:2455–64.

7. Rouse IL, Beilin LJ, Armstrong BK, Vandongen R. Blood-pressure–lowering effect of a vegetarian diet: controlled trial in normotensive subjects. Lancet 1983;321:5–10.

8. Margetts BM, Beilin LJ, Vandongen R, Armstrong BK. Vegetarian diet in mild hypertension: a randomised controlled trial. BMJ 1986;293:1468–71.

9. Kestin M, Rouse IL, Correll RA, Nestel PJ. Cardiovascular disease risk factors in free-living men: comparison of two prudent diets, one based on lactoovovegetarianism and the other allowing lean meat. Am J Clin Nutr 1989;50:280–7.

10. Alpert JS. Nutritional advice for the patient with heart disease: what diet should we recommend for our patients? Circulation 2011;124:e258–e260.

11. Gordinier J. Making vegan a new normal. New York Times. 26 Sept 2012. Page D1.

12. Berkow SE, Barnard ND. Blood pressure regulation and vegetarian diets. Nutr Rev 2005;63:1–8.

13. Aburto NJ, Hanson S, Gutierrez H, et al. Effect of increased potassium intake on cardiovascular risk factors and disease: systematic review and meta-analyses. BMJ 2013;346:f1378.

14. Berkow SE, Barnard N. Vegetarian diets and weight status. Nutr Rev 2006;64:175–88.

15. Larsson CL, Johansson GK. Dietary intake and nutritional status of young vegans and omnivores in Sweden. Am J Clin Nutr 2002;76:100–6.

16. Ernst E, Pietsch L, Matrai A, Eisenberg J. Blood rheology in vegetarians. Br J Nutr 1986;56:555–60.

17. Iacono JM, Dougherty RM. Effects of polyunsaturated fats on blood pressure. Annu Rev Nutr 1993;13:243–60.

18. Ferdowsian HR, Barnard ND, Hoover VJ, et al. A multicomponent intervention reduces body weight and cardiovascular risk at a GEICO corporate site. Am J Health Promot 2010;24:384–7.

19. Brugts JJ, Ninomiya T, Boersma E, et al. The consistency of the treatment effect of an ACE-inhibitor based treatment regimen in patients with vascular disease or high risk of vascular disease: a combined analysis of individual data of ADVANCE, EUROPA, and PROGRESS trials. Eur Heart J 2009;30:1385–94.

20. Berman BM, Singh BB, Hartnoll SM, et al. Primary care physicians and complementary-alternative medicine: training, attitudes, and practice patterns. Am Board Fam Pract 1998;11:272–81.

21. Drewnowski A, Darmon N. The economics of obesity: dietary energy density and energy cost. Am J Clin Nutr 2005;82(1 Suppl):265S-273S.

22. Lea EJ, Crawford D, Worsley A. Public views of the benefits and barriers to the consumption of a plant-based diet. Eur J Clin Nutr 2006;60:828–37.

23. Schroeder K, Fahey T, Ebrahim S. How can we improve adherence to blood pressure–lowering medication in ambulatory care? Systematic review of randomized controlled trials. Arch Intern Med 2004;164:722–32.

New Cholesterol Guidelines Would Significantly Increase Statin Use If Implemented

Article Type
Changed
Thu, 03/28/2019 - 15:46
Display Headline
New Cholesterol Guidelines Would Significantly Increase Statin Use If Implemented

Pencina MJ, Navar-Boggan AM, D’Agostino RB, et al. Application of new cholesterol guidelines to a population-based sample. N Engl J Med 2014;370:1422–31.
 

Study Overview

Objective. To quantify how many people would qualify for statin treatment under the 2013 American College of Cardiology/American Heart Association (ACC/AHA) guidelines [1].

Design. Descriptive, repeated cross-sectional study examining data from the 2005–2010 National Health and Nutrition Examination Surveys (NHANES). Data on the medical diagnoses and risk factors for cardiovascular disease for NHANES participants aged 40–75 years (n = 3773) were used to extrapolate to 115.4 million US adults in the same age-range. Exclusions were for triglyceride levels > 400 mg/dL (100 participants) and missing LDL cholesterol measurement (36 participants).
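The extrapolation from roughly 3,800 sampled adults to 115.4 million relies on NHANES sampling weights, which indicate how many US adults each respondent represents. The sketch below illustrates the idea; the file and column names are hypothetical rather than the actual NHANES variable names, and the calculation ignores the survey's stratum and cluster design, which is needed for correct variance estimation.

```python
# Sketch of how population counts are extrapolated from NHANES: each
# respondent carries a sampling weight equal to the number of US adults he or
# she represents, so summing weights over the analytic sample yields the
# national estimate. Column names (age, triglycerides, ldl, weight_mec) are
# hypothetical placeholders, not actual NHANES variable names.
import pandas as pd

df = pd.read_csv("nhanes_2005_2010.csv")           # hypothetical combined file

eligible = df[df["age"].between(40, 75) &
              (df["triglycerides"] <= 400) &
              df["ldl"].notna()]

total_represented = eligible["weight_mec"].sum()   # ~115.4 million in the paper
print(f"US adults 40-75 represented: {total_represented / 1e6:.1f} million")
```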

Main outcome measure. Percentage of the US adult population that would be recommended statin therapy according to the 2013 ACC/AHA guidelines as compared with the 2004 guideline produced by the Third Adult Treatment Panel (ATP III) of the National Cholesterol Education Program [2,3].

Main results. Of the NHANES participants, 49% were male, 13% had cardiovascular disease, 46% had hypertension, 21% had diabetes, 21% were smokers, and 41% had obesity. Median age was 56 years (interquartile range [IQR], 41–73), median total cholesterol was 199 mg/dL (IQR, 138–272), median LDL cholesterol was 118 mg/dL (IQR, 64–182), and HDL cholesterol was 52 mg/dL (IQR, 33–86).

Overall, 2135 participants (57%) qualified for statin treatment according to the ACC/AHA guidelines as compared with 1583 (42%) under the ATP III guidelines. Additional participants qualifying under the ACC/AHA guideline were more likely to be male, older in age, have a lower LDL cholesterol, and without known cardiovascular disease, diabetes, obesity, or hypertension. Extrapolated to the US population, 56 million people (49% of the US population age 40 to 75 years, 95% CI, 46–51) would be recommended for statin treatment under the ACC/AHA guidelines compared with 43.2 million (37.5%, 95% CI, 35.3–39.7) under ATP III.

Most new candidates for statins meet criteria for primary prevention of a cardiovascular event: 2.2 million persons with diabetes and 8.2 million considered at high risk for an event in 10 years based on the new ACC/AHA risk calculator [4]. Age also was an important predictor of newly eligible statin candidates. According to ATP III, 48% of 60- to 75-year-olds would qualify for treatment, but 78% would qualify based on ACC/AHA. According to extrapolated NHANES data, 25.2 million people were taking statins from 2005 to 2010; the ACC/AHA guidelines would more than double this number.

Conclusion. The 2013 ACC/AHA cholesterol treatment guidelines would substantially increase the number of patients recommended for statin therapy.

 

Commentary

In November 2013, the long-awaited cholesterol treatment guidelines from the ACC/AHA hit like an earthquake [5]. The guidelines called for abandoning the traditional treat-to-target approach, in which clinicians treat patients to specific LDL cholesterol levels [1], in favor of statin treatment based on cardiovascular risk profile. The guideline authors made this change because of the lack of evidence supporting a treat-to-target approach; nearly all randomized controlled trials of statins used fixed doses rather than targeting specific LDL levels. This study by Pencina and colleagues demonstrates how implementation of the new guideline could dramatically change practice. If fully implemented, the guideline would lead to treatment for more than 12 million additional patients and would double the number of patients currently treated. Nearly all of the newly treated patients would receive statins for primary prevention.

The guideline defines 4 categories of patients to be considered for treatment: (1) patients with known cardiovascular disease; (2) patients with LDL cholesterol ≥ 190 mg/dL; (3) patients with diabetes aged 40 to 75 years with LDL cholesterol ≥ 70 mg/dL; and (4) patients aged 40 to 75 years with LDL cholesterol ≥ 70 mg/dL and an estimated 10-year risk of a cardiovascular event ≥ 7.5%. Patients in groups 1 and 2 should receive high-intensity statins (rosuvastatin 20 to 40 mg, atorvastatin 40 to 80 mg), although patients with known cardiovascular disease who are older than 75 years may receive moderate-intensity statins. Group 3 should receive high-intensity statins if their 10-year risk is ≥ 7.5% and moderate-intensity statins otherwise. Group 4 should receive a moderate- to high-intensity statin. As with most guidelines, these offer the caveat that physicians should take an informed-consent approach to treatment and make decisions in consultation with their patients.
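The four statin-benefit groups amount to a simple decision rule. The sketch below encodes the categories and intensity recommendations as summarized above; it is a simplification for illustration, not a substitute for the full guideline, and it uses only the thresholds named in this commentary.

```python
# Simplified sketch of the 2013 ACC/AHA statin-benefit groups as summarized
# above. Real guideline application involves more nuance (risk discussion,
# safety considerations); this encodes only the four categories and the
# intensity rules described in the text. Risk is a fraction (0.075 = 7.5%).
def statin_recommendation(age, ldl, has_ascvd, has_diabetes, ten_year_risk):
    if has_ascvd:                        # group 1: known cardiovascular disease
        return "high intensity" if age <= 75 else "moderate intensity"
    if ldl >= 190:                       # group 2: LDL >= 190 mg/dL
        return "high intensity"
    if has_diabetes and 40 <= age <= 75 and ldl >= 70:               # group 3
        return "high intensity" if ten_year_risk >= 0.075 else "moderate intensity"
    if 40 <= age <= 75 and ldl >= 70 and ten_year_risk >= 0.075:     # group 4
        return "moderate to high intensity"
    return "not in any of the four statin-benefit groups"

# Example: 62-year-old, LDL 130 mg/dL, no ASCVD or diabetes, 10-year risk 9%
print(statin_recommendation(62, 130, False, False, 0.09))
```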

The publicity surrounding the new guidelines was heightened by the controversy that emerged regarding the new Pooled Cohort Risk Equation developed by the guideline committee [4] for determining 10-year risk. Using data from 5 well-known cohort studies (over 24,000 participants), they created the new risk calculator because of what they viewed as limitations of existing risk calculators: (1) the lack of racial diversity in samples used to derive them, (2) the lack of use of stroke as a cardiovascular outcome, and (3) the use of some subjective outcomes, such as coronary revascularization, angina, and congestive heart failure. Critics have suggested that the new risk calculator is poorly calibrated to more recent cohorts and that the threshold for treatment (≥ 7.5% 10-year risk) is too low and should be 10% or higher [6,7].

Physicians have long used risk calculators to help guide treatment. As an example, the Framingham Heart Study risk score was endorsed by the ATP III guideline. However, all risk scores have limitations, as clearly articulated by the developers of the ACC/AHA risk calculator:

This process is admittedly imperfect; no one has 10% or 20% of a heart attack during a 10-year period. Individuals with the same estimated risk will either have or not have the event of interest, and only those patients who are destined to have an event can have their event prevented by therapy. The criticism of the risk estimation approach to treatment-decision making also applies to the alternative, and much less efficient approach, of checking the patient’s characteristics against numerous and complex inclusion and exclusion criteria for a potentially large number of pertinent trials [4].

No matter how well calibrated or thoughtful, all calculators will be flawed. But guidelines are meant to be just that—guides rather than a prescription for treatment.

 

Applications for Clinical Practice

The ACC and AHA have promised a 2014 update to their guideline, which may come with adjustments to the risk calculator. Perhaps calibration of the calculator in newer cohorts will improve and the threshold for treatment will change. In the meantime, the guidelines and the accompanying calculator have an important role in helping physicians decide whom to treat for primary prevention of cardiovascular disease. Physicians should consider applying the new guidelines, while having an informed consent discussion with their patients about the risks and benefits of treatment.

—Jason P. Block, MD, MPH

References

1. Stone NJ, Robinson J, Lichtenstein AH, et al. 2013 ACC/AHA guideline on the treatment of blood cholesterol to reduce atherosclerotic cardiovascular risk in adults: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines. J Am Coll Cardiol 2013 Nov 7 [Epub ahead of print].

2. Grundy SM, Cleeman JI, Merz CN, et al. Implications of recent clinical trials for the National Cholesterol Education Program Adult Treatment Panel III guidelines. J Am Coll Cardiol 2004;44:720–32.

3. National Cholesterol Education Program (NCEP) Expert Panel on Detection, Evaluation, and Treatment of High Blood Cholesterol in Adults (Adult Treatment Panel III). Third report of the National Cholesterol Education Program (NCEP) Expert Panel on Detection, Evaluation, and Treatment of High Blood Cholesterol in Adults (Adult Treatment Panel III) final report. Circulation 2002;106:3143–421.

4. Goff DC, Lloyd-Jones DM, Bennett G, et al. 2013 ACC/AHA guideline on the assessment of cardiovascular risk: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines. J Am Coll Cardiol 2013 Nov 7 [Epub ahead of print].

5. Kolata G. Risk calculator for cholesterol appears flawed. New York Times. 17 November 2013. Accessed at www.nytimes.com/2013/11/18/health/risk-calculator-for-cholesterol-appears-flawed.html?_r_0 on 10 April 2014.

6. Ridker PM, Cook NR. Statins: new American guidelines for prevention of cardiovascular disease. Lancet 2013;382: 1762–5.

7. Downs J, Good C. New cholesterol guidelines: has Godot finally arrived? Ann Intern Med 2014;160:354–5.


Sequential and Concomitant Therapies for Helicobacter pylori Eradication

Article Type
Changed
Tue, 03/06/2018 - 14:41
Display Headline
Sequential and Concomitant Therapies for Helicobacter pylori Eradication

Study Overview

Objective. To compare the effectiveness and safety of “sequential” and “concomitant” regimens for H. pylori eradication in a setting with increased rates of clarithromycin resistance.

Design. Prospective, multi-center, randomized controlled trial using an intention-to-treat and a per-protocol analysis (patients who adhered to the study protocol and had medication compliance of ≥ 90%).

Settings and participants. Patients from 11 Spanish hospitals with confirmed H. pylori infection were invited to participate from December 2010 to May 2012. Participants were at least 18 years old with either non-investigated/functional dyspepsia or gastric/duodenal ulcer. Exclusion criteria included prior H. pylori eradication treatment, use of bismuth salts or antibiotics within 4 weeks of study inclusion, advanced chronic disease that would preclude study completion or follow-up visits, pregnancy or breastfeeding, prior gastric surgery, and alcohol or drug abuse. Participants were allocated using computerized randomization. Study physicians obtained informed consent in the outpatient clinic, disclosed study arm assignments, and dispensed study drugs to participants. The study was unblinded because the number of study drugs and the dosing regimens differed between treatment arms.

Intervention. The sequential treatment group received 5 days of dual therapy with omeprazole 20 mg and amoxicillin 1 g every 12 hours, followed by 5 days of triple therapy with omeprazole 20 mg, clarithromycin 500 mg, and metronidazole 500 mg every 12 hours. The concomitant treatment group received 10 days of quadruple therapy with omeprazole 20 mg, amoxicillin 1 g, clarithromycin 500 mg, and metronidazole 500 mg every 12 hours. All drugs were generics.

Main outcome measures. The primary outcome measure was eradication of H. pylori infection confirmed by C-urea breath test or histology a minimum of 4 weeks after ending treatment; secondary outcome was treatment regimen compliance of at least 90% with each study drug.

Main results. 338 patients were randomized, 170 to sequential treatment and 168 to concomitant treatment. There was no significant difference between the 2 arms in relation to age or gender. The average age of participants was similar (47.5 vs 47.3 years in the sequential and concomitant groups, respectively). Women comprised 58.8% of the sequential treatment population and 62.5% of the concomitant population. 95% of both study arms finished treatment.

There was no difference in the primary outcome of eradication of H. pylori infection between the 2 treatment groups in the intention-to-treat analysis as well as in the per-protocol analysis (81.2% vs 86.9%, P = 0.15, and 85.6% vs 91.2%, P = 0.14, in the sequential and concomitant treatment groups, respectively). No statistically significant differences were found between treatment groups based on type of underlying disease. Treatment regimen compliance was also not statistically different between treatment regimens (82.4% sequential vs 82.7% concomitant).
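The intention-to-treat comparison can be roughly reproduced from the reported figures. The sketch below, a reader's reconstruction rather than the study's analysis, back-calculates approximate eradication counts from the percentages and group sizes and compares the two proportions with a pooled two-sample z-test (the paper does not state which test it used), which yields a P value close to the published 0.15.

```python
# Reader's reconstruction (not the study's code): compare intention-to-treat
# eradication proportions with a pooled two-proportion z-test.
from math import sqrt, erfc

n_seq, n_con = 170, 168
erad_seq = round(0.812 * n_seq)   # ~138 of 170 (81.2%)
erad_con = round(0.869 * n_con)   # ~146 of 168 (86.9%)

p_pool = (erad_seq + erad_con) / (n_seq + n_con)
se = sqrt(p_pool * (1 - p_pool) * (1 / n_seq + 1 / n_con))
z = (erad_seq / n_seq - erad_con / n_con) / se
p_value = erfc(abs(z) / sqrt(2))  # two-sided P from the normal approximation

print(f"{erad_seq}/{n_seq} vs {erad_con}/{n_con}: z = {z:.2f}, P = {p_value:.2f}")
# -> P of about 0.15, consistent with the published intention-to-treat comparison
```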

The 2 treatment regimens did not differ significantly in the rate or severity of adverse events (P = 0.09). Overall, adverse reactions were reported in 58.6% of the study patients (54.1% in the sequential treatment arm and 63.1% in the concomitant treatment arm). The most common adverse reactions were taste distortions (35.9%), diarrhea (20.1%), and nausea (10.8%). These reactions were characterized as mild (59.2%), moderate (36.2%), or severe (5%).

Conclusion. There was no significant difference in outcomes between the 2 treatments. Both treatment arms had acceptable compliance and safety profiles.

Commentary

Gastric cancer is the fifth most common malignancy in the world and the third leading cause of cancer death, with estimates of almost 1 million new cases for the year 2012 leading to over 720,000 deaths [1]. On a national level, gastric cancer is less common, with estimates of 21,600 new cases for the year 2013 (1.3% of new cancer cases), leading to an estimated 10,990 deaths (1.9% of all cancer deaths) [2]. Infection with H. pylori is the major risk factor for noncardia gastric cancer (cancer in all areas of the stomach, except for the top portion near where it joins the esophagus) and has been implicated in the development of peptic ulcer disease, chronic gastritis, gastric B-cell mucosa-associated lymphoid tissue lymphoma, and gastric adenocarcinoma [3].

The American College of Gastroenterology [4] and the European Consensus guidelines [5] provide evidence-based recommendations for H. pylori treatment. Standard triple therapy with a proton-pump inhibitor (PPI), clarithromycin, and amoxicillin remains the most widely prescribed regimen, although increasing rates of clarithromycin resistance as well as decreasing rates of H. pylori eradication have prompted investigations of alternative medication and dosing regimens [6].

The present study assesses the efficacy of 10 days of concomitant therapy compared with sequential therapy (omeprazole plus amoxicillin for 5 days, followed by omeprazole, clarithromycin, and metronidazole for 5 days). The authors found similar compliance and safety profiles in the 2 groups and no significant difference in H. pylori eradication rates. In multivariate analysis, eradication was not associated with patient age, sex, treatment hospital, type of treatment, smoking habit, or presence of ulcer, but it was associated with compliance. A strength of this study is its prospective, randomized design across 11 participating Spanish hospitals. Another strength is the high retention rate, with 95% of subjects completing the trial. A limitation of the trial, as noted by the authors, was that antibiotic resistance was not assessed in the study patients. This is a relevant omission because clarithromycin resistance rates in Spain are approximately 14%, which could influence the efficacy of clarithromycin-containing eradication regimens. Lastly, this study assessed eradication of H. pylori at an interval of at least 4 weeks post-treatment, whereas other investigations have used longer intervals. Future efforts could assess for H. pylori at least 8 weeks post-treatment to further validate the efficacy of eradication treatment.

Applications for Clinical Practice

Non-bismuth quadruple (concomitant) therapy appears to be an effective, safe, well-tolerated, and less complex alternative to sequential therapy for H. pylori eradication. This regimen therefore appears well suited to settings where the efficacy of triple therapy is unacceptably low, whether because of increasing clarithromycin resistance, decreasing H. pylori eradication rates, or both.

—Kristen R. Weaver, ACNP-BC, ANP-BC and Allison Squires, PhD, RN

References

1. GLOBOCAN 2012: Estimated cancer incidence, mortality and prevalence worldwide in 2012. International Agency for Research on Cancer. Accessed 22 Feb 2014 at http://globocan.iarc.fr/Pages/fact_sheets_cancer.aspx.

2. SEER Stat fact sheets: stomach cancer. Bethesda, MD: National Cancer Institute. Accessed 22 Feb 2014 at http://seer.cancer.gov/statfacts/html/stomach.html.

3. De Martel C. Gastric cancer: epidemiology and risk factors. Gastroenterol Clin North Am 2013;42:219–40.

4. Chey WD, Wong BC. American College of Gastroenterology guideline on the management of Helicobacter pylori infection. Am J Gastroenterol 2007;102:1808–25.

5. Malfertheiner P, Megraud F, O’Morain CA, et al. Management of Helicobacter pylori infection—the Maastricht IV/Florence Consensus Report. Gut 2012;61:646–64.

6. O’Connor A, Molina-Infante J, Gisbert JP, O’Morain C. Treatment of Helicobacter pylori infection 2013. Helicobacter 2013;18(Suppl 1):58–65.


Does Exercise Help Reduce Cancer-Related Fatigue?

Article Type
Changed
Tue, 03/06/2018 - 14:20
Display Headline
Does Exercise Help Reduce Cancer-Related Fatigue?

Study Overview

Objective. To systematically review randomized controlled trials (RCTs) examining the effects of exercise interventions on cancer-related fatigue (CRF) in patients during and after treatment to determine differential effects.

Design. Meta-analysis.

Data. 70 RCTs, published before August 2011, with a combined sample of 4881 oncology patients during active treatment (eg, chemotherapy, radiation therapy, hormone therapy) or after completion of treatment, that analyzed the effect on CRF of an exercise program compared with a non-exercise control. RCTs that compared exercise with other types of interventions (ie, education, pharmacotherapy, different methods of exercise) were excluded. 43 studies examined exercise during treatment and 27 examined its effects after treatment.

Measurement. Effect size was calculated to determine the magnitude of the effect of exercise on improving CRF.
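The review summarizes each trial with an effect size (reported below as Δ). The exact estimator is not reproduced in this summary, but meta-analyses of this kind typically use a standardized mean difference of roughly the following form, with study-level values then pooled, commonly with inverse-variance weighting:

$$d = \frac{\bar{X}_{\text{exercise}} - \bar{X}_{\text{control}}}{SD_{\text{pooled}}}, \qquad SD_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}$$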

Main results. The effect size (Δ = 0.34, P < 0.001) for the total sample of 70 RCTs indicated that exercise has a moderate effect on CRF regardless of treatment status. When effect sizes were calculated for the 43 RCTs that examined patients during treatment, exercise was found to significantly decrease CRF (Δ = 0.32, P < 0.001). Based on calculated effect size for the 27 RCTs that examined exercise after treatment completion, exercise continues to significantly decrease CRF (Δ = 0.38, P < 0.001). The effect of exercise on CRF was consistent not only during or after treatment, but also across cancer diagnosis, patient age, and sex.

Exercise reduces CRF both during and after treatment. During treatment, CRF severity decreases by 4.9% in patients who exercise, compared with a 29.1% increase in patients who do not. After treatment, exercise decreases CRF by 20.5%, compared with a decrease of 1.3% in patients who do not exercise.

Both during and after treatment, patients with higher exercise adherence experienced the most improvement (P < 0.001). Patients in active treatment with less severe baseline CRF demonstrated greater adherence to the exercise program and saw greater improvements in CRF. Patients who were further from active treatment saw greater CRF severity reduction than patients closer to active treatment. After treatment, the longer the exercise program, the more effective it was in decreasing CRF. No specific type of exercise program (eg, home-based, supervised, vigorous, moderate) was shown to be more effective than another.

Conclusion. Exercise decreases CRF in patients during and after treatment. The type of exercise does not change the positive effect of exercise, so it is important to encourage patients to be active.

Commentary

Cancer-related fatigue (CRF) is the most disturbing symptom associated with cancer diagnosis and its treatment [1]. Defined as a persistent, subjective sense of tiredness that is not proportional to activity and not relieved by rest, CRF is reported in over 80% of oncology patients during active treatment [1]. This symptom is not limited to the active treatment phase, with over 30% of cancer survivors reporting CRF lasting at least 5 years [2]. CRF is associated with decreased quality of life (QOL), decreased functional status, and decreased participation in social activities [1]. The pathogenesis of CRF is not fully understood [3,4]. Disruptions in biochemical pathways [5], genome expression [6], chemotherapy or radiation treatments [7,8], cancer pathogenesis [4], or a combination of factors [9] are hypothesized as contributing to the development and severity of CRF. The complexity of CRF pathogenesis makes clinical management difficult.

The current meta-analysis suggests that exercise is an effective nonpharmacologic intervention to ameliorate the impact of this devastating symptom and improve patients’ QOL [10–12]. The meta-analysis demonstrated strong rigor, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [13]. Multiple electronic databases were searched, and additional evidence was obtained by reviewing the reference lists of retrieved articles. No language limitations were placed on the search, adding to the potential generalizability of the results. The procedures used to extract data and evaluate the quality of each retrieved article are detailed, providing evidence of the rigor of the authors’ methodology.

The limitations of the meta-analysis relate to the difficulty of extracting data from multiple studies without consistent reporting of exercise mode, duration, or evaluation methods. Inconsistent CRF assessment methods across studies limit the validity of the results quantifying the magnitude of CRF change. Despite these limitations, this is the first known meta-analysis of the effect of exercise on CRF during and after treatment to synthesize current research into clinical recommendations.

As with any exercise prescription, the patient’s level of adherence moderates its effectiveness. A recent study describes an interesting exercise intervention that uses a resource some cancer patients may already have at home. Seven patients with early-stage non-small cell lung cancer performed light-intensity walking and balance exercises in a virtual reality environment using the Nintendo Wii Fit Plus, beginning the first week after hospitalization for thoracotomy and continuing for 6 weeks [14]. Outcomes included a decrease in CRF severity, a high level of satisfaction, a high adherence rate, and an increase in self-efficacy for managing CRF [14]. While the small sample size and homogeneous cancer diagnosis and stage limit generalizability, the study describes a promising approach to supporting patient adherence to exercise.

Applications for Clinical Practice

The results of this meta-analysis support exercise as an effective intervention to decrease CRF in oncology patients during and after treatment. Based on these results, exercise should be prescribed as a nonpharmacologic intervention to decrease CRF. Patient adherence to the exercise intervention is needed for effective CRF reduction; thus, exercise prescriptions should be tailored to patients’ individual preferences, abilities, and available resources.

—Fay Wright, MSN, APRN, and Allison Squires, PhD, RN

References

1. Berger AM, Abernethy A, Atkinson A, et al. NCCN guidelines: cancer-related fatigue. Version 1. National Comprehensive Cancer Network; 2013.

2. Cella D, Lai J-S, Chang C-H, et al. Fatigue in cancer patients compared with fatigue in the general United States population. Cancer 2002;94:528–38.

3. Mustian K, Morrow G, Carroll J, et al. Integrative nonpharmacologic behavioral interventions for the management of cancer-related fatigue. Oncologist 2007;12 Suppl 1:52–67.

4. Ryan J, Carroll J, Ryan E, et al. Mechanisms of cancer-related fatigue. Oncologist 2007;12 Suppl 1:22–34.

5. Hoffman AJ, Given B, von Eye A, et al. Relationships among pain, fatigue, insomnia, and gender in persons with lung cancer. Oncol Nurs Forum 2007;34:785–92.

6. Miaskowski C, Dodd MJ, Lee KA, et al. Preliminary evidence of an association between a functional interleukin-6 polymorphism and fatigue and sleep disturbance in oncology patients and their family caregivers. J Pain Symptom Manage 2010;40:531–44.

7. Hwang SY, Chang V, Rue M, Kasimis B. Multidimensional independent predictors of cancer-related fatigue. J Pain Symptom Manage 2003;26:604–14.

8. Cleeland C, Mendoza T, Wang X, et al. Levels of symptom burden during chemotherapy for advanced lung cancer: Differences between public hospitals and a tertiary cancer center. J Clin Oncol 2011;29:2859–65.

9. Cleeland C, Bennett G, Dantzer R, et al. Are the symptoms of cancer and cancer treatment due to a shared biologic mechanism? A cytokine-immunologic model of cancer symptoms. Cancer 2003;97:2919–25.

10. Al Majid S, Gray DP. A biobehavioral model for the study of exercise interventions in cancer-related fatigue. Biol Res Nurs 2009;10:381–91.

11. Cramp F, Byron-Daniel J. Exercise for the management of cancer-related fatigue in adults. Cochrane Database Syst Rev 2012;11:CD006145.

12. Puetz TW, Herring MP. Differential effects of exercise on cancer-related fatigue during and following treatment: a meta-analysis. Am J Prev Med 2012;43:e1–24.

13. Moher D, Liberati A, Tetzlaff J, Altman DG. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med 2009;6(7):e1000097.

14. Hoffman AJ, Brintnall RA, Brown JK, et al. Too sick not to exercise: Using a 6-week, home-based exercise intervention for cancer-related fatigue self-management for postsurgical non-small cell lung cancer patients. Cancer Nurs 2013;36:175–88.

Declining Adverse Event Rates Among Patients With Cardiac Conditions But Not With Pneumonia or Surgical Conditions

Study Overview

Objective. To examine changes in adverse event rates among Medicare patients hospitalized in acute care hospitals for common medical conditions or for conditions requiring surgery.

Design. Retrospective review utilizing the Medicare Patient Safety Monitoring System (MPSMS) [1], a large database of information abstracted from medical records of a random sample of hospitalized patients in the United States. The database was established by the Centers for Medicare and Medicaid Services in 2001 to track adverse events in hospitals among Medicare patients, with data collected every year thereafter except 2008. The MPSMS tracks 21 indicators of safety that can be reliably abstracted from medical records. Among these are inpatient falls, hospital-acquired pressure ulcers, catheter-associated urinary tract infections, selected hospital-acquired infections, selected adverse events related to high-risk medications, and operative and postoperative events for certain conditions.

Setting and participants. Medicare patients aged 65 and older who had been hospitalized for acute myocardial infarction, congestive heart failure, pneumonia, or conditions requiring surgery from 2005 to 2007 and 2009 to 2011. A total of 61,523 patients were included in the final study sample—11,399 with acute myocardial infarction, 15,374 with congestive heart failure, 18,269 with pneumonia, and 16,481 with conditions requiring surgery from a total of 4372 hospitals.

Main outcome measures. The rate of occurrence of adverse events for which patients were at risk, the proportion of patients with 1 or more adverse events, and the number of adverse events per 1000 hospitalizations.
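
For readers who want to see how these composite measures are built, the short sketch below tabulates them from a hypothetical per-patient table of abstracted events; the data and column names are invented for illustration and are not actual MPSMS fields.

```python
import pandas as pd

# Hypothetical abstracted-record data: one row per hospitalization.
# Column names are illustrative only, not actual MPSMS fields.
df = pd.DataFrame({
    "events_occurred": [0, 2, 1, 0, 3],   # adverse events the patient experienced
    "events_at_risk":  [6, 8, 7, 5, 9],   # adverse events the patient was at risk for
})

# 1. Rate of occurrence of adverse events for which patients were at risk
occurrence_rate = df["events_occurred"].sum() / df["events_at_risk"].sum()

# 2. Proportion of patients with 1 or more adverse events
prop_one_or_more = (df["events_occurred"] >= 1).mean()

# 3. Number of adverse events per 1000 hospitalizations
events_per_1000 = 1000 * df["events_occurred"].sum() / len(df)

print(f"{occurrence_rate:.1%} | {prop_one_or_more:.1%} | {events_per_1000:.0f} per 1000")
```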

Statistical analysis. Outcome rates were described and reported in 2-year intervals: 2005–2006, 2007 and 2009 (no data were collected in 2008), and 2010–2011. Trends in the number of adverse events per 1000 hospitalizations were modeled using a linear mixed-effects model with a Poisson link function. Other composite outcomes were also modeled using linear mixed models for trend analysis.
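
As a rough illustration of this kind of trend analysis, the sketch below fits a fixed-effects Poisson regression of event counts on study period with hospitalizations as the exposure term. It is a simplification of the authors' approach (hospital-level random effects are omitted), and all counts and variable names are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Invented hospital-period counts; period codes 0, 1, 2 stand for
# 2005-2006, 2007/2009, and 2010-2011.
df = pd.DataFrame({
    "period":   [0, 1, 2, 0, 1, 2],
    "n_events": [38, 31, 27, 44, 40, 29],        # adverse events observed
    "n_hosp":   [250, 240, 260, 300, 310, 295],  # hospitalizations in that period
})

X = sm.add_constant(df["period"])
# Poisson regression with hospitalizations as the exposure, so exp(coef)
# is the rate ratio per period step (events per hospitalization).
model = sm.GLM(df["n_events"], X, family=sm.families.Poisson(),
               exposure=df["n_hosp"])
result = model.fit()
print(result.summary())
print("Rate ratio per period:", np.exp(result.params["period"]))
```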

Main results. Adverse event rates among patients with myocardial infarction and congestive heart failure declined significantly. Among patients with myocardial infarction, the rate of adverse events among patients at risk declined from 5% to 3.7% (rate difference 1.3%; 95% confidence interval [CI], 0.7 to 1.9), and among patients with congestive heart failure, the rate declined from 3.7% to 2.7% (rate difference 1%; 95% CI, 0.5 to 1.4). The proportion of patients with 1 or more adverse events declined by 6.6% (95% CI, 3.3 to 10.2) among patients with myocardial infarction and by 3.3% (95% CI, 1.0 to 5.5) among patients with congestive heart failure. The number of adverse events per 1000 hospitalizations also declined, by 139.7 among patients with myocardial infarction and by 68.3 among patients with congestive heart failure. In contrast, among patients admitted for pneumonia or for conditions requiring surgery, adverse event rates did not change. The rate of adverse events was essentially unchanged among patients admitted for pneumonia (3.4% in 2005–2006 and 3.5% in 2010–2011) and among patients admitted for conditions requiring surgery (3.2% in 2005–2006 and 3.3% in 2010–2011). Similarly, the proportion of patients with 1 or more in-hospital events did not change among patients with pneumonia (17.1% in 2005–2006 and 17.5% in 2010–2011) or conditions requiring surgery (21.6% in 2005–2006 and 22.7% in 2010–2011), and the number of events per 1000 hospitalizations did not change over time. Results did not change substantially after accounting for patient characteristics and geographic differences in the models.
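
The rate differences and confidence intervals above come from the authors' adjusted models. As a back-of-the-envelope illustration of how a crude two-proportion comparison would look, the snippet below uses the published rates with invented denominators, so the interval will not reproduce the published one.

```python
import numpy as np

# Only the rates (5.0% and 3.7%) come from the article; the denominators
# below are invented for illustration.
n1, p1 = 5000, 0.050   # myocardial infarction, 2005-2006
n2, p2 = 5000, 0.037   # myocardial infarction, 2010-2011

diff = p1 - p2
se = np.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)   # Wald standard error
lo, hi = diff - 1.96 * se, diff + 1.96 * se
print(f"rate difference {diff:.1%} (95% CI {lo:.1%} to {hi:.1%})")
```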

Conclusions. In a large nationally representative sample of older adults aged 65 and above, adverse event rates declined among patients admitted for cardiac conditions, including myocardial infarction and congestive heart failure, but did not decline among patients admitted for other medical (pneumonia) or surgical conditions.

Commentary

Patient safety in inpatient hospital care is of paramount importance, and the Affordable Care Act has placed significant emphasis on improving patient safety by aligning incentives and disincentives with patient outcomes at the hospital level [2,3]. These measures, including adverse event rates, are reported publicly in reports such as Hospital Compare [3–5]. The current study reports on recent national trends in safety and adverse events using data abstracted from medical records of older Medicare patients with 4 common conditions. The demonstration of these trends represents an important first step toward understanding the current environment and trends in patient safety. The finding that in-hospital adverse event rates have improved among patients admitted for cardiac conditions is reassuring given the substantial nationwide efforts to promote patient safety in hospitals, but the lack of progress for other conditions, both medical and surgical, is disappointing.

There is good-quality evidence suggesting how hospitals may make changes to improve patient safety; these steps may include adopting care practices and protocols such as pressure ulcer monitoring and prevention protocols, fall prevention protocols, safety checklists, models of inpatient care for older adults such as Mobile Acute Care of the Elderly teams [6] and Acute Care for the Elderly models [7], quality improvement initiatives, and incorporation of information systems for data tracking and reporting, to name a few. Differences in how hospitals adopt these practices for patients with different conditions may explain the study findings. The challenge is to determine why noncardiac conditions have not shown improving trends in patient safety and to demonstrate what works (and what does not) at the hospital level. Understanding how care is delivered at the hospital level and correlating hospital-level practices with patient outcomes from databases such as the MPSMS may yield clues as to which specific steps hospitals have taken that have produced changes in patient safety.

Applications for Clinical Practice

This study highlights trends in adverse events among hospitalized older adults that demonstrated improvements for patients with cardiac conditions but not for others. Future studies need to focus on understanding what works and what doesn’t so that hospitals can adopt safety practices that improve outcomes for older hospitalized patients.

—William Hung, MD, MPH

 

References

1. Hunt DR, Verzier N, Abend SL, et al. Fundamentals of Medicare patient safety surveillance: intent, relevance and transparency. Rockville, MD: Agency for Healthcare Research and Quality. Available at archive.ahrq.gov/qual/nhqr05/fullreport/Mpsms.htm.

2. Rosenbaum S. The Patient Protection and Affordable Care Act: implications for public health policy and practice. Public Health Rep 2011;126:130–5.

3. Werner RM, Bradlow ET. Relationship between Medicare’s hospital compare performance measures and mortality rates. JAMA 2006;296:2694–702.

4. Werner RM, Bradlow ET. Public reporting on hospital process improvements is linked to better patient outcomes. Health Aff (Millwood) 2010;29:1319–24.

5. Kruse GB, Polsky D, Stuart EA, Werner RM. The impact of hospital pay-for-performance on hospital and Medicare costs. Health Serv Res 2012;47:2118–36.

6. Hung WW, Ross JS, Farber J, Siu AL. Evaluation of the Mobile Acute Care of the Elderly (MACE) service. JAMA Intern Med 2013:1–7.

7. Landefeld CS, Palmer RM, Kresevic DM, Fortinsky RH, Kowal J. A randomized trial of care in a hospital medical unit especially designed to improve the functional outcomes of acutely ill older patients. N Engl J Med 1995;332:1338–44.

Should Radiofrequency Ablation Be First-line Treatment for Paroxysmal Atrial Fibrillation?

Study Overview

Objective. To compare radiofrequency ablation (RFA) with antiarrhythmic drugs as first-line therapy for patients with paroxysmal atrial fibrillation.

Design. Randomized controlled trial.

Setting and participants. This multi-center study was conducted at 16 sites in 5 countries and enrolled 127 patients between July 2006 and January 2010. Adult patients < 75 years old with a history of paroxysmal atrial fibrillation who had at least 1 episode of symptomatic paroxysmal atrial fibrillation in the 6 months prior to enrollment and had no previous antiarrhythmic drug treatment were recruited. Patients were excluded if they had structural heart disease or had a complete contraindication for the use of heparin, warfarin, or both.

Patients were randomized, using computer-generated variable block sizes, to receive either antiarrhythmic drugs or RFA. All patients were followed up at 1, 3, 6, 12, and 24 months after randomization. Each patient received a transtelephonic monitoring system and was trained to record and transmit symptomatic episodes of possible atrial fibrillation. Patients were also instructed to transmit biweekly recordings on a Friday, regardless of whether they had experienced symptoms. Experienced electrophysiologists blinded to treatment assignment analyzed all recordings, which could also include scheduled or unscheduled electrocardiograms, Holter recordings, or rhythm strips.
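
To make the allocation scheme concrete, here is a minimal sketch of how a computer-generated, variable-size permuted-block list might be produced; the block sizes, arm labels, and seed are arbitrary choices for illustration and are not taken from the trial protocol.

```python
import random

def permuted_block_allocation(n_patients, block_sizes=(4, 6),
                              arms=("RFA", "AAD"), seed=42):
    """Generate a 1:1 allocation list from randomly chosen permuted blocks."""
    rng = random.Random(seed)
    allocation = []
    while len(allocation) < n_patients:
        size = rng.choice(block_sizes)               # variable block size
        block = list(arms) * (size // len(arms))     # balanced block
        rng.shuffle(block)                           # random order within block
        allocation.extend(block)
    return allocation[:n_patients]

print(permuted_block_allocation(12))
```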

Patients randomized to the antiarrhythmic drug group were administered medications chosen by the investigators. Drug dosages titrated during the 90-day blanking period were maintained throughout the study. Patients in the antiarrhythmic drug group were also allowed to cross over and undergo ablation after 90 days if medical treatment had failed.

Patients randomized to the RFA group underwent circumferential isolation of the pulmonary veins. Additional ablation lesions were allowed at the investigator's discretion, as were the choice of ablation catheter, power and irrigation settings, and the use of navigation systems. Following RFA, anticoagulation with warfarin was maintained for at least 3 months.

Main outcome measures. The primary outcome was time to first recurrence of symptomatic or asymptomatic atrial fibrillation, atrial flutter, or atrial tachycardia lasting more than 30 seconds. Secondary outcomes were symptomatic recurrences of atrial fibrillation, atrial flutter, or atrial tachycardia during the study period and quality of life as measured by EQ-5D Tariff score. There was a 90-day blanking period (the time after randomization when an AF event is not counted).
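
The sketch below illustrates one way the primary end-point could be derived for a single patient, with recurrences inside the 90-day blanking period ignored; the episode dates and follow-up length are invented.

```python
# Hypothetical episode log: days from randomization to each documented
# atrial tachyarrhythmia lasting more than 30 seconds (values invented).
episodes = [25, 70, 210, 400]
follow_up_days = 730          # 24 months of follow-up
BLANKING_DAYS = 90            # recurrences before day 90 are not counted

qualifying = [day for day in episodes if day > BLANKING_DAYS]
if qualifying:
    time_to_event, event_observed = min(qualifying), True
else:
    time_to_event, event_observed = follow_up_days, False   # censored

print(time_to_event, event_observed)   # -> 210 True
```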

Main results. The RFA group experienced a significantly lower rate of recurrence of atrial tachyarrhythmias at 2 years compared with the antiarrhythmic drug group (54.5% vs. 72.1%; hazard ratio [HR] 0.56 [95% CI, 0.35–0.90]; P = 0.02). The difference was present but smaller for the rate of symptomatic arrhythmias (47% in the RFA group vs. 59% in the drug therapy group; HR 0.56 [95% CI, 0.33–0.95]; P = 0.03). There were no differences between treatment groups in quality of life at 1-year follow-up as measured by the EQ-5D Tariff score. No deaths or strokes were reported in either group; 4 cases (6%) of cardiac tamponade were reported in the RFA group.
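
The hazard ratios above come from a time-to-event analysis of first recurrence. A minimal sketch of that kind of analysis using the lifelines package is shown below; the toy data frame is invented, so the estimates will not match the trial's.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Invented per-patient data: time to first recurrence (days), event
# indicator, and treatment arm (1 = RFA, 0 = antiarrhythmic drugs).
df = pd.DataFrame({
    "time":  [210, 730, 150, 500, 730, 120, 340, 730],
    "event": [1, 0, 1, 1, 0, 1, 1, 0],
    "rfa":   [1, 1, 1, 1, 0, 0, 0, 0],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()   # exp(coef) for "rfa" is the estimated hazard ratio
```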

Conclusion. The authors of this study conclude that for paroxysmal atrial fibrillation patients without previous antiarrhythmic drug treatment, RFA resulted in a lower rate of recurrence of atrial tachyarrhythmias at 2 years compared with standard antiarrhythmic drug treatment. However, recurrence was frequent in both groups after 2 years.

Commentary

Atrial fibrillation is a common arrhythmia associated with an increased risk of stroke and other adverse events. Current practice guidelines recommend antiarrhythmic drugs as the first-line therapy for patients with symptomatic paroxysmal atrial fibrillation. However, a significant proportion of patients are nonadherent with antiarrhythmic therapy, and antiarrhythmic therapy prevents recurrence of atrial fibrillation in only 46% of patients at 12 months [1].

The purpose of the current study (Radiofrequency Ablation vs. Antiarrhythmic Drugs for Atrial Fibrillation Treatment-2, or RAAFT-2) was to determine whether RFA is superior to antiarrhythmic drugs as first-line therapy in patients with paroxysmal atrial fibrillation who had not been exposed to antiarrhythmic treatment. Over the past decade, various single-center trials attempted to demonstrate the superiority of RFA. Evidence from these trials suggested that RFA resulted in a lower burden of atrial fibrillation and left more patients free from atrial fibrillation. However, RFA had a higher initial cost and a higher rate of complications, and conferred no improvement in quality of life [2–5].

Despite the statistically significant lower rate of recurrence of atrial tachyarrhythmias in the RFA group, this multi-center, multi-country study has several limitations. First, selection bias may have been present, as it took 42 months to recruit 127 patients across 16 centers in 5 countries for a very common disease. Second, the use of transtelephonic monitoring for outcome ascertainment differed from previous trials; when the investigators excluded the transtelephonic monitoring results and relied on electrocardiogram and Holter monitor results, as in previous trials, the primary outcome was no longer significantly different. Third, aspects of the study design favored RFA; for example, investigators permitted substantial variation in the RFA procedures but restricted dosage changes in the antiarrhythmic drug group. Finally, 26 of the 61 patients (42.6%) assigned to the antiarrhythmic drug group crossed over to undergo RFA, which dilutes the treatment contrast and complicates interpretation of the intention-to-treat analysis.

One might ask, what is the worth of this trial? This trial provides additional evidence about the risks of RFA. While no deaths or strokes were reported, 6 of the 66 patients (9.1%) in the RFA group had a serious adverse event, with 4 patients (6%) experiencing pericardial effusion with tamponade. The 6% tamponade rate is similar to that found in previous trials [2]. In contrast, only 3 of the 61 patients (4.9%) in the antiarrhythmic drug group experienced a serious adverse event (1 had atrial flutter with 1:1 atrioventricular conduction, and 2 had syncope).
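
As a quick illustration (this is the reviewer's calculation, not an analysis reported in the trial), the difference in serious adverse events between arms can be compared with Fisher's exact test:

```python
from scipy.stats import fisher_exact

# Serious adverse events: 6 of 66 in the RFA arm vs. 3 of 61 in the drug arm.
table = [[6, 66 - 6],
         [3, 61 - 3]]
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio {odds_ratio:.2f}, p = {p_value:.2f}")   # not statistically significant
```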

Applications for Clinical Practice

This trial of radiofrequency ablation vs. antiarrhythmic drugs as first-line treatment of paroxysmal atrial fibrillation provides further evidence of the risks and benefits of each option. The current guidelines should be followed. However, given the high rate of noncompliance with medical therapy, selected patients should also be given the option of RFA as primary treatment. Patients who are offered the procedure should be made aware of the risks, and providers should incorporate patients' risk perceptions and preferences into treatment planning.

—Ka Ming Gordon Ngai, MD, MPH

References

1. Camm AJ, Lip GY, De Caterina R, et al. 2012 Focused update of the ESC guidelines for the management of atrial fibrillation: an update of the 2010 ESC guidelines for the management of atrial fibrillation. Developed with the special contribution of the European Heart Rhythm Association. Eur Heart J 2012;33:2719–47.

2. Cosedis Nielsen J, Johannessen A, Raatikainen P, et al. Radiofrequency ablation as initial therapy in paroxysmal atrial fibrillation. N Engl J Med 2012;367:1587–95.

3. Dorian P, Paquette M, Newman D, et al. Quality of life improves with treatment in the Canadian Trial of Atrial Fibrillation. Am Heart J 2002;143:984–90.

4. Wazni OM, Marrouche NF, Martin DO, et al. Radiofrequency ablation vs antiarrhythmic drugs as first-line treatment of symptomatic atrial fibrillation: a randomized trial. JAMA 2005;293:2634–40.

5. Khaykin Y, Wang X, Natale A, et al. Cost comparison of ablation versus antiarrhythmic drugs as first-line therapy for atrial fibrillation: an economic evaluation of the RAAFT pilot study. J Cardiovasc Electrophysiol 2009;20:7–12.

Issue
Journal of Clinical Outcomes Management - April 2014, VOL. 21, NO. 4
Publications
Topics
Sections

Study Overview

Objective. To compare radiofrequency ablation (RFA) with antiarrhythmic drugs in treating patients with paroxysmal atrial fibrillation as a first-line therapy.

Design. Randomized controlled trial.

Setting and participants. This multi-center study was conducted at 16 sites in 5 countries and enrolled 127 patients between July 2006 and January 2010. Adult patients < 75 years old with a history of paroxysmal atrial fibrillation who had at least 1 episode of symptomatic paroxysmal atrial fibrillation in the 6 months prior to enrollment and had no previous antiarrhythmic drug treatment were recruited. Patients were excluded if they had structural heart disease or had a complete contraindication for the use of heparin, warfarin, or both.

Patients were randomized by variable block generated by computer to receive either antiarrhythmic drugs or RFA. All patients were followed up at 1, 3, 6, 12, and 24 months after randomization. Each patient received a transtelephonic monitor system and was trained to record and transmit symptomatic episodes of possible atrial fibrillation. Patients were also instructed to transmit biweekly recordings on a Friday, regardless of whether they had experienced symptoms. Blinded experienced electrophysiologists analyzed all recordings, which may also have included scheduled or unscheduled electrocardiogram, Holter, or rhythm strips.

Patients randomized to the antiarrhythmic drug group were administered medications chosen by the investigators. Drug dosages titrated during the 90-day blanking period were maintained throughout the study. Patients in the antiarrhythmic drug group were also allowed to cross over and undergo ablation after 90 days if medical treatment had failed.

Patients randomized to the RFA group underwent circumferential isolation of the pulmonary veins. Additional ablation lesions were also allowed at investigator’s choice. Furthermore, selections of the ablation catheter, power and irrigation settings, as well as the use of navigation systems were left to the discretion of the investigator. Following RFA, anticoagulation with warfarin was maintained for at least 3 months.

Main outcome measures. The primary outcome was time to first recurrence of symptomatic or asymptomatic atrial fibrillation, atrial flutter, or atrial tachycardia lasting more than 30 seconds. Secondary outcomes were symptomatic recurrences of atrial fibrillation, atrial flutter, or atrial tachycardia during the study period and quality of life as measured by EQ-5D Tariff score. There was a 90-day blanking period (the time after randomization when an AF event is not counted).

Main results. The RFA group experienced a significantly lower rate of recurrence of atrial tachyarrhythmias at 2 years compared with the antiarrhythmic drug group (54.5% vs. 72.1%, hazard ratio [HR] 0.56 [95% CI, 0.35–0.90]; P = 0.02). The difference was present but smaller for the rate of symptomatic arrhythmias (47% RFA group vs. 59% drug therapy group, HR 0.56 [95% CI, 0.33–0.95]; P = 0.03). There were no differences among treatment groups in regard to quality of life at 1-year follow-up using the EQ-5D Tariff score. No deaths or strokes reported in either group; 4 cases (6%) of cardiac tampoade were reported in the RFA group.

Conclusion. The authors of this study conclude that for paroxysmal atrial fibrillation patients without previous antiarrhythmic drug treatment, RFA resulted in a lower rate of recurrence of atrial tachyarrhythmias at 2 years compared with standard antiarrhythmic drug treatment. However, recurrence was frequent in both groups after 2 years.

Commentary

Atrial fibrillation is a common arrhythmia associated with an increased risk of stroke and other adverse events. Current practice guidelines recommend antiarrhythmic drugs as the first-line therapy for patients with symptomatic paroxysmal atrial fibrillation. However, a significant proportion of patients are nonadherent with antiarrhythmic therapy. As a result, antiarrhythmic therapy is only 46% effective at 12 months in preventing the recurrence of atrial fibrillation [1].

The purpose of the current study (Radiofrequency Ablation vs. Antiarrhythmic Drugs for Atrial Fibrillation Treatment-2, or RAAFT-2) was to determine whether RFA is superior to antiarrhythmic drugs as first-line therapy in patients with paroxysmal atrial fibrillation who had not been exposed to antiarrhythmic treatment. Over the past decade, various single-center trials attempted to demonstrate the superiority of RAF. Evidence from these trials suggested that RFA resulted in lower burden of atrial fibrillation and more patients free from atrial fibrillation. However, RFA had a higher initial cost, higher rate of complications, and conferred no improvement in the quality of life [2–5].

Despite the statistically significant lower rate of recurrence of atrial tachyarrhythmias in the RFA group, there are many limitations with this multi-center, multi-country study. First, selection bias may have been present, as it took 42 months to recruit 127 patients in 16 centers and 5 countries for a very common disease. Second, the use of a transtelephonic monitor was unique. When the investigators excluded transtelephonic monitor results and used electrocardiogram and Holter monitor results, similar to previous trials, the primary outcomes were no longer different. Third, biases in the study design favor RFA. For example, investigators permitted substantial variation in the RFA procedures but restricted dosage changes in the antiarrhythmic drugs group. Finally, 26 of the 61 patients (42.6%) assigned to the antiarrhythmic drug group crossed over to undergo RFA, and the intention-to-treat basis became invalid.

One might ask, what is the worth of this trial? This trial provides additional evidence about about the risks of RFA. While no deaths or strokes were reported in this trial, 6 of the 66 patients (9.1%) in the RFA group had a serious adverse event, with 4 patients (6%)  experiencing pericardial effusion with tamponade. The 6% tamponade rate is similar to that found in previous trials [2]. On the other hand, only 3 of the 61 patients (4.9%) in the antiarrhythmic drugs group experienced a serious adverse event (1 had atrial flutter with 1:1 atrioventricular conduction, 2 had syncope).

Applications for Clinical Practice

This trial of radiofrequency ablation vs. antiarrhythmic drugs as first-line treatment of paroxysmal atrial fibrillation provides further evidence of the risks and benefits of each of these options. The current guidelines should be followed. However, given the high level of medical therapy noncompliance, selected patients should also be given the option of using RFA as primary treatment. Patients who are offered the procedure should be made aware of the risks, and providers should  incorporate patient’s risk perceptions and preferences in treatment planning.

—Ka Ming Gordon Ngai, MD, MPH

Study Overview

Objective. To compare radiofrequency ablation (RFA) with antiarrhythmic drugs in treating patients with paroxysmal atrial fibrillation as a first-line therapy.

Design. Randomized controlled trial.

Setting and participants. This multi-center study was conducted at 16 sites in 5 countries and enrolled 127 patients between July 2006 and January 2010. Adult patients < 75 years old with a history of paroxysmal atrial fibrillation who had at least 1 episode of symptomatic paroxysmal atrial fibrillation in the 6 months prior to enrollment and had no previous antiarrhythmic drug treatment were recruited. Patients were excluded if they had structural heart disease or had a complete contraindication for the use of heparin, warfarin, or both.

Patients were randomized to receive either antiarrhythmic drugs or RFA using computer-generated variable block sizes. All patients were followed up at 1, 3, 6, 12, and 24 months after randomization. Each patient received a transtelephonic monitoring system and was trained to record and transmit symptomatic episodes of possible atrial fibrillation. Patients were also instructed to transmit biweekly recordings on a Friday, regardless of whether they had experienced symptoms. Experienced electrophysiologists blinded to treatment assignment analyzed all recordings, which could also include scheduled or unscheduled electrocardiograms, Holter recordings, or rhythm strips.
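The article does not include the randomization code. As a rough illustration only, the sketch below shows how a computer-generated, variable-block 1:1 allocation sequence of the kind described here could be produced; the block sizes, function name, and arm labels are assumptions, not details taken from the trial.

```python
import random

def variable_block_allocation(n_patients, block_sizes=(2, 4, 6),
                              arms=("RFA", "antiarrhythmic drugs"), seed=42):
    """Build a 1:1 allocation sequence from randomly sized, balanced blocks.

    Each block contains an equal number of slots for the two arms, so group
    sizes stay close throughout enrollment while the order of assignments
    within and across blocks remains unpredictable.
    """
    rng = random.Random(seed)
    sequence = []
    while len(sequence) < n_patients:
        size = rng.choice(block_sizes)                    # block size varies at random
        block = [arms[0]] * (size // 2) + [arms[1]] * (size // 2)
        rng.shuffle(block)                                # random order within the block
        sequence.extend(block)
    return sequence[:n_patients]

# Example: an allocation list for 127 patients, the number enrolled in this trial.
allocation = variable_block_allocation(127)
print(allocation[:6])
print(allocation.count("RFA"), allocation.count("antiarrhythmic drugs"))
```

Varying the block size makes the next assignment harder to anticipate than a fixed block would, while still keeping the two groups nearly equal in size.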

Patients randomized to the antiarrhythmic drug group were administered medications chosen by the investigators. Drug dosages titrated during the 90-day blanking period were maintained throughout the study. Patients in the antiarrhythmic drug group were also allowed to cross over and undergo ablation after 90 days if medical treatment had failed.

Patients randomized to the RFA group underwent circumferential isolation of the pulmonary veins. Additional ablation lesions, the choice of ablation catheter, power and irrigation settings, and the use of navigation systems were left to the investigator's discretion. Following RFA, anticoagulation with warfarin was maintained for at least 3 months.

Main outcome measures. The primary outcome was time to first recurrence of symptomatic or asymptomatic atrial fibrillation, atrial flutter, or atrial tachycardia lasting more than 30 seconds. Secondary outcomes were symptomatic recurrences of atrial fibrillation, atrial flutter, or atrial tachycardia during the study period and quality of life as measured by the EQ-5D Tariff score. Recurrences were counted only after a 90-day blanking period (the interval after randomization during which atrial fibrillation events are not counted toward the outcomes).
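To make the primary outcome definition concrete, the following is a minimal sketch, not taken from the trial's statistical analysis plan, of how the 90-day blanking period and the 30-second duration threshold might be applied when computing time to first recurrence for one patient; the episode records and field names are hypothetical.

```python
from datetime import date

BLANKING_DAYS = 90          # events within 90 days of randomization are not counted
MIN_DURATION_SECONDS = 30   # only episodes lasting more than 30 seconds qualify

def days_to_first_recurrence(randomization_date, episodes):
    """Return days from randomization to the first qualifying recurrence, or None.

    `episodes` is a list of (episode_date, duration_seconds) tuples compiled from
    transtelephonic, electrocardiogram, or Holter recordings (hypothetical format).
    """
    qualifying_days = []
    for episode_date, duration_seconds in episodes:
        days_elapsed = (episode_date - randomization_date).days
        if days_elapsed <= BLANKING_DAYS:
            continue  # falls inside the blanking period, so not counted
        if duration_seconds <= MIN_DURATION_SECONDS:
            continue  # too brief to meet the outcome definition
        qualifying_days.append(days_elapsed)
    return min(qualifying_days) if qualifying_days else None

# Example: a 60-second episode at day 45 is ignored (blanking period);
# the 45-second episode at day 120 is the first counted recurrence.
print(days_to_first_recurrence(date(2008, 1, 1),
                               [(date(2008, 2, 15), 60),
                                (date(2008, 4, 30), 45)]))  # -> 120
```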

Main results. The RFA group experienced a significantly lower rate of recurrence of atrial tachyarrhythmias at 2 years compared with the antiarrhythmic drug group (54.5% vs. 72.1%, hazard ratio [HR] 0.56 [95% CI, 0.35–0.90]; P = 0.02). The difference was present but smaller for the rate of symptomatic arrhythmias (47% in the RFA group vs. 59% in the drug therapy group, HR 0.56 [95% CI, 0.33–0.95]; P = 0.03). There were no differences between treatment groups in quality of life at 1-year follow-up as measured by the EQ-5D Tariff score. No deaths or strokes were reported in either group; 4 cases (6%) of cardiac tamponade were reported in the RFA group.

Conclusion. The authors conclude that among patients with paroxysmal atrial fibrillation who had not received previous antiarrhythmic drug treatment, RFA resulted in a lower rate of recurrence of atrial tachyarrhythmias at 2 years compared with standard antiarrhythmic drug treatment. However, recurrence was frequent in both groups at 2 years.

Commentary

Atrial fibrillation is a common arrhythmia associated with an increased risk of stroke and other adverse events. Current practice guidelines recommend antiarrhythmic drugs as first-line therapy for patients with symptomatic paroxysmal atrial fibrillation. However, a significant proportion of patients are nonadherent to antiarrhythmic therapy, and antiarrhythmic drugs are only about 46% effective at 12 months in preventing recurrence of atrial fibrillation [1].

The purpose of the current study (Radiofrequency Ablation vs. Antiarrhythmic Drugs for Atrial Fibrillation Treatment-2, or RAAFT-2) was to determine whether RFA is superior to antiarrhythmic drugs as first-line therapy in patients with paroxysmal atrial fibrillation who had not been exposed to antiarrhythmic treatment. Over the past decade, various single-center trials attempted to demonstrate the superiority of RFA. Evidence from these trials suggested that RFA resulted in a lower burden of atrial fibrillation and left more patients free from atrial fibrillation. However, RFA had a higher initial cost, a higher rate of complications, and conferred no improvement in quality of life [2–5].

Despite the statistically significant lower rate of recurrence of atrial tachyarrhythmias in the RFA group, this multi-center, multi-country study has several limitations. First, selection bias may have been present, as it took 42 months to recruit 127 patients across 16 centers in 5 countries for a very common disease. Second, the use of transtelephonic monitoring to detect recurrences was unique to this trial; when the investigators excluded the transtelephonic monitor results and relied on electrocardiogram and Holter monitor results, as in previous trials, the primary outcome no longer differed between groups. Third, aspects of the study design favored RFA; for example, investigators were permitted substantial variation in the RFA procedures, whereas dosage changes were restricted in the antiarrhythmic drug group. Finally, 26 of the 61 patients (42.6%) assigned to the antiarrhythmic drug group crossed over to undergo RFA, undermining the intention-to-treat comparison.

One might ask, what is the value of this trial? It provides additional evidence about the risks of RFA. While no deaths or strokes were reported, 6 of the 66 patients (9.1%) in the RFA group had a serious adverse event, with 4 patients (6%) experiencing pericardial effusion with tamponade. This 6% tamponade rate is similar to that found in previous trials [2]. By contrast, only 3 of the 61 patients (4.9%) in the antiarrhythmic drug group experienced a serious adverse event (1 had atrial flutter with 1:1 atrioventricular conduction, and 2 had syncope).

Applications for Clinical Practice

This trial of radiofrequency ablation vs. antiarrhythmic drugs as first-line treatment of paroxysmal atrial fibrillation provides further evidence of the risks and benefits of each option. Current guidelines should be followed. However, given the high rate of nonadherence to medical therapy, selected patients should also be offered RFA as primary treatment. Patients who are offered the procedure should be made aware of the risks, and providers should incorporate patients' risk perceptions and preferences into treatment planning.

—Ka Ming Gordon Ngai, MD, MPH

References

1. Camm AJ, Lip GY, De Caterina R, et al. 2012 Focused update of the ESC guidelines for the management of atrial fibrillation: an update of the 2010 ESC guidelines for the management of atrial fibrillation. Developed with the special contribution of the European Heart Rhythm Association. Eur Heart J 2012;33:2719–47.

2. Cosedis Nielsen J, Johannessen A, Raatikainen P, et al. Radiofrequency ablation as initial therapy in paroxysmal atrial fibrillation. N Engl J Med 2012;367:1587–95.

3. Dorian P, Paquette M, Newman D, et al. Quality of life improves with treatment in the Canadian Trial of Atrial Fibrillation. Am Heart J 2002;143:984–90.

4. Wazni OM, Marrouche NF, Martin DO, et al. Radiofrequency ablation vs antiarrhythmic drugs as first-line treatment of symptomatic atrial fibrillation: a randomized trial. JAMA 2005;293:2634–40.

5. Khaykin Y, Wang X, Natale A, et al. Cost comparison of ablation versus antiarrhythmic drugs as first-line therapy for atrial fibrillation: an economic evaluation of the RAAFT pilot study. J Cardiovasc Electrophysiol 2009;20:7–12.
