Liraglutide Produces Clinically Significant Weight Loss in Nondiabetic Patients, But At What Cost?

Study Overview

Objective. To evaluate the efficacy of liraglutide for weight loss in a group of nondiabetic patients with obesity.

Design. Randomized double-blind placebo-controlled trial.

Setting and participants. This trial took place across 27 countries in Europe, North America, South America, Asia, Africa, and Australia. It was funded by Novo Nordisk, the pharmaceutical company that manufactures liraglutide. Participants were 18 years or older with a BMI of at least 30 kg/m2 (or at least 27 kg/m2 with hypertension or dyslipidemia). Patients with diabetes, those taking medications known to induce weight gain or loss, those with a history of bariatric surgery, and those with psychiatric illness were excluded. Patients with prediabetes were not excluded.
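The BMI-based inclusion rule is easy to operationalize; the snippet below is a minimal sketch of that rule for illustration only (the function name and parameter names are ours, not the investigators').

```python
def bmi_eligible(bmi_kg_m2: float, has_hypertension: bool = False,
                 has_dyslipidemia: bool = False) -> bool:
    """Illustrative check of the trial's BMI inclusion rule:
    BMI >= 30, or BMI >= 27 with hypertension or dyslipidemia."""
    if bmi_kg_m2 >= 30.0:
        return True
    return bmi_kg_m2 >= 27.0 and (has_hypertension or has_dyslipidemia)

# Example: BMI 28 with hypertension qualifies; BMI 28 alone does not.
print(bmi_eligible(28.0, has_hypertension=True))   # True
print(bmi_eligible(28.0))                          # False
```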

Intervention. Participants were randomized 2:1 (in favor of study drug) to liraglutide or placebo, stratified according to BMI category and prediabetes status. They were started at a 0.6-mg daily dose and up-titrated as tolerated to 3.0 mg over several weeks. All participants received counseling on behavioral changes to promote weight loss and were then followed for 56 weeks. A small subgroup in the liraglutide arm was randomly assigned to switch to placebo after 12 weeks on medication, both to examine the durability of the medication's effect and to evaluate for safety issues that might occur on drug discontinuation.

Main outcome measures. The study had 3 primary outcomes, all assessed at 56 weeks: weight change from baseline, the percentage of participants achieving at least 5% weight loss, and the percentage achieving at least 10% weight loss.

Secondary outcomes included change in BMI, waist circumference, markers of glycemia (hemoglobin A1c, insulin level), markers of cardiometabolic health (blood pressure, lipids, CRP), and health-related quality of life (using several validated survey measures). Adverse events were also assessed.

The investigators used an intention-to-treat analysis, comparing outcomes among all patients who were randomized and received at least 1 dose of liraglutide or placebo. For patients with missing values (eg, due to dropout), outcomes were imputed using the last-observation-carried-forward method. Changes in the primary outcomes were analyzed with an analysis of covariance model that included the baseline value of the outcome in question as a covariate. Sensitivity analyses using alternative approaches to missing data (multiple imputation, repeated-measures models) were also conducted.
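To make the analytic approach concrete, the sketch below illustrates last-observation-carried-forward imputation, derivation of the 5% and 10% responder outcomes, and a baseline-adjusted analysis of covariance on a small invented data set. It is a generic illustration of these methods, not the investigators' code; the column names and toy values are assumptions.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Toy long-format data: one row per participant per visit (weight in kg).
visits = pd.DataFrame({
    "id":     [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
    "week":   [0, 28, 56] * 4,
    "group":  ["liraglutide"] * 6 + ["placebo"] * 6,
    "weight": [110.0, 103.0, 101.0,   # participant 1, complete
               95.0, 91.0, None,      # participant 2, missing week 56
               102.0, 100.5, 100.0,   # participant 3, complete
               120.0, 118.0, 117.0],  # participant 4, complete
})

# Last observation carried forward: within each participant, fill a
# missing later visit with the most recent observed weight.
visits = visits.sort_values(["id", "week"])
visits["weight_locf"] = visits.groupby("id")["weight"].ffill()

# One row per participant with baseline and week-56 values.
wide = visits.pivot(index="id", columns="week", values="weight_locf")
wide.columns = [f"wk{int(w)}" for w in wide.columns]
wide["group"] = visits.groupby("id")["group"].first()
wide["change_kg"] = wide["wk56"] - wide["wk0"]
wide["pct_loss"] = 100 * (wide["wk0"] - wide["wk56"]) / wide["wk0"]
wide["resp5"] = wide["pct_loss"] >= 5    # lost at least 5% of baseline weight
wide["resp10"] = wide["pct_loss"] >= 10  # lost at least 10% of baseline weight

# Baseline-adjusted analysis of covariance for the continuous outcome.
ancova = smf.ols("change_kg ~ group + wk0", data=wide).fit()
print(ancova.params)
print(wide[["group", "pct_loss", "resp5", "resp10"]])
```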

Results. The trial enrolled 3731 participants, 2487 of whom were randomized to receive liraglutide and 1244 of whom received placebo. The groups were similar on measured baseline characteristics, with a mean age of 45 years, mostly female participants (78.7% in the liraglutide arm, 78.1% in the placebo arm), and a large majority identifying as white (84.7% liraglutide, 85.3% placebo). Mean baseline BMI was 38.3 kg/m2 in both groups. Although overweight patients with a BMI as low as 27 kg/m2 were eligible, they represented a small fraction of all participants (2.7% in the liraglutide group and 3.5% in the placebo group). Furthermore, although patients with overt diabetes were excluded, over half of participants qualified as having prediabetes (61.4% liraglutide, 60.9% placebo). Just over one-third (34.2% liraglutide, 35.9% placebo) had hypertension diagnosed at baseline. Study withdrawal was substantial in both groups: 71.9% of the liraglutide group and 64.4% of the placebo group remained enrolled at 56 weeks. The investigators note that withdrawal due to adverse events was more common in the liraglutide group (9.9% vs. 3.8% with placebo), whereas other reasons for withdrawing (ineffective therapy, withdrawal of consent) were more common among placebo participants.

Liraglutide participants lost significantly more weight than placebo participants at 56 weeks (mean [SD] loss, 8.0 [6.7] kg vs. 2.6 [5.7] kg). Similarly, more patients in the liraglutide group than in the placebo group achieved at least 5% weight loss (63% vs. 27%) and at least 10% weight loss (33.1% vs. 10.6%). In subgroup analyses by baseline BMI, the investigators suggested that liraglutide appeared more effective at promoting weight loss among patients starting below 40 kg/m2.

Hemoglobin A1c dropped significantly more among liraglutide participants than among placebo participants (–0.23 percentage points, P < 0.001). Similarly, fasting insulin levels dropped by 8% more in the liraglutide group at 56 weeks (P < 0.001). In keeping with the greater weight loss, markers of cardiometabolic health also improved to a greater extent among liraglutide participants, with larger decreases in systolic blood pressure (2.8 mm Hg lower with liraglutide, P < 0.001) and LDL cholesterol (2.4% difference, P = 0.002) and a larger increase in HDL cholesterol (1.9% difference, P = 0.001). By week 56, 14% of patients with prediabetes in the placebo arm had received a new diagnosis of diabetes, compared with just 4% in the liraglutide group (P < 0.001).

Quality of life scores were higher for liraglutide participants on all included measures except those related to side effects of treatment, on which placebo participants reported fewer side effects. The most common side effects reported by liraglutide participants related to GI upset, including nausea (40%), diarrhea (21%), and vomiting (16%). More serious events, including cholelithiasis (0.8%), cholecystitis (0.5%), and pancreatitis (0.2%), were also reported. Somewhat surprisingly, although liraglutide is also used to improve glycemic control in patients with diabetes, rates of reported spontaneous hypoglycemia were fairly low in the liraglutide group (1.3% vs. 1.0% with placebo).

Conclusion. Liraglutide given at a dose of 3.0 mg daily, along with lifestyle advice, produces clinically significant weight loss and improvements in glycemic and cardiometabolic parameters that are sustained through 1 full year of treatment.

Commentary

Over the past few years, the FDA has approved a growing list of medications for the treatment of obesity [1,2]. Unlike the prior mainstay of prescription weight management, phentermine, which can only be used for a few months at a time due to concerns about abuse, many of these newer medications are approved for long-term use, aligning well with the growing recognition of obesity as a chronic illness. Interestingly, most of the drugs that have emerged onto the market are not novel compounds but rather existing drugs that have been repurposed and repackaged for the indication of weight management. These “recycled” medications include Qsymia (a combination of phentermine and topiramate) [1], Contrave (naltrexone and bupropion) [2], and now Saxenda (liraglutide, also marketed as Victoza for treatment of type 2 diabetes). Liraglutide is a glucagon-like peptide-1 (GLP-1) analogue, meaning it has an effect similar to that of GLP-1, a gut hormone that stimulates insulin secretion, inhibits pancreatic beta cell apoptosis, slows gastric emptying, and decreases appetite by acting on the brain’s satiety centers [3]. For several years, endocrinologists and some internists have used liraglutide (Victoza) to help with glycemic control in patients with diabetes, with the known benefit that, unlike some other diabetes medications, it tends to promote modest weight loss [4].

In this large multicenter trial, Pi-Sunyer et al evaluated the efficacy of liraglutide at a 3.0 mg daily dose (almost twice the dose used for diabetes) for weight management. The trial utilized a strong study design, with double blinding, randomization of a subgroup to early discontinuation (to evaluate weight regain and stopping-related side effects), and, importantly, a behavior change component for both groups (albeit one of relatively low intensity, based on the limited description). Patients were followed for 56 weeks on the medication, making the “intervention” phase of the study longer than in many diet trials. Testing for a long-lasting impact on weight, while also attempting to quantify the risks associated with longer-term use of the medication, was an important contribution of this study, given that liraglutide is being marketed for long-term use.

After a year on liraglutide, participants in that group had lost around 12 lb (5.4 kg) more, on average, than those using placebo and had achieved greater improvements in cardiometabolic risk markers, with a much lower risk of developing diabetes. While these findings are promising from a clinical standpoint, it is not clear whether the moderate health impacts of this drug will be sufficient to outweigh several issues that may impede its widespread use in practice. The rate of GI side effects (nausea, vomiting, diarrhea) among liraglutide participants was fairly high, and it is worth considering whether the side effects themselves could have been driving some of the weight loss observed in that group. Furthermore, the out-of-pocket cost of this medication, when used for weight loss in nondiabetic patients, is likely to be around $1000 per month. For most patients, this high price will prohibit longer-term use of liraglutide. Even in the setting of a trial where participants faced no out-of-pocket costs, almost one-third of those in the liraglutide arm did not complete a year of treatment. On a related note, the primary analysis for this trial used a last-observation-carried-forward approach, which is somewhat concerning given that patients are likely to regain weight after stopping any weight loss intervention, pharmaceutical or otherwise. The authors do report that a range of sensitivity analyses with varying imputation techniques did not change the main conclusions of the trial.

Despite the promising findings from this trial, several important clinical questions remain. What is the durability of health effects for patients who discontinue the medication after a year? What safety concerns may arise in those who can afford to continue using liraglutide at this higher dose for several years? A 2-year follow-up study of participants from the current trial has been completed, and those results, expected soon, may help to shed light on some of these issues [5]. Cost-effectiveness evaluations and head-to-head comparisons of liraglutide with lower-cost weight management options would also be very helpful for clinicians presenting a range of treatment options to patients with obesity.

Applications for Clinical Practice

Liraglutide at a daily dose of 3.0 mg represents a new option for the treatment of patients with obesity. It should be used in conjunction with behavioral interventions that promote a more healthful diet and increased physical activity, and it may result in clinically meaningful weight loss and a decreased risk of diabetes. On the other hand, the medication is costly and associated with unpleasant GI side effects, both of which may limit patients’ ability to use it over the long term. More studies are needed to establish the durability of effects and safety beyond 1 year and to provide direct comparisons with other evidence-based weight loss tools, pharmaceutical and otherwise.

—Kristina Lewis, MD, MPH

References

1. Bray GA, Ryan DH. Update on obesity pharmacotherapy. Ann N Y Acad Sci 2014;1311:1–13.

2. Yanovski SZ, Yanovski JA. Naltrexone extended-release plus bupropion extended-release for treatment of obesity. JAMA 2015;313:1213–4.

3. de Mello AH, Pra M, Cardoso LC, de Bona Schraiber R, Rezin GT. Incretin-based therapies for obesity treatment. Metabolism 2015 May 23.

4. Prasad-Reddy L, Isaacs D. A clinical review of GLP-1 receptor agonists: efficacy and safety in diabetes and beyond. Drugs Context 2015;4:212283.

5. Siraj ES, Williams KJ. Another agent for obesity—will this time be different? N Engl J Med 2015;373:82–3.

Journal of Clinical Outcomes Management - September 2015, Vol. 22, No. 9

Fast and Furious: Rapid Weight Loss Via a Very Low Calorie Diet May Lead to Better Long-Term Outcomes Than a Gradual Weight Loss Program

Study Overview

Objective. To determine if the rate at which a person loses weight impacts long-term weight management.

Design. Two-phase, non-masked, randomized controlled trial.

Setting and participants. Study participants were recruited through radio and newspaper advertisements and word of mouth in Melbourne, Australia. Eligible participants were randomized into 2 different weight loss programs (a 12-week rapid program or a 36-week gradual program) using a computer-generated randomization sequence with a block design to account for the potential confounding factors of age, sex, and body mass index (BMI). Investigators and laboratory staff were blinded to group assignments. Inclusion criteria were healthy men and women aged 18 to 70 years who had been weight stable for 3 months and had a BMI between 30.0 and 45.0 kg/m2. Exclusion criteria included use of a very low energy diet or weight loss drugs in the previous 3 months, contraceptive use, pregnancy or lactation, smoking, current use of drugs known to affect body weight, previous weight loss surgery, and the presence of clinically significant disease (including diabetes).
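The allocation scheme described above (a computer-generated sequence using blocks within strata defined by age, sex, and BMI) can be sketched as follows. This is a generic example of stratified block randomization with an assumed block size of 4 and illustrative stratum labels, not the trial's actual randomization code.

```python
import random

def blocked_sequence(n_blocks: int, block_size: int = 4,
                     arms=("rapid", "gradual"), seed: int = 0) -> list:
    """Generate a balanced allocation sequence for one stratum:
    each block contains equal numbers of both arms in random order."""
    rng = random.Random(seed)
    sequence = []
    per_arm = block_size // len(arms)
    for _ in range(n_blocks):
        block = [arm for arm in arms for _ in range(per_arm)]
        rng.shuffle(block)
        sequence.extend(block)
    return sequence

# One independent sequence per stratum (stratum labels are illustrative).
strata = ["male_18-44_BMI30-37", "male_45-70_BMI30-37",
          "female_18-44_BMI37-45", "female_45-70_BMI37-45"]
allocation = {s: blocked_sequence(n_blocks=3, seed=i)
              for i, s in enumerate(strata)}
print(allocation["female_18-44_BMI37-45"])
```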

Intervention. Participants were randomized to the rapid or gradual weight loss program, both with the stated goal of 15% weight loss. For phase 1, participants in the rapid weight loss group replaced 3 meals a day with a commercially available meal replacement (Optifast, Nestlé Nutrition) over a period of 12 weeks (450–800 kcal/day). Participants in the gradual group replaced 1 to 2 meals daily with the same supplements and followed a diet program based on recommendations from the Australian Guide to Healthy Eating for the other meals over a period of 36 weeks (400–500 kcal/day deficit). Both groups were given comparable dietary education materials and had appointments every 2 weeks with the same dietician. Participants who achieved 12.5% or greater weight loss were eligible for phase 2. In phase 2, participants met with the same dietician at weeks 4 and 12, and then every 12 weeks until week 144. At these appointments, the dietician assessed adherence based on participants’ self-reported food intake, and participants were encouraged to partake in 30 minutes of mild- to moderate-intensity physical activity. Participants who gained weight were given a 400–500 kcal/day deficit diet.

Main outcome measures. The main outcome was mean weight loss maintained at week 144 of phase 2. Secondary outcomes were mean difference in fasting ghrelin and leptin concentrations measured at baseline, end of phase 1 (week 12 for rapid and week 36 for gradual), and at weeks 48 and 144 of phase 2. The authors examined the following changes from baseline: weight, BMI, waist and hip circumferences, fat mass, fat free mass, ghrelin, leptin, and physical activity (steps per day). A standardized protocol was followed for all measurements.

Results. Researchers evaluated 525 potential participants, of whom 321 were excluded for ineligibility, unwillingness to participate, or type 2 diabetes. Of the 204 randomized, 4 dropped out after randomization, leaving 97 in the rapid weight loss group and 103 in the gradual group during phase 1. The mean age of participants was 49.8 (SD, 10.9) years, and 25.5% were men. There were no significant demographic or weight differences between the 2 groups. The completion rate for phase 1 was 94% in the rapid program and 82% in the gradual program. Mean phase 1 weight changes in the rapid and gradual groups were –13 kg and –8.9 kg, respectively. A higher proportion of participants in the rapid weight loss group than in the gradual group lost 12.5% or more of their body weight (76/97 vs. 53/103). A total of 127 participants entered phase 2 of the study (2 in the gradual group who lost 12.5% of body weight before 12 weeks were excluded). One participant in the rapid group developed cholecystitis requiring cholecystectomy.

In phase 2, 7 participants in the rapid group withdrew due to logistical issues, psychological stress, and other health-related issues; 4 participants in the gradual group withdrew for the same reasons as well as pregnancy. Two participants from the rapid group developed cancer. All but 6 participants (5 in the rapid group, 1 in the gradual group) regained weight, and those who regained were put on a 400–500 kcal/day deficit diet. There was no significant difference in mean weight regain between the rapid and gradual participants. By week 144 of phase 2, average weight regain was 10.4 kg in the gradual group (95% confidence interval [CI], 8.4–12.4; 71.2% of lost weight regained, CI 58.1–84.3) and 10.3 kg in the rapid group (95% CI, 8.5–12.1; 70.5% of lost weight regained, CI 57.8–83.2). This result did not change significantly in the intention-to-treat analysis, in which dropouts were assumed to return to baseline weight.

During phase 2, leptin concentrations increased in both groups, and there was no difference in leptin concentrations between the 2 groups at weeks 48 and 144, nor were they significantly different from baseline at week 48. Ghrelin concentrations increased in both groups from baseline, but there was no significant difference between the groups at the end of 144 weeks.

Conclusion. In highly selected Australian participants, rapid weight loss (12 weeks) using a very low calorie meal replacement program led to greater weight loss than a gradual weight loss program (36 weeks) using a combination of meal replacements and diet recommendations. In participants who lost 12.5% or greater body weight, the speed at which participants regained weight was similar in both groups.

Commentary

Obesity rates have increased globally over the past 20 years. In the United States, Yang and Colditz found that approximately 35% of men and 37% of women are obese and approximately 40% of men and 30% of women are overweight, marking the first time that obese Americans outnumber those who are overweight [1]. Approximately 45 million Americans diet each year, and Americans spend $33 billion on weight-loss products annually. Thus, we need to determine the most effective and cost-effective weight management practices. The Purcell et al study suggests that a 12-week intervention may lead to greater weight loss and better adherence than a 36-week program, and that weight regain among participants achieving 12.5% or greater weight loss may be similar with both interventions. Although the authors did not formally evaluate cost-effectiveness, these findings suggest that a rapid weight loss program using a very low calorie diet (VLCD) may be more cost-effective, since it achieved better results in a shorter period of time. However, caution must be taken before universally recommending VLCDs to promote rapid weight loss.

Many organizations advise patients to lose weight slowly to increase their chances of reaching weight loss goals and achieving long-term success. The American Heart Association, American College of Cardiology, and The Obesity Society (AHA/ACC/TOS) guidelines for the management of overweight and obesity in adults recommend 3 types of diets for weight loss: a 1200–1800 calorie diet, depending on weight and gender; a 500 or 750 kcal/day energy deficit; or an evidence-based diet that restricts specific food types (such as high-carbohydrate foods) [2]. These guidelines also state that individuals likely need to follow lifestyle changes for more than 6 months to increase their chances of achieving weight loss goals [2]. They acknowledge that maximum weight loss is typically achieved at 6 months and is commonly followed by a plateau and gradual regain [2]. The US Preventive Services Task Force (USPSTF) also advises gradual weight loss [3].
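As a concrete illustration of the guidelines' deficit-based prescriptions, the sketch below converts an estimated maintenance energy requirement into a daily calorie target. The Mifflin-St Jeor equation and the activity factor used here are common clinical conventions assumed for illustration; they are not specified in the guideline summary above.

```python
def mifflin_st_jeor_rmr(weight_kg: float, height_cm: float,
                        age_yr: int, is_male: bool) -> float:
    """Resting metabolic rate (kcal/day) via the Mifflin-St Jeor equation."""
    rmr = 10 * weight_kg + 6.25 * height_cm - 5 * age_yr
    return rmr + 5 if is_male else rmr - 161

def daily_calorie_target(weight_kg, height_cm, age_yr, is_male,
                         activity_factor=1.3, deficit_kcal=500):
    """Estimated maintenance needs minus a fixed deficit (eg, 500 or 750 kcal/day)."""
    maintenance = mifflin_st_jeor_rmr(weight_kg, height_cm, age_yr, is_male) * activity_factor
    return round(maintenance - deficit_kcal)

# Example: a 100-kg, 170-cm, 50-year-old woman with light activity.
print(daily_calorie_target(100, 170, 50, is_male=False, deficit_kcal=500))
```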

The results of the Purcell et al study and others provide evidence that contradicts these recommendations. For example, Nackers et al found that people who lost weight quickly achieved and maintained greater weight loss than participants who lost weight gradually [4]. Further, those who lost weight rapidly were no more susceptible to regaining weight than those who lost it gradually [4]. Toubro and Astrup also found that the rate of initial weight loss had no impact on long-term weight maintenance [5]. Astrup and Rössner found that initial weight loss was positively associated with long-term weight maintenance, and that rapid weight loss resulted in improved sustained weight maintenance [6]. Finally, Wing and Phelan found that the best predictor of weight regain was the length of time weight loss was maintained, not how the weight was lost [7].

VLCDs replace regular meals with prepared formulas to promote rapid weight loss and are not recommended for patients who are only mildly obese or overweight. VLCDs have been shown to greatly reduce cardiovascular risk factors and relieve obesity-related symptoms; however, they result in more side effects than a low calorie diet [8]. Individuals who follow VLCDs must be monitored regularly to ensure they do not experience serious side effects, such as gallstones, electrolyte imbalances that can cause muscle and nerve malfunction, and an irregular heartbeat [9]. Indeed, one patient in the rapid group required a cholecystectomy. The providers in this study were obesity specialists, which may account for the strong outcomes and relatively few adverse events.

This study has many strengths. First, the researchers achieved low rates of attrition (22%, compared with about 40% in other studies) [9,10]. The study also followed participants for 2 years post-intervention and achieved high rates of weight loss in both groups. In addition to low dropout rates and long-term follow-up, the population was highly adherent to each intervention. Limitations include that the authors were highly selective in choosing participants: none had obesity-related comorbidities such as diabetes or other significant medical conditions. Individuals with these conditions may not be able to follow the dietary recommendations used in this study, limiting generalizability to the broader overweight and obese population, in which such conditions are common. Further, all participants were from Melbourne, Australia, and because the authors did not provide data on race/ethnicity, the population was likely relatively homogeneous, further limiting generalizability.

Applications for Clinical Practice

This study suggests that, in highly selected patients treated by obesity specialists, rapid weight loss through a VLCD may achieve better weight loss outcomes and adherence than a more gradual program, without resulting in greater weight regain over time. Caution is advised, however, since primary care practitioners may not have sufficient training to deliver these diets, and VLCDs carry a higher risk of gallstones and other adverse outcomes such as gout or cardiac events [11,12]. A more gradual, meal replacement-based weight loss program similar to the 36-week program in the Purcell et al study achieved relatively good outcomes in another report, with 72% of participants achieving at least 5% weight loss and 19% achieving 15% or greater weight loss (P < 0.001) [13]. Indeed, replacement of 1 to 2 meals per day has been shown to be safe and effective in primary care [14]. Current AHA/ACC/TOS guidelines are inconclusive regarding VLCDs, stating there is insufficient evidence to comment on their value or on strategies to better supervise adherence to these diets [2]. Thus, practitioners without training in the use of VLCDs should still follow USPSTF and other recommendations to promote gradual weight loss [2]. However, for patients who want to lose weight faster with a VLCD, providers can refer them to an obesity specialist, since this may promote greater adherence and long-term weight maintenance in select patients.

—Natalie L. Ricci, Mailman School of Public Health, New York, NY, and Melanie Jay, MD, MS

References

1. Yang L, Colditz GA. Prevalence of overweight and obesity in the United States, 2007-2012. JAMA Intern Med 2015 Jun 22.

2. Jensen MD, Ryan DH, Apovian CM, et al; American College of Cardiology/American Heart Association Task Force on Practice Guidelines; Obesity Society. 2013 AHA/ACC/TOS guideline for the management of overweight and obesity in adults: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines and The Obesity Society. Circulation 2014;129(25 Suppl 2):S102–38.

3. Final recommendation statement: Obesity in adults: screening and management, June 2012. U.S. Preventive Services Task Force. Available at www.uspreventiveservicestaskforce.org/Page/Document/RecommendationStatementFinal/obesity-in-adults-screening-and-management.

4. Nackers LM, Ross KM, Perri MG. The association between rate of initial weight loss and long-term success in obesity treatment: does slow and steady win the race? Int J Behav Med 2010;17:161–7.

5. Toubro S, Astrup A. Randomised comparison of diets for maintaining obese subjects’ weight after major weight loss: ad lib, low fat, high carbohydrate diet v fixed energy intake. BMJ 1997;314:29–34.

6. Astrup A, Rössner S. Lessons from obesity management programmes: greater initial weight loss improves long-term maintenance. Obes Rev 2000;1:17–9.

7. Wing RR, Phelan S. Long-term weight loss maintenance. Am J Clin Nutr 2005;82(1 Suppl):222S–225S.

8. Christensen P, Bliddal H, Riecke BF, et al. Comparison of a low-energy diet and a very low-energy diet in sedentary obese individuals: a pragmatic randomized controlled trial. Clin Obes 2011;1:31–40.

9. Anderson JW, Hamilton CC, Brinkman-Kaplan V. Benefits and risks of an intensive very-low-calorie diet program for severe obesity. Am J Gastroenterol 1992;87:6–15.

10. Ditschuneit HH, Flechtner-Mors M, Johnson TD, Adler G. Metabolic and weight-loss effects of a long-term dietary intervention in obese patients. Am J Clin Nutr 1999;69:198–204.

11. Rössner S, Flaten H. VLCD versus LCD in long-term treatment of obesity. Int J Obes Relat Metab Disord 1997;21:22–6.

12. Weinsier RL, Ullmann DO. Gallstone formation and weight loss. Obes Res 1993;1:51–6.

13. Kruschitz R, Wallner-Liebmann SJ, Lothaller H, et al. Evaluation of a meal replacement-based weight management  program in primary care settings according to the actual European clinical practice guidelines for the management of obesity in adults. Wien Klin Wochenschr 2014;126:598–603.

14. Haas WC, Moore JB, Kaplan M, Lazorick S. Outcomes from a medical weight loss program: primary care clinics versus weight loss clinics. Am J Med 2012;125:603.e7–11.

Journal of Clinical Outcomes Management - August 2015, Vol. 22, No. 8

Study Overview

Objective. To determine if the rate at which a person loses weight impacts long-term weight management.

Design. Two-phase, non-masked, randomized controlled trial.

Setting and participants. Study participants were recruited through radio and newspaper advertisements and word of mouth in Melbourne, Australia. Eligible participants were randomized into 2 different weight loss programs—a 12-week rapid program or a 36-week gradual program—using a computer-generated randomization sequence with a block design to account for the potential confounding factors of age, sex, and body mass index (BMI). Investigators and laboratory staff were blind to the group assignments. Inclusion criteria were healthy men and women aged between 18–70 years who were weight stable for 3 months and had a BMI between 30.0–45.0kg/m2. Exclusion criteria included use of a very low energy diet or weight loss drugs in the previous 3 months, contraceptive use, pregnancy or lactation, smoking, current use of drugs known to affect body weight, previous weight loss surgery, and the presence of clinically significant disease (including diabetes).

Intervention. Participants were randomized to the rapid or gradual weight loss program, both with the stated goal of 15% weight loss. For phase 1, participants in the rapid weight loss group replaced 3 meals a day with a commercially available meal replacement (Optifast, Nestlé Nutrition) over a period of 12 weeks (450–800 kcal/day). Participants in the gradual group replaced 1 to 2 meals daily with the same supplements and followed a diet program based on recommendations from the Australian Guide to Healthy Eating for the other meals over a period of 36 weeks (400–500 kcal deficit per day). Both groups were given comparable dietary education materials and had appointments every 2 weeks with the same dietician. Participants who achieved 12.5% or greater weight loss were eligible for phase 2. In phase 2, participants met with their same dietician at weeks 4 and 12, and then every 12 weeks until week 144. During appointments, the dietician assessed adherence based on participants’ self-reported food intake, and participants were encouraged to partake in 30 minutes of physical activity of mild to moderate intensity. Participants who gained weight were given a 400–500 kcal deficit diet.

Main outcome measures. The main outcome was mean weight loss maintained at week 144 of phase 2. Secondary outcomes were mean difference in fasting ghrelin and leptin concentrations measured at baseline, end of phase 1 (week 12 for rapid and week 36 for gradual), and at weeks 48 and 144 of phase 2. The authors examined the following changes from baseline: weight, BMI, waist and hip circumferences, fat mass, fat free mass, ghrelin, leptin, and physical activity (steps per day). A standardized protocol was followed for all measurements.

Results. Researchers evaluated 525 participants, of which 321 were excluded for ineligibility, being unwilling to participate, or having type 2 diabetes. Of the 204, 4 dropped out after randomization leaving 97 in the rapid weight loss group and 103 in the gradual group during phase 1. The mean age of participants was 49.8 (SD = 10.9) years with 25.5% men. There were no significant demographic or weight differences between the 2 groups. The completion rate for phase 1 was 94% in the rapid program and 82% of the gradual program. The mean phase 1 weight changes in the rapid and gradual program groups were –13 kg and –8.9 kg, respectively. A higher proportion of participants in the rapid weight loss group lost 12.5% or more of their weight than in the gradual group (76/97 vs. 53/103). 127 participants entered phase 2 of the study (2 in the gradual group who lost 12.5% body weight before 12 weeks were excluded). 1 participant in the rapid group developed cholecystitis requiring cholecystectomy.

In Phase 2, seven participants in the rapid group withdrew due to logistical issues, psychological stress, and other health-related issues; 4 participants in the gradual group withdrew for the same reasons, as well as pregnancy. 2 participants from the rapid group developed cancer. All but 6 participants regained weight (5 in rapid group, 1 in gradual group) and were put on a 400-500 kcal deficit diet. There was no significant difference in mean weight regain of the rapid and gradual participants. By week 144 of phase 2, average weight regain in the gradual group was 10.4 kg (95% confidence interval [CI] 8.4–12.4; 71.2% of lost weight regained, CI 58.1–84.3) and 10.3 kg in rapid weight loss participants (95% CI 8.5–12.1; 70.5% of lost weight regained, CI 57.8–83.2). This result did not change significantly in the intention to treat analysis where dropouts were assumed to return to baseline.

During phase 2, leptin concentrations increased in both groups, with no significant difference between the 2 groups at weeks 48 and 144 and no significant difference from baseline at week 48. Ghrelin concentrations also increased from baseline in both groups, but there was no significant difference between the groups at the end of 144 weeks.

Conclusion. In highly selected Australian participants, rapid weight loss (12 weeks) using a very low calorie meal replacement program led to greater weight loss than a gradual weight loss program (36 weeks) using a combination of meal replacements and diet recommendations. In participants who lost 12.5% or greater body weight, the speed at which participants regained weight was similar in both groups.

Commentary

Obesity rates have increased globally over the past 20 years. In the United States, Yang and Colditz found that approximately 35% of men and 37% of women are obese and approximately 40% of men and 30% of women are overweight, marking the first time that obese Americans outnumber overweight Americans [1]. Approximately 45 million Americans diet each year, and Americans spend $33 billion on weight-loss products annually. Thus, we need to determine the most effective and cost-effective weight management practices. The Purcell et al study suggests that a 12-week intervention may lead to greater weight loss and better adherence than a 36-week program, and that weight regain in participants achieving 12.5% or greater weight loss may be the same with both interventions. While the authors did not formally evaluate cost-effectiveness, these findings suggest that a rapid weight loss program using a very low calorie diet (VLCD) may be more cost-effective, since it achieved better results over a shorter period. However, caution must be taken before universally recommending VLCDs to promote rapid weight loss.

Many organizations advise patients to lose weight slowly to increase their chances of reaching weight loss goals and achieving long-term success. The American Heart Association, American College of Cardiology, and The Obesity Society (AHA/ACC/TOS) guidelines for the management of overweight and obesity in adults recommend 3 types of diets for weight loss: a 1200–1800 calorie diet, depending on weight and gender; a 500 kcal/day or 750 kcal/day energy deficit; or an evidence-based diet that restricts specific food types (such as high-carbohydrate foods) [2]. These guidelines also state that individuals likely need to follow lifestyle changes for more than 6 months to increase their chances of achieving weight loss goals [2]. They acknowledge that maximum weight loss is typically achieved at 6 months and is commonly followed by plateau and gradual regain [2]. The US Preventive Services Task Force (USPSTF) also advises gradual weight loss [3].
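
To put these deficits in perspective, a commonly cited (and admittedly approximate) heuristic is that roughly 3500 kcal corresponds to 1 lb of body weight, or about 7700 kcal per kg; under that assumption, which is not part of the guideline itself, the recommended deficits translate into the following expected early rates of loss:

```python
# Rough rule-of-thumb conversion (an approximation, not part of the
# AHA/ACC/TOS guideline): ~3500 kcal per pound, or ~7700 kcal per kg.
KCAL_PER_KG = 7700

for deficit in (500, 750):  # kcal/day deficits cited above
    kg_per_week = deficit * 7 / KCAL_PER_KG
    print(f"{deficit} kcal/day deficit: about {kg_per_week:.2f} kg/week early on")
# Prints about 0.45 and 0.68 kg/week; actual losses slow as weight falls.
```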

The results of the Purcell et al study and others provide evidence that contradicts these recommendations. For example, Nackers et al found that people who lost weight quickly achieved and maintained greater weight loss than participants who lost weight gradually [4]. Further, those who lost weight rapidly were no more susceptible to regaining weight than people who lost weight gradually [4]. Toubro and Astrup also found that the rate of initial weight loss had no impact on long-term weight maintenance [5]. Astrup and Rössner found that initial weight loss was positively associated with long-term weight maintenance, and that rapid weight loss resulted in improved sustained weight maintenance [6]. Finally, Wing and Phelan found that the best predictor of weight regain was the length of time weight loss was maintained, not how the weight was lost [7].

VLCDs replace regular meals with prepared formulas to promote rapid weight loss and are not recommended for patients who are mildly obese or overweight. VLCDs have been shown to greatly reduce cardiovascular risk factors and relieve obesity-related symptoms; however, they result in more side effects than a low calorie diet [8]. Individuals who follow VLCDs must be monitored regularly to ensure they do not experience serious side effects, such as gallstones, electrolyte imbalances that can cause muscle and nerve malfunction, and an irregular heartbeat [9]. Indeed, 1 participant in the rapid group required a cholecystectomy. The providers in this study were obesity specialists, which may account for the strong outcomes and relatively few adverse events.

This study has many strengths. First, the researchers achieved low rates of attrition (22%, compared with about 40% in other studies) [9,10]. The study also followed participants for 2 years post-intervention and achieved high rates of weight loss in both groups. In addition to low dropout rates and long-term follow-up, the population was highly adherent to each intervention. Limitations include the highly selective choice of participants: none had obesity-related comorbidities such as diabetes or other significant medical conditions. Individuals with these conditions may not be able to follow the dietary recommendations used in this study, limiting generalizability to the broader overweight and obese population, in which such comorbidities are common. Further, all participants were from Melbourne, Australia, and since the authors did not provide data on race/ethnicity, the sample may well have been relatively homogeneous, further limiting generalizability.

Applications for Clinical Practice

This study suggests that rapid weight loss through a VLCD may achieve better weight loss outcomes and adherence than more gradual programs, without higher weight regain over time, in highly selected patients treated by obesity specialists. Caution is advised, since primary care practitioners may not have sufficient training to deliver these diets, and VLCDs carry higher risks of gallstones and other adverse outcomes such as gout or cardiac events [11,12]. A more gradual program using meal replacements, similar to the 36-week program in the Purcell et al study, achieved relatively strong outcomes, with 72% of participants achieving at least 5% weight loss and 19% achieving 15% or greater (P < 0.001) [13]. Indeed, meal replacements of 1 to 2 meals per day have been shown to be safe and effective in primary care [14]. Current AHA/ACC/TOS guidelines are inconclusive on VLCDs, stating that there is insufficient evidence to comment on their value or on strategies to provide more supervision of adherence to these diets [2]. Thus, practitioners without training in the use of VLCDs should still follow USPSTF and other recommendations to promote gradual weight loss [2]. However, if patients want to lose weight faster with a VLCD, providers can refer them to an obesity specialist, since this may promote greater adherence and long-term weight maintenance in select patients.

—Natalie L. Ricci, Mailman School of Public Health, New York, NY, and Melanie Jay, MD, MS

References

1. Yang L, Colditz GA. Prevalence of overweight and obesity in the United States, 2007-2012. JAMA Intern Med 2015 Jun 22.

2. Jensen MD, Ryan DH, Apovian CM, et al; American College of Cardiology/American Heart Association Task Force on Practice Guidelines; Obesity Society. 2013 AHA/ACC/TOS guideline for the management of overweight and obesity in adults: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines and The Obesity Society. Circulation 2014;129(25 Suppl 2):S102–38.

3. Final recommendation statement: Obesity in adults: screening and management, June 2012. U.S. Preventive Services Task Force. Available at www.uspreventiveservicestaskforce.org/Page/Document/RecommendationStatementFinal/obesity-in-adults-screening-and-management.

4. Nackers LM, Ross KM, Perri MG. The association between rate of initial weight loss and long-term success in obesity treatment: does slow and steady win the race? Int J Behav Med 2010;17:161–7.

5. Toubro S, Astrup A. Randomised comparison of diets for maintaining obese subjects’ weight after major weight loss: ad lib, low fat, high carbohydrate diet v fixed energy intake. BMJ 1997;314:29–34.

6. Astrup A, Rössner S. Lessons from obesity management programmes: greater initial weight loss improves long-term maintenance. Obes Rev 2000;1:17–9.

7. Wing RR, Phelan S. Long-term weight loss maintenance. Am J Clin Nutr 2005;82(1 Suppl):222S–225S.

8. Christensen P, Bliddal H, Riecke BF, et al. Comparison of a low-energy diet and a very low-energy diet in sedentary obese individuals: a pragmatic randomized controlled trial. Clin Obes 2011;1:31–40.

9. Anderson JW, Hamilton CC, Brinkman-Kaplan V. Benefits and risks of an intensive very-low-calorie diet program for severe obesity. Am J Gastroenterol 1992;87:6–15.

10. Ditschuneit HH, Flechtner-Mors M, Johnson TD, Adler G. Metabolic and weight-loss effects of a long-term dietary intervention in obese patients. Am J Clin Nutr 1999;69:198–204.

11. Rössner S, Flaten H. VLCD versus LCD in long-term treatment of obesity. Int J Obes Relat Metab Disord 1997;21:22–6.

12. Weinsier RL, Ullmann DO. Gallstone formation and weight loss. Obes Res 1993;1:51–6.

13. Kruschitz R, Wallner-Liebmann SJ, Lothaller H, et al. Evaluation of a meal replacement-based weight management program in primary care settings according to the actual European clinical practice guidelines for the management of obesity in adults. Wien Klin Wochenschr 2014;126:598–603.

14. Haas WC, Moore JB, Kaplan M, Lazorick S. Outcomes from a medical weight loss program: primary care clinics versus weight loss clinics. Am J Med 2012;125:603.e7–11.

Expanding High Blood Pressure Screening to the Nonprimary Care Setting to Improve Early Recognition

Article Type
Changed
Fri, 03/02/2018 - 15:47
Display Headline
Expanding High Blood Pressure Screening to the Nonprimary Care Setting to Improve Early Recognition

Study Overview

Objective. To identify the prevalence and characteristics of patients identified with high blood pressure (BP) in nonprimary care compared with primary care visits.

Design. Longitudinal population-based study.

Setting and participants. This study was conducted at Kaiser Permanente Southern California (KPSC) after implementation of a system-wide change to improve hypertension care, which included comprehensive decision support tools embedded in the EHR system, including BP measurement flag alerts. Patients eligible for the study were normotensive members (BP < 140/90 mm Hg) who were older than 18 years and had been enrolled in a KPSC health plan for at least 12 months as of January 2009. A gap of < 3 months in health care coverage in the year prior was allowed. Excluded were patients with a history of elevated BP during an outpatient visit, an inpatient or outpatient diagnosis code for hypertension, or a prescription for any antihypertensive medication within 24 months prior to 1 January 2009, as well as patients with missing BP information or whose only BP measurements were from visits involving fever, preparation for surgery, or pain management. Pregnant patients and patients with missing sex or visit specialty information were also excluded. The study period was from January 2009 to March 2011.
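
The cohort definition above is essentially a set of record-level filters. A minimal sketch of that logic, assuming hypothetical field names on a patient record (the actual KPSC data model is not described in the study), might look like this:

```python
from dataclasses import dataclass

# Hypothetical record structure and field names, for illustration only;
# the study does not describe the underlying data model.
@dataclass
class Patient:
    age: int
    months_enrolled: int
    coverage_gap_months: int
    prior_elevated_bp: bool
    hypertension_dx: bool
    antihypertensive_rx_24mo: bool
    has_usable_baseline_bp: bool
    pregnant: bool
    sex_known: bool
    visit_specialty_known: bool

def eligible(p: Patient) -> bool:
    """Apply the inclusion/exclusion criteria described above."""
    return (
        p.age > 18
        and p.months_enrolled >= 12
        and p.coverage_gap_months < 3
        and not p.prior_elevated_bp
        and not p.hypertension_dx
        and not p.antihypertensive_rx_24mo
        and p.has_usable_baseline_bp
        and not p.pregnant
        and p.sex_known
        and p.visit_specialty_known
    )
```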

Measurement. BP was measured routinely at the beginning of almost every primary and nonprimary care outpatient visit. Nurses and medical assistants were trained according to a standard KPSC protocol using automated digital sphygmomanometers. According to the study protocol, in cases in which the BP was elevated (≥ 140/90 mm Hg), a second measurement was obtained. At KPSC, all staff members, including those in primary and nonprimary care, are certified in BP measurement during their initial staff orientation and recertified annually.
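
The screening rule itself is simple: flag a reading of 140/90 mm Hg or higher and obtain a repeat measurement. A small sketch of that rule follows; how a "confirmed" high BP was derived from the two readings is an assumption here, since this summary does not spell it out.

```python
THRESHOLD_SBP, THRESHOLD_DBP = 140, 90  # mm Hg

def is_elevated(sbp, dbp):
    """A reading >= 140/90 mm Hg (either component) is flagged as elevated."""
    return sbp >= THRESHOLD_SBP or dbp >= THRESHOLD_DBP

def screen_visit(first_reading, remeasure):
    """Per the protocol described above, an elevated first reading triggers a
    second measurement; treating a second elevated reading as 'confirmed' is
    an assumption for this sketch."""
    sbp, dbp = first_reading
    if not is_elevated(sbp, dbp):
        return "normal"
    sbp2, dbp2 = remeasure()  # obtain the repeat measurement
    return "confirmed elevated" if is_elevated(sbp2, dbp2) else "not confirmed"

# Example: first reading 152/88, repeat 146/92 -> "confirmed elevated"
print(screen_visit((152, 88), lambda: (146, 92)))
```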

Main outcome measure. An initial BP ≥ 140/90 mm Hg during a primary or nonprimary care outpatient visit.

Results. The mean ages of patients at baseline and at the end of follow-up for the primary outcome were 39.7 (SD, 13.9) and 41.5 (SD, 14.0) years, respectively. The total cohort (n = 1,075,522) was nearly equally representative of men (48.6%) and women (51.4%). The majority of patients (91.7%) were younger than 60 years. A large proportion of the cohort belonged to racial/ethnic minorities, with 33.1% Hispanic, 6.5% black, and 8.4% Asian/Pacific Islander.

The total cohort had 4,903,200 office visits, of which 3,996,190 were primary care visits, 901,275 were nonprimary care visits, and 5735 were visits of unknown specialty. During a mean follow-up of 1.6 years (SD, 0.8), 111,996 patients had a BP measurement ≥ 140/90 mm Hg. Of these, 92,577 (82.7%) were measured during primary care visits and 19,419 (17.3%) during nonprimary care visits. Of 15,356 patients with confirmed high BP, 12,587 (82.0%) were identified during primary care visits and 2769 (18.0%) during nonprimary care visits. Patients with a BP ≥ 140/90 mm Hg measured during nonprimary care visits were older, more likely to be male and non-Hispanic white, and less likely to be obese, but more likely to smoke or have a Framingham risk score ≥ 20%. Ophthalmology/optometry, neurology, and dermatology were the main specialties identifying a first BP ≥ 140/90 mm Hg.
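
As a quick, purely illustrative consistency check, the setting-specific percentages above can be recomputed from the raw counts:

```python
# Recompute the primary vs. nonprimary care splits reported above.
first_elevated = {"primary": 92_577, "nonprimary": 19_419}
confirmed = {"primary": 12_587, "nonprimary": 2_769}

for label, counts in (("first elevated BP", first_elevated),
                      ("confirmed high BP", confirmed)):
    total = sum(counts.values())
    shares = {k: f"{v / total:.1%}" for k, v in counts.items()}
    print(label, total, shares)
# first elevated BP: 111,996 total -> ~82.7% / ~17.3%
# confirmed high BP: 15,356 total -> ~82.0% / ~18.0%
```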

The follow-up after a first elevated BP was marginally higher in patients identified in nonprimary care than in primary care. Among patients with a first BP ≥ 140/90 mm Hg measured during a primary care visit, 60.6% had a follow-up BP within 3 months of the first high BP, 22.9% after 3 months or more, and 16.5% did not have a follow-up BP. Among individuals with a first BP ≥ 140/90 mm Hg measured during a nonprimary care visit, 64.7% had a follow-up BP within 3 months of the first high BP, 22.6% after 3 months or more, and 12.7% did not have a follow-up BP measurement.

The proportion of false-positives, defined as individuals with an initial BP ≥ 140/90 mm Hg who had a follow-up visit with a normal BP within 3 months, was the same for patients identified in primary and nonprimary care. False-positives were most frequent in individuals identified during visits in other specialty care, rheumatology, and neurology fields.

Conclusion. Expanding screening for hypertension to nonprimary care settings may improve the detection of hypertension and may contribute to better hypertension control. However, an effective system to ensure appropriate follow-up when high BP is detected is needed. Elderly non-Hispanic white male patients and those with very high BP are more likely to benefit from this screening.

Commentary

Hypertension is a common and costly health problem [1]. BP screening can identify adults with hypertension, who are at increased risk of cardiovascular and other diseases. Effective treatments are available to control high BP and reduce associated morbidity and mortality [2], but the first step is to identify patients with this largely asymptomatic disorder.

BP measurement is standard practice in primary care. However, many people do not regularly see a primary care clinician. In this study, researchers aimed to identify the prevalence and characteristics of patients identified with high BP in nonprimary care compared with primary care visits in a large integrated health care system that had implemented a system-level, multifaceted quality improvement program to improve hypertension care. Of the patients who were found to have high BP, 83% were identified in a primary care setting and 17% in a specialty care setting, and the proportions of false-positive results were comparable.

In general, the study was well conducted, and its large sample size was a strength. Limitations included the fact that the study was conducted as part of a quality improvement project in an integrated health system, and there were no control clinics.

The authors noted that a high BP reading requires adequate follow-up, and in both settings a notable proportion of patients with an elevated reading never had a follow-up BP measurement. Also, some specialties had higher false-positive rates. Quality of measurement can be maximized with regular staff training.

Applications for Clinical Practice

Expanding routine screening for hypertension to nonprimary care can potentially improve rates of detection, capturing patients who might otherwise have been missed. An effective system to ensure appropriate follow-up when high BP is detected is essential, and it is important that staff be well trained in standard measurement technique to minimize false-positives, which could lead to unnecessary resource use.

—Paloma Cesar de Sales, BN, RN, MS

References

1. American Heart Association. High blood pressure: statistical fact sheet 2013 update. Available at www.heart.org/idc/groups/heartpublic/@wcm/@sop/@smd/documents/downloadable/ucm_319587.pdf.

2. James PA, Oparil S, Carter BL, et al. 2014 evidence-based guideline for the management of high blood pressure in adults: report from the panel members appointed to the Eighth Joint National Committee (JNC 8). JAMA 2014;311:507–20.

Increase in the Incidence of Hypertensive Emergency Syndrome?

Article Type
Changed
Fri, 03/02/2018 - 14:06
Display Headline
Increase in the Incidence of Hypertensive Emergency Syndrome?

Study Overview

Objective. To investigate national trends in hospital admissions for malignant hypertension and hypertensive encephalopathy.

Study design. Retrospective cohort study. The Nationwide Inpatient Sample [1] was used to identify all hospitalizations between January 2000 and December 2011 during which a primary diagnosis of malignant hypertension, hypertensive encephalopathy, or essential hypertension occurred. Time series models were estimated for these diagnoses and also for the combined series. A piecewise linear regression analysis was done to investigate whether there were changes in the trends of these series. In addition, the researchers compared patient characteristics.
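
Piecewise (segmented) linear regression with a candidate change point can be fit by ordinary least squares. The sketch below uses synthetic annual admission counts and a hypothesized break at 2007 to illustrate the general approach; it is not the authors' actual model specification, which this summary does not detail.

```python
import numpy as np

# Piecewise linear trend with a change point at 2007, fit by OLS.
# The admission counts below are synthetic, for illustration only.
years = np.arange(2000, 2012)
rng = np.random.default_rng(0)
admissions = (2000 + 50 * (years - 2000)
              + 400 * np.clip(years - 2007, 0, None)
              + rng.normal(0, 60, years.size))

change_point = 2007
t = years - years[0]
post = np.clip(years - change_point, 0, None)  # extra slope after 2007

# Design matrix: intercept, overall trend, and post-2007 change in slope.
X = np.column_stack([np.ones_like(t), t, post])
beta, *_ = np.linalg.lstsq(X, admissions, rcond=None)
intercept, slope_pre, slope_change = beta
print(f"pre-2007 slope: {slope_pre:.1f} admissions/yr")
print(f"additional slope after 2007: {slope_change:.1f} admissions/yr")
```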

Results. There was a gradual increase in the number of hypertension-related hospitalizations from 2000 to 2011. However, after 2007, the number of admissions for malignant hypertension and hypertensive encephalopathy increased dramatically, whereas diagnoses of essential hypertension fell (P < 0.001). Mortality for malignant hypertension fell significantly after the 2007 change point (–36%, P = 0.02), but there was no significant difference in mortality for hypertensive encephalopathy or essential hypertension. The number of diagnoses and the adjusted average charges increased significantly after the change point for all hypertension series, although the increases for malignant hypertension and hypertensive encephalopathy were larger than for essential hypertension. Length of stay decreased significantly after 2007 for all series. Mean patient age and number of procedures were similar before and after the change point for all series.

Conclusion. Because the dramatic increase in the number of hospital admissions was not accompanied by the increase in morbidity that a true epidemiologic shift would be expected to produce, the rise was most likely related to a change in coding practices implemented in 2007 rather than an actual change in disease incidence.

Commentary

Hypertension is a major public health problem associated with significant morbidity and mortality [2]. In general, hypertension is asymptomatic; however, life-threatening manifestations can develop. A hypertensive emergency is a situation in which uncontrolled hypertension is associated with acute end-organ damage. Most patients presenting with a hypertensive emergency have chronic hypertension, although the disorder can present in previously normotensive individuals [3]. The 2 major emergency syndromes are malignant hypertension and hypertensive encephalopathy. They usually require hospitalization, and monitoring trends in admissions for these conditions is therefore a reasonable population-based indicator of failures in hypertension management.

In this epidemiologic study by Polgreen et al, the authors found an increasing trend in admissions for malignant hypertension and hypertensive encephalopathy, with a substantial increase after 2007. Although the authors considered the possibility that their findings represented a true change in the epidemiology of hypertensive emergencies, they concluded that this appears unlikely, as diagnoses of essential hypertension fell and the expected associated increase in morbidity was not seen. They attribute the shift to a change in the assignment of administrative billing codes: in 2007, DRG codes were changed to medical severity DRG codes [4]. The authors acknowledge that there was a recession from 2007 to 2009 that led to an increase in the number of uninsured Americans [5]. However, they noted that the uninsured were no more likely to be diagnosed with malignant hypertension or hypertensive encephalopathy than with essential hypertension, and there is no reason to think that a change in providers' management of hypertension could have been responsible.

Limitations to this study were the use of administrative data only and the lack of data on outpatient medication use.

Applications for Clinical Practice

As the authors suggest, the study raises questions regarding the use of administrative data for monitoring hypertension outcomes. Future studies are needed to examine whether the rise in diagnoses of malignant hypertension and hypertensive encephalopathy is related to coding practices or to other variables.

—Paloma Cesar de Sales, BN, RN, MS

References

1. Nationwide Inpatient Sample overview. Available at www.hcup-us.ahrq.gov/nisoverview.jsp.

2. American Heart Association. High blood pressure: statistical fact sheet 2013 Update. Available at www.heart.org.

3. Vaughan CJ, Delanty N. Hypertensive emergencies. Lancet 2000;356:411–7.

4. Centers for Medicare and Medicaid Services. Acute care hospital inpatient prospective payment system. Payment system fact sheet series. April 2013. Available at www.cms.gov/outreach-and-education/.

5. Holahan J. The 2007-09 recession and health insurance coverage. Health Aff (Millwood) 2011;30:145–52.

Issue
Journal of Clinical Outcomes Management - July 2015, VOL. 22, NO. 7
Publications
Topics
Sections

Study Overview

Objective. To investigate national trends in hospital admissions for malignant hypertension and hypertensive encephalopathy.

Study design. Retrospective cohort study. The Nation-wide Inpatient Sample [1] was used to identify all hospitalizations between January 2000 and December 2011 during which a primary diagnosis of malignant hypertension, hypertensive encephalopathy, or essential hypertension occurred. Time series models were estimated for these diagnoses and also for the combined series. A piecewise linear regression analysis was done to investigate whether there were changes in the trends of these series. In addition, the researchers compared patient characteristics.

Results. There was a gradual increase in the number of hypertension-related hospitalizations from 2000 to 2011. However, after 2007, the number of admissions for malignant hypertension and hypertensive encephalopathy increased dramatically, whereas diagnoses for essential hypertension fell (< 0.001). Mortality for malignant hypertension significantly fell after the change point of 2007 (–36%, P = 0.02) but there was no significant difference in mortality for hypertensive encephalopathy or essential hypertension. The number of diagnoses and the adjusted average charges significantly increased after the change point for all hypertension series, although the increase in malignant hypertension and hypertensive encephalopathy was higher than in essential hypertension. Length of stay significantly decreased after 2007 for all series. Mean patient age and number of procedures for all series were similar before and after the change point.

Conclusion. Since the dramatic increase in the number of hospital admissions did not result in dramatic increases in morbidity, which would have been expected, the increase was most likely related to a change in coding practices that was implemented in 2007 and not actual changes in disease incidence.

Commentary

Hypertension is a major public health problem associated with significant morbidity and mortality [2]. In general, hypertension is asymptomatic; however, life-threatening manifestations of hypertension can develop. A hypertensive emergency is a situation in which uncontrolled hypertension is associated with acute end-organ damage.  Most patients presenting with hypertensive emergency have chronic hypertension, although the disorder can present in previously normotensive individuals [3]. The 2 major emergency syndromes are malignant hypertension and hypertensive encephalopathy. They usually require hospitalization, and therefore monitoring trends in admissions for these conditions is a reasonable population-based indicator for failures related to hypertension management.

In this epidemiologic study by Polgreen et al, the authors found a increasing trend in admissions for malig-nant hypertension and hypertensive emergencies, with a substantial increase after 2007. Although the authors considered the possibility that their findings represented a true change in the epidemiology of hypertensive emergencies, they concluded that this appears unlikely, as the diagnoses of essential hypertension fell, and in addition, an expected associated increase in morbidity was not seen. They attribute the shift to a change in assignment of administrative billing codes. In 2007, DRG codes were changed to medical severity DRG codes [4]. The authors acknowledge that there was a recession from 2007 to 2009 that led to an increase in the number of uninsured Americans [5]. However, they noted that the uninsured were no more likely to be diagnosed with malignant hypertension or hypertensive encephalopathy than essential hypertension and there is no reason to think that a change provider’s management of hypertension could have been responsible.

Limitations to this study were the use of administrative data only and the lack of data on outpatient medication use.

Applications for Clinical Practice

As the authors suggest, the study raised questions regarding the use of administrative data for monitoring hypertension outcomes. Future studies are needed to examine whether the rise in diagnoses for malignant hypertension and hypertensive encephalopathy are related to coding practices or other variables.

—Paloma Cesar de Sales, BN, RN, MS

References

1. Nationwide Inpatient Sample overview. Available at www.hcup-us.ahrq.gov/nisoverview.jsp.

2. American Heart Association. High blood pressure: statistical fact sheet 2013 Update. Available at www.heart.org.

3. Vaughan CJ, Delanty N. Hypertensive emergencies. Lancet 2000;356:411–7.

4. Centers for Medicare and Medicaid Services. Acute care hospital inpatient prospective payment system. Payment system fact sheet series. April 2013. Available at www.cms.gov/outreach-and-education/.

5. Holahan J. The 2007-09 recession and health insurance coverage. Health Aff (Millwood) 2011;30:145–52.


More Evidence That a High-Fiber Diet May Prevent Type 2 Diabetes

Article Type
Changed
Tue, 05/03/2022 - 15:39
Display Headline
More Evidence That a High-Fiber Diet May Prevent Type 2 Diabetes

Study Overview

Objective. To evaluate the association between intake of dietary fiber and type 2 diabetes.

Design. Case-cohort study (EPIC-InterAct Study) nested within the large prospective cohort study EPIC (European Prospective Investigation into Cancer and Nutrition) [1]. EPIC includes participants from 10 European countries and was designed to investigate the relationships between diet, nutritional status, lifestyle, and environmental factors and the incidence of cancer and other chronic diseases [1].

Setting and participants. The EPIC-InterAct study used data from 8 European countries (Denmark, France, Germany, Italy, the Netherlands, Spain, Sweden, and the UK). The InterAct sample includes 12,403 individuals identified as having developed type 2 diabetes and a random subcohort selected from 340,234 eligible EPIC participants who were free of diabetes at baseline (n = 16,835, including 778 who went on to develop incident diabetes). Of the 28,460 participants in the EPIC-InterAct study, those with prevalent diabetes, missing diabetes status information, post-censoring diabetes, extreme energy intake (top and bottom 1%), or missing values for education level, physical activity, smoking status, or BMI were excluded, leaving a final sample of 11,559 cases and 15,258 subcohort participants. No differences were observed in baseline characteristics between the included and excluded participants.

Analysis. Country-specific hazard ratios (HRs) were estimated using Prentice-weighted Cox proportional hazards models and were pooled using a random-effects meta-analysis. Dietary intake over the 12 months before recruitment was assessed with country- or center-specific methods (food-frequency questionnaires and dietary histories) that were developed and validated locally, and the data were converted to nutrient intakes.
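
As a rough illustration of this kind of weighted Cox model, the sketch below uses the Python lifelines package with hypothetical column names and case-cohort sampling weights. lifelines does not implement the Prentice estimator itself, so weighted fitting with robust standard errors is only an approximation of the approach described here, not the authors' analysis.

import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("interact_like_data.csv")  # hypothetical analysis file
# assumed columns (all numeric): followup_years, incident_t2d (0/1),
# fiber_g_day, age, sex, bmi, caseco_weight (case-cohort sampling weight)

cols = ["followup_years", "incident_t2d", "fiber_g_day", "age", "sex", "bmi", "caseco_weight"]
cph = CoxPHFitter()
cph.fit(
    df[cols],
    duration_col="followup_years",
    event_col="incident_t2d",
    weights_col="caseco_weight",  # sampling weights for the case-cohort design
    robust=True,                  # robust (sandwich) standard errors
)
cph.print_summary()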

Main outcome measure. Incident cases of diabetes.

Main results. During a median of 10.8 years of follow-up, total fiber intake was associated with a lower risk of diabetes after adjusting for lifestyle and dietary factors (hazard ratio for the highest quartile of fiber intake [> 26 g/day] vs the lowest [< 19 g/day], 0.82; 95% confidence interval, 0.69–0.97, P for trend = 0.02). When the researchers focused on specific types of fiber, they found that people who consumed the highest amounts of cereal and vegetable fiber were 19% and 16%, respectively, less likely to develop type 2 diabetes compared with those who consumed the lowest amounts (P < 0.001). Intake of fruit fiber was not associated with risk of diabetes. When the analyses were additionally adjusted for BMI, the inverse associations between intake of fiber and diabetes were attenuated and no longer statistically significant.

The researchers also conducted a meta-analysis that included 18 other cohorts in addition to the current EPIC-InterAct study. The summary relative risks per 10 g/day increase in intake were 0.91 (95% CI, 0.87–0.96) for total fiber, 0.75 (95% CI, 0.65–0.86) for cereal fiber, 0.95 (95% CI, 0.87–1.03) for fruit fiber, and 0.93 (95% CI, 0.82–1.05) for vegetable fiber.
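
For readers interested in how such per-cohort estimates are combined, the sketch below works through one common random-effects estimator (DerSimonian-Laird) in plain numpy. The per-cohort relative risks and confidence intervals are invented, and the paper does not necessarily use this exact estimator.

import numpy as np

# Hypothetical per-cohort RRs with 95% CIs for a 10 g/day increase in fiber intake
rr = np.array([0.88, 0.95, 0.90, 0.93, 0.85])
lo = np.array([0.78, 0.86, 0.80, 0.82, 0.72])
hi = np.array([0.99, 1.05, 1.01, 1.05, 1.00])

y = np.log(rr)                               # work on the log scale
se = (np.log(hi) - np.log(lo)) / (2 * 1.96)  # SE recovered from the 95% CI
w = 1 / se**2                                # inverse-variance (fixed-effect) weights

# DerSimonian-Laird estimate of the between-study variance tau^2
q = np.sum(w * (y - np.sum(w * y) / np.sum(w)) ** 2)
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (q - (len(y) - 1)) / c)

w_re = 1 / (se**2 + tau2)                    # random-effects weights
pooled = np.sum(w_re * y) / np.sum(w_re)
pooled_se = np.sqrt(1 / np.sum(w_re))

print("Pooled RR %.2f (95%% CI, %.2f-%.2f)" % (
    np.exp(pooled),
    np.exp(pooled - 1.96 * pooled_se),
    np.exp(pooled + 1.96 * pooled_se),
))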

Conclusion. Individuals with diets rich in fiber, in particular cereal fiber, may be at lower risk of type 2 diabetes.

Commentary

The current study by the European InterAct Consortium adds to the available evidence supporting the association of dietary fiber and risk of diabetes. Higher intake of dietary fiber, especially cereal fiber, has been consistently associated with a lower risk of diabetes [2,3].

This study showed that a high intake of total fiber (primarily cereal and vegetable fiber) was associated with an 18% lower risk of type 2 diabetes after adjustment for dietary and lifestyle factors. Although the association was no longer significant after additional adjustment for BMI, the accompanying meta-analysis of 18 cohorts plus EPIC-InterAct did support an inverse association between total fiber and cereal fiber intake and risk of type 2 diabetes.

What is it about fiber that is protective? With regard to whole grains, a rich source of fiber, potential mechanisms have been identified [5], including the possible impact of improved postprandial glucose response. However, whole grains are rich in nutrients and phytochemicals, and new hypotheses for the health-protective mechanisms of whole grains beyond fiber are being proposed [6]. In addition, the beneficial effect of fiber seen in this and other studies may be partly mediated by a lower BMI. Dietary fiber may affect appetite and energy intake through a range of processes.

It should be noted that although a protective effect of whole-grain foods against type 2 diabetes is strongly supported by numerous epidemiological studies, a 2008 systematic review by the Cochrane Collaboration [4], which included 1 low-quality RCT and 11 cohort studies, concluded that the evidence was too weak to draw a definite conclusion about the preventive effect of this dietary factor.

Strengths of the study include its prospective design and large sample size. Limitations include the assessment of dietary intake only at baseline and the potential for measurement error inherent in questionnaire-based assessment. Although food-frequency questionnaires are widely used, they yield subjective estimates that are subject to recall bias, and some researchers have questioned their value in epidemiologic studies [7]. In addition, the authors note that the inverse associations for total fiber and cereal fiber intake in the meta-analysis could reflect residual confounding, as higher fiber intake has been associated with a healthier diet, lower BMI, and greater physical activity.

Applications for Clinical Practice

The prevalence of type 2 diabetes has increased rapidly in the United States during the past decades. Dietary guidelines recommend the consumption of whole grains to help prevent chronic disease. The results of the current study strengthen the evidence supporting cereal fiber as an important determinant of type 2 diabetes risk. Randomized controlled trials are needed to clarify this relationship.

References

1. Riboli E, Hunt KJ, Slimani N, et al. European Prospective Investigation into Cancer and Nutrition (EPIC): study populations and data collection. Public Health Nutr 2002;5(6B):1113–24.

2. Ye EQ, Chacko SA, Chou EL, et al. Greater whole-grain intake is associated with lower risk of type 2 diabetes, cardiovascular disease, and weight gain. J Nutr 2012;142:1304–13.

3. Huang T, Xu M, Lee A, et al. Consumption of whole grains and cereal fiber and total and cause-specific mortality: prospective analysis of 367,442 individuals. BMC Med 2015;13:59.

4. Priebe MG, van Binsbergen JJ, de Vos R, Vonk RJ. Whole grain foods for the prevention of type 2 diabetes mellitus. Cochrane Database Syst Rev 2008;(1):CD006061.

5. Slavin JL, Martini MC, Jacobs DR Jr, Marquart L. Plausible mechanisms for the protectiveness of whole grains. Am J Clin Nutr 1999;70(3 Suppl):459S–463S.

6. Fardet A. New hypotheses for the health-protective mechanisms of whole-grain cereals: what is beyond fibre? Nutr Res Rev 2010;23:65–134.

7. Shim J-S, Oh K, Kim HC. Dietary assessment methods in epidemiologic studies. Epidemiol Health 2014;36:e2014009.


For Worksite Weight Loss: Something Is Better than Nothing, but Is Something More Even Better than That?

Article Type
Changed
Fri, 03/02/2018 - 13:31
Display Headline
For Worksite Weight Loss: Something Is Better than Nothing, but Is Something More Even Better than That?

Study Overview

Objective. To compare the effectiveness of 2 employee weight management programs—a less-intense program versus a more intense, individually-targeted program with financial incentives—at producing weight loss.

Design. Cluster randomized controlled trial.

Setting and participants. The setting for the “Tailored Worksite Weight Control Programs Project” was 28 small and medium-sized employers in and around Roanoke and Richmond, Virginia. Investigators enrolled the firms after a series of conversations with worksite leaders and conducted stratified cluster randomization based on worksite size (categorizing small firms as those with 100–300 employees and medium firms as those with 301–600 employees). For worksites to be considered for inclusion, the researchers required that the employer have between 100 and 600 employees total, provide internet access to employees, provide access to a weigh-in kiosk for the weight management program, and be willing to conduct a brief health survey of all employees at baseline to facilitate identification of eligible employees. Once eligible and interested worksites were identified, there were further inclusion criteria for employees themselves. To enroll in the study, an individual employee had to be over 18 years of age, have a BMI ≥ 25 kg/m2, not be pregnant or have a medical condition that would contraindicate participation, and not already be participating in a structured weight loss program. Of 73 worksites deemed eligible upon review of local companies, 39 (53.4%) initially agreed to enroll in the study. Of those, 11 dropped out before the intervention due to lack of managerial support and/or employee interest. Within the 28 enrolled worksites that were randomized, 6258 employees were deemed eligible based on baseline screening. Of those, 1790 (29%) enrolled in the study.

Intervention. At worksites randomized to the INCENT program, study participants received an internet-based, tailored weight loss advice intervention coupled with a financial incentive. The behavioral intervention was based on social cognitive theory. It focused on advising a healthier diet and increasing physical activity to 150 min/wk. Participants in this group received daily emails from the program that were “tailored” according to their gender and their preferred features of physical activity. The modest financial incentive they received was tied to weight loss: they were paid $1 for each percent of body weight lost per month. All INCENT participants also had access to a comprehensive website where they could access information about exercise, including videos, and logs for monitoring activity and dietary intake.

At worksites randomized to the less intense LMW (“Livin’ My Weigh”) program, employees who enrolled received an intervention that also included information about diet and physical activity but did not include daily tailored emails or financial incentives. These participants did receive quarterly newsletters. Both programs were designed to last for 12 months, with a 6-month weight-loss phase followed by a 6-month weight maintenance phase. The results reported in this study focus on weight loss achieved at 6 months.

Main outcome measures. The primary outcome in this study was weight change, measured in 2 ways: mean weight loss at 6 months, and percentage of participants in each arm who had achieved clinically meaningful weight loss (defined as ≥ 5% of body weight) at 6 months. Weight change was measured using calibrated scales at kiosks that were provided within each workplace. Secondary outcomes of interest focused on behavioral measures based on self-report using repeated surveys of participants. These included change in physical activity levels (measured using 6 Behavioral Risk Factor Surveillance System (BRFSS) items, and 8 Rapid Assessment Physical Activity (RAPA) scale items), and change in dietary behaviors (using the Block Fruit-Vegetable Fiber Screener, and the Beverage Intake Questionnaire). Analysis was intention-to-treat (last observation carried forward for those who disenrolled before 6 months) and was conducted at the level of the individual participant, with generalized linear modeling including a time indicator and interaction terms for study group by time, to account for clustering effects.
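
To make the analysis plan concrete, the sketch below shows one way such an intention-to-treat, cluster-aware analysis could be set up in Python, using last-observation-carried-forward imputation and a GEE model with a group-by-time interaction. The file name and column names are hypothetical, and GEE is only one way to operationalize the generalized linear modeling with clustering that the authors describe.

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

long = pd.read_csv("worksite_weights_long.csv")  # hypothetical file: one row per participant per visit
# assumed columns: worksite_id, participant_id, visit (0 or 6 months),
# group ("INCENT" or "LMW"), weight_lb

# Intention-to-treat handling: carry the last observed weight forward
long = long.sort_values(["participant_id", "visit"])
long["weight_lb"] = long.groupby("participant_id")["weight_lb"].ffill()

# GEE with an exchangeable working correlation to account for clustering of
# employees within worksites; the group-by-visit interaction term estimates
# the between-arm difference in weight change over time
model = smf.gee(
    "weight_lb ~ C(group) * visit",
    groups="worksite_id",
    data=long,
    cov_struct=sm.cov_struct.Exchangeable(),
    family=sm.families.Gaussian(),
)
print(model.fit().summary())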

Results. Of the 1790 participants who enrolled in the study, 1581 (88%) had complete follow-up data for analysis. Study participants were predominantly female (74%), Caucasian (77%), and well educated (only 17% had a high school diploma or less). Participants differed from the overall eligible population in a couple of important ways: they were more likely to be Caucasian and more likely to be women. The groups were well balanced with respect to most baseline characteristics; however, INCENT participants were significantly younger (45.7 vs. 48.2 years) and reported having worked at their current jobs for less time (8.1 vs. 11.6 years on average) than LMW participants. A significantly higher percentage of INCENT participants also reported meeting physical activity recommendations at baseline (10.2% vs. 6.8%, P < 0.05).

At the 6-month mark, participants in both groups lost weight on average (–2.3 lbs in INCENT, and –1.3 lbs in LMW), but there were no significant between-group differences. Likewise, although slightly more participants in INCENT (14.6%) achieved a 5% weight loss compared to those in LMW (9.7%), this difference also was not statistically significant.

For self-reported outcomes, some differences did emerge between the groups. INCENT participants reported a statistically significantly larger increase in daily fruit and vegetable intake (0.2 servings, P < 0.001) and fiber intake (0.58 g, P < 0.001). The within-group change in self-reported water intake was significant for INCENT participants (an increase of 0.47 fl oz per day) but not for LMW participants; the between-group difference for this measure was not reported and was presumably not significant.

Conclusion. The authors conclude that both an individually targeted internet-based intervention and a minimal intervention can lead to improvements in activity and diet behaviors, and that both produce a modest amount of weight loss for employees.

Commentary

Given the high prevalence of overweight and obesity in the United States, employers are increasingly interested in programs that can promote more healthful behaviors and achieve weight loss for workers. Because many employers bear the health care costs of obese employees [1], and because chronic health conditions linked to obesity may reduce worker productivity through increased absenteeism [2], the financial benefits of successful employer-based weight management programs may be significant. Unfortunately, to date, many such programs have gone unevaluated. Those that have been evaluated tend to lack an empirical basis (eg, they are too brief or not based on principles of behavior change). Perhaps because of these programmatic weaknesses, evaluations have not generally shown that employer-based weight management programs are able to move the needle very much on weight [3]. It seems that having any program in place is better than having nothing at all, but it is unclear whether programs of greater intensity are able to produce better results.

In this study by Almeida and colleagues, the researchers tested whether a more intense, tailored internet-based behavioral intervention with financial incentives produced greater weight loss than a less-intense program, hypothesizing that it would. Surprisingly, they actually found very little difference between the 2 groups with respect to weight outcomes, and only minimal differences with respect to behavior change. The strengths of this study include a randomized trial design with a strong comparison group, and the use of intention-to-treat analysis. Additionally, both interventions that were tested were “real-world friendly” programs, meaning that they could, in theory, be implemented relatively easily in a wide variety of settings. This is in stark contrast to traditional behavioral weight loss programs that tend to be incredibly intense and costly in nature—probably unappealing to most employers. Despite being of lower intensity, both of the interventions in this study had a clear basis in behavior change theory, which was a strength. Additionally, the retention rates at the end of the 6-month study period were excellent, with almost 90% of participants having complete follow-up data. Although this trend was probably facilitated by having a “captive” employee population, it speaks to the ease of participating in and hosting the programs.

Although the randomized design was a definite strength of this study, the demographic imbalances between the groups at baseline (resulting from individual-level factors that could not be randomized) may have been important. INCENT participants were younger and earlier in their careers, and although the researchers conducted multivariable analyses to try to eliminate confounding, this baseline imbalance raises concerns for whether or not other unmeasured confounding variables might have been unequally distributed between the groups.

It is not surprising that neither intervention produced large amounts of weight loss. Although the interventions were evidence-based in that they were grounded in behavior change theory, the specific behaviors they targeted were not those that would be expected to yield significant weight loss. Both interventions, at least as described in this paper, seemed to put a greater emphasis on physical activity than on diet (in terms of resources available for participants). While activity is critical for health promotion and weight maintenance [4], it is probably less important than diet for achieving meaningful weight loss. This is particularly the case when one considers the level of activity that was targeted in this study (150 min/wk). Although this is the recommended level for adults to maintain health, it is not believed to be sufficient to produce weight loss [5]. In terms of the dietary recommendations described in these programs, a focus on low-fat, high-fiber diets would be expected to promote weight loss only if significant overall calorie reductions were also achieved. Without stated caloric limits (which perhaps existed, even if not mentioned in the methods section), it is hard to know how effective these diets would be at reducing weight, despite their likely positive impacts on overall health. In keeping with these points of emphasis for dietary change, the outcomes for which statistically significant differences emerged between the groups were not those that would be expected to produce differential weight loss. Fruit and vegetable intake, while important for health, will not produce weight loss independent of an overall decrease in caloric intake. The other dietary outcome that differed significantly between the groups was fiber intake, likely a correlate of the increased fruit and vegetable intake.

One of the key assumptions driving this study was that INCENT was a more intense program than LMW, and thus would produce greater weight loss. In reality, though, neither program was particularly intensive; there were no face-to-face contacts in either, for example. This issue captures a fundamental trade-off between the need to achieve results and the need for pragmatism in designing interventions. Although less intense interventions are likely to produce less weight loss (as was the case in this study), they are also far more likely to be adopted in the real world, making studies such as this one very important.

One area where the INCENT arm could have enhanced its effectiveness without sacrificing pragmatism was the size of the financial incentive used. The researchers mentioned not wanting to use large incentives in order to avoid “undermining intrinsic motivation,” a concern often raised with these kinds of interventions. Unfortunately, the “$1 per percent weight lost” reward probably went too far in the other direction, being too small to provide any additional motivation. Studies of financial incentives for weight loss suggest that weight loss increases in proportion to the size of the incentive [5], and this incentive may simply have been too small to register with most participants, particularly in this population of well-educated, high-earning adults.

Applications for Real-World Implementation

For employers and others considering how to design pragmatic weight management interventions, this study shows that even relatively simple, low-key, internet-based interventions can produce some measurable behavior change and a small amount of weight loss, which is likely meaningful when considered across a large population. On the other hand, reconfiguring the resources in such an intervention to place greater focus on caloric consumption, higher physical activity levels, and larger financial incentives might well provide more bang for the buck in trying to improve upon these results.

—Kristine Lewis, MD, MPH

References

1. Colombi AM, Wood GC. Obesity in the workplace: impact on cardiovascular disease, cost and utilization of care. Am Health Drug Benefits 2011;4:271–8.

2. Dee A, Kearns K, O’Neill C, et al. The direct and indirect costs of both overweight and obesity: a systematic review. BMC Res Notes 2014;7:242.

3. Anderson LM, Quinn TA, Glanz K, et al. The effectiveness of worksite nutrition and physical activity interventions for controlling employee overweight and obesity: a systematic review. Am J Prev Med 2010;37:340–57.

4. Swift DL, Johannsen NM, Lavie CJ, et al. The role of exercise and physical activity in weight loss and maintenance. Prog Cardiovasc Dis 2014;56:447.

5. Jeffery RW. Financial incentives and weight control. Prev Med 2012;55S:61–7.

Issue
Journal of Clinical Outcomes Management - June 2015, VOL. 22, NO. 6
Publications
Topics
Sections

Study Overview

Objective. To compare the effectiveness of 2 employee weight management programs—a less-intense program versus a more intense, individually-targeted program with financial incentives—at producing weight loss.

Design. Cluster randomized controlled trial.

Setting and participants. The setting for the “Tailored Worksite Weight Control Programs Project” was 28 small and medium-sized employers in and around Roanoke and Richmond, Virginia. Investigators enrolled the firms after a series of conversations with worksite leaders and conducted stratified cluster randomization based on worksite size (categorizing small firms as those with 100–300 employees and medium firms as those with 301–600 employees). For worksites to be considered for inclusion, the researchers required that the employer have between 100–600 employees total, provide internet access to employees, provide access to a weigh-in kiosk for the weight management program, and be willing to conduct a brief health survey of all employees at baseline to facilitate identification of eligible employees. Once eligible and interested worksites were identified, there were further inclusion criteria for employees themselves. To enroll in the study, an individual employee had to be over 18 years of age, have a BMI ≥ 25 kg/m2, not be pregnant or with a medical condition that would contraindicate participation, and not already participating in a structured weight loss program. Of 73 worksites deemed eligible upon review of local companies, 39 (53.4%) initially agreed to enroll in the study. Of those, 11 dropped out before the intervention due to lack of managerial support and/or employee interest. Within the 28 enrolled worksites that were randomized, 6258 employees were felt to be eligible based on baseline screening. Of those, 1790 (29%) enrolled in the study.

Intervention. At worksites randomized to the INCENT program, study participants received an internet-based, tailored weight loss advice intervention coupled with a financial incentive. The behavioral intervention was based in social cognitive theory. It focused on advising healthier diet and increasing physical activity levels to 150 min/wk. Participants in this group received daily emails from the program that were “tailored” according to their gender and according to their preferred features of physical activity. The modest financial incentive they received was tied to weight loss. They were paid $1 for each percent of body weight lost per month. All INCENT participants also had access to a comprehensive website where they could access information about exercise, including videos, and logs for monitoring activity and dietary intake.

At worksites randomized to the less intense LMW (“Livin’ My Weigh”) program, employees who enrolled received an intervention that also included information about diet and physical activity but did not include daily tailored emails or financial incentives. These participants did receive quarterly newsletters. Both programs were designed to last for 12 months, with a 6-month weight-loss phase followed by a 6-month weight maintenance phase. The results reported in this study focus on weight loss achieved at 6 months.

Main outcome measures. The primary outcome in this study was weight change, measured in 2 ways: mean weight loss at 6 months, and percentage of participants in each arm who had achieved clinically meaningful weight loss (defined as ≥ 5% of body weight) at 6 months. Weight change was measured using calibrated scales at kiosks that were provided within each workplace. Secondary outcomes of interest focused on behavioral measures based on self-report using repeated surveys of participants. These included change in physical activity levels (measured using 6 Behavioral Risk Factor Surveillance System (BRFSS) items, and 8 Rapid Assessment Physical Activity (RAPA) scale items), and change in dietary behaviors (using the Block Fruit-Vegetable Fiber Screener, and the Beverage Intake Questionnaire). Analysis was intention-to-treat (last observation carried forward for those who disenrolled before 6 months) and was conducted at the level of the individual participant, with generalized linear modeling including a time indicator and interaction terms for study group by time, to account for clustering effects.

Results. Of the 1790 participants who enrolled in the study, 1581 (88%) had complete follow-up data for analysis. Study participants were predominantly female (74%), Caucasian (77%), and well educated (only 17% had a high school diploma or less). Participants in the study differed from the overall eligible population for the study in a couple of important ways: they were more likely to be Caucasian and more likely to be women. The groups were well balanced with respect to most baseline characteristics, however, INCENT participants were significantly younger (45.7 vs. 48.2 years) and reported having worked at their current jobs for less time (8.1 vs. 11.6 years on average) than LMW participants. A significantly higher percentage of INCENT participants also reported meeting physical activity recommendations at baseline (10.2% vs. 6.8%, P < 0.05).

At the 6-month mark, participants in both groups lost weight on average (–2.3 lbs in INCENT, and –1.3 lbs in LMW), but there were no significant between-group differences. Likewise, although slightly more participants in INCENT (14.6%) achieved a 5% weight loss compared to those in LMW (9.7%), this difference also was not statistically significant.

For self-reported outcomes, some differences did emerge between the groups. INCENT participants reported a statistically significantly larger increase in daily fruit and vegetable intake (0.2 servings, P < 0.001) and fiber intake (0.58 g, P < 0.001). Within group change measured for self-reported water intake was significant for INCENT participants (increased by 0.47 fl oz per day), whereas it was not for LMW participants. Between group differences were presumably not significant for this measure, as they were not reported.

Conclusion. The authors conclude that both an individually targeted internet-based intervention and a minimal intervention can lead to improvements in activity and diet behaviors, and that both produce a modest amount of weight loss for employees.

Commentary

Given the high prevalence of overweight and obesity in the United States, employers are increasingly interested in programs that can promote more healthful behaviors and achieve weight loss for workers. Because many employers are faced with bearing the health care costs of obese employees [1], and because chronic health conditions linked to obesity may impact worker productivity through increased absenteeism [2], the financial benefits of successful employer-based weight management programs may be significant. Unfortunately, to date, many such programs have gone unevaluated. Those that have been evaluated tend to be lacking in empirical basis (eg, too brief and not based on principles of behavior change). Perhaps because of these programmatic weaknesses, evaluations have not generally shown that employer-based weight management programs are able to move the needle very much on weight [3].It seems that having any program in place is better than having nothing at all, but it is unclear whether programs of greater intensity are able to produce better results.

In this study by Almeida and colleagues, the researchers tested whether a more intense, tailored internet-based behavioral intervention with financial incentives produced greater weight loss than a less-intense program, hypothesizing that it would. Surprisingly, they actually found very little difference between the 2 groups with respect to weight outcomes, and only minimal differences with respect to behavior change. The strengths of this study include a randomized trial design with a strong comparison group, and the use of intention-to-treat analysis. Additionally, both interventions that were tested were “real-world friendly” programs, meaning that they could, in theory, be implemented relatively easily in a wide variety of settings. This is in stark contrast to traditional behavioral weight loss programs that tend to be incredibly intense and costly in nature—probably unappealing to most employers. Despite being of lower intensity, both of the interventions in this study had a clear basis in behavior change theory, which was a strength. Additionally, the retention rates at the end of the 6-month study period were excellent, with almost 90% of participants having complete follow-up data. Although this trend was probably facilitated by having a “captive” employee population, it speaks to the ease of participating in and hosting the programs.

Although the randomized design was a definite strength of this study, the demographic imbalances between the groups at baseline (resulting from individual-level factors that could not be randomized) may have been important. INCENT participants were younger and earlier in their careers, and although the researchers conducted multivariable analyses to try to eliminate confounding, this baseline imbalance raises concerns for whether or not other unmeasured confounding variables might have been unequally distributed between the groups.

It is not surprising that neither intervention produced large amounts of weight loss. Although the interventions were evidence-based in that they were grounded in behavior change theory, the specific behaviors focused on were not those that would be expected to yield significant weight loss. Both interventions, at least as described in this paper, seemed to put a greater emphasis on physical activity than diet (in terms of resources available for participants). While activity is critical for health promotion and weight maintenance [4], it is probably less important than diet for achieving meaningful weight loss. This is particularly the case when one considers the level of activity that was targeted in this study (150 min/wk). Although this is the recommended level for adults in order to maintain health, it is not believed to be sufficient to result in weight loss [5]. In terms of the dietary recommendations described in these programs, a focus on low-fat, high-fiber diets would be only expected to promote weight loss assuming that significant overall calorie reductions were met. Without stating specific caloric limits (which perhaps they did, even if not mentioned in the methods section), it’s hard to know how effective these diets would be at reducing weight, despite their likely positive impacts on overall health. In keeping with these points of emphasis for dietary change, the places where statistically significant differences emerged between the groups were not those that would be expected to produce differential weight loss. Fruit and vegetable intake, while important for health, will not produce weight loss independent of an overall decrease in caloric intake. The other dietary outcome that was significantly different between the groups was fiber intake, likely a correlate of the increased fruit and vegetable intake.

One of the key assumptions driving this study was that INCENT was a more intense program than LMY, and thus would produce greater weight loss. In reality, though, neither of the programs was particularly intensive—there were no face-to-face contacts in either, for example. This issue captures a fundamental trade-off between the need to achieve results and the need for pragmatism in designing interventions. Although less intense interventions are likely to produce less weight loss (as was the case in this study), they are probably also infinitely more likely to be adopted in the real world, making it very important to do studies such as this one.

One area where the INCENT arm could have enhanced its effectiveness without sacrificing pragmatism was in the size of the financial incentive used. The researchers mentioned not wanting to use large incentives in order to avoid “undermining intrinsic motivation,” a concern often raised in these kinds of interventions. Unfortunately, the “$1 per percent weight lost” reward probably went too far in the other direction, being too small to provide any kind of additional motivation. Studies of financial incentives for weight loss reveal that weight loss increases in proportion to the size of the incentive [5], and perhaps this incentive was too tiny to register with most participants, particularly in this population of well-educated, high-earning adults.

Applications for Real-World Implementation

For employers and others considering how to design pragmatic weight management interventions, this study shows that even relatively simple, low-key, internet-based interventions are able to produce some measureable behavior changes and a little bit of weight loss, which is likely meaningful when considered in a large population. On the other hand, reconfiguring the resources in such an intervention to provide greater focus on caloric consumption, higher physical activity levels, and the use of larger financial incentives might well be worth the bang for the buck in trying to improve upon these results.

—Kristine Lewis, MD, MPH

Study Overview

Objective. To compare the effectiveness of 2 employee weight management programs—a less-intense program versus a more intense, individually-targeted program with financial incentives—at producing weight loss.

Design. Cluster randomized controlled trial.

Setting and participants. The setting for the “Tailored Worksite Weight Control Programs Project” was 28 small and medium-sized employers in and around Roanoke and Richmond, Virginia. Investigators enrolled the firms after a series of conversations with worksite leaders and conducted stratified cluster randomization based on worksite size (categorizing small firms as those with 100–300 employees and medium firms as those with 301–600 employees). For worksites to be considered for inclusion, the researchers required that the employer have between 100–600 employees total, provide internet access to employees, provide access to a weigh-in kiosk for the weight management program, and be willing to conduct a brief health survey of all employees at baseline to facilitate identification of eligible employees. Once eligible and interested worksites were identified, there were further inclusion criteria for employees themselves. To enroll in the study, an individual employee had to be over 18 years of age, have a BMI ≥ 25 kg/m2, not be pregnant or with a medical condition that would contraindicate participation, and not already participating in a structured weight loss program. Of 73 worksites deemed eligible upon review of local companies, 39 (53.4%) initially agreed to enroll in the study. Of those, 11 dropped out before the intervention due to lack of managerial support and/or employee interest. Within the 28 enrolled worksites that were randomized, 6258 employees were felt to be eligible based on baseline screening. Of those, 1790 (29%) enrolled in the study.

Intervention. At worksites randomized to the INCENT program, study participants received an internet-based, tailored weight loss advice intervention coupled with a financial incentive. The behavioral intervention was grounded in social cognitive theory and focused on advising a healthier diet and increasing physical activity to 150 min/wk. Participants in this group received daily emails from the program that were “tailored” according to their gender and their preferred features of physical activity. The modest financial incentive was tied to weight loss: participants were paid $1 for each percent of body weight lost per month. All INCENT participants also had access to a comprehensive website with information about exercise, including videos, and logs for monitoring activity and dietary intake.
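
To put the size of this incentive in perspective, the brief sketch below (illustrative only; the function, variable names, and example weights are assumptions, not data or code from the trial) computes the monthly payout implied by the $1-per-percent rule.

def monthly_incentive(start_weight_lbs, end_weight_lbs, dollars_per_percent=1.0):
    # INCENT-style payout: $1 for each percent of body weight lost during the month;
    # no payout for months with weight gain.
    percent_lost = max(0.0, (start_weight_lbs - end_weight_lbs) / start_weight_lbs * 100)
    return round(percent_lost * dollars_per_percent, 2)

# A 220-lb participant who loses 2.2 lb (1% of body weight) in a month earns $1.
print(monthly_incentive(220.0, 217.8))  # 1.0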

At worksites randomized to the less intense LMW (“Livin’ My Weigh”) program, employees who enrolled received an intervention that also included information about diet and physical activity but did not include daily tailored emails or financial incentives. These participants did receive quarterly newsletters. Both programs were designed to last for 12 months, with a 6-month weight-loss phase followed by a 6-month weight maintenance phase. The results reported in this study focus on weight loss achieved at 6 months.

Main outcome measures. The primary outcome in this study was weight change, measured in 2 ways: mean weight loss at 6 months, and the percentage of participants in each arm who achieved clinically meaningful weight loss (defined as ≥ 5% of body weight) at 6 months. Weight change was measured using calibrated scales at kiosks provided within each workplace. Secondary outcomes focused on behavioral measures based on self-report, using repeated surveys of participants. These included change in physical activity levels (measured using 6 Behavioral Risk Factor Surveillance System [BRFSS] items and 8 Rapid Assessment Physical Activity [RAPA] scale items) and change in dietary behaviors (using the Block Fruit-Vegetable Fiber Screener and the Beverage Intake Questionnaire). Analysis was intention-to-treat (with the last observation carried forward for participants who disenrolled before 6 months) and was conducted at the level of the individual participant, using generalized linear models that included a time indicator and a study group-by-time interaction term and that accounted for clustering of participants within worksites.
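
As a rough illustration of this analytic approach (a sketch under assumed column names and data file, not the investigators' actual code), a generalized linear model with a group-by-time interaction can be fit with generalized estimating equations to account for clustering of participants within worksites.

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per participant per time point, with columns
# weight_lbs, arm ("INCENT" or "LMW"), time (0 = baseline, 1 = 6 months), and
# worksite_id identifying the randomized cluster.
df = pd.read_csv("worksite_weights_long.csv")

model = smf.gee(
    "weight_lbs ~ C(arm) * time",              # group, time, and group-by-time interaction
    groups="worksite_id",                      # exchangeable correlation within worksites
    data=df,
    family=sm.families.Gaussian(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
print(model.fit().summary())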

Results. Of the 1790 participants who enrolled in the study, 1581 (88%) had complete follow-up data for analysis. Study participants were predominantly female (74%), Caucasian (77%), and well educated (only 17% had a high school diploma or less). Participants differed from the overall eligible population in 2 important ways: they were more likely to be Caucasian and more likely to be women. The groups were well balanced with respect to most baseline characteristics; however, INCENT participants were significantly younger (45.7 vs. 48.2 years) and reported having worked at their current jobs for less time (8.1 vs. 11.6 years on average) than LMW participants. A significantly higher percentage of INCENT participants also reported meeting physical activity recommendations at baseline (10.2% vs. 6.8%, P < 0.05).

At the 6-month mark, participants in both groups lost weight on average (–2.3 lbs in INCENT, and –1.3 lbs in LMW), but there were no significant between-group differences. Likewise, although slightly more participants in INCENT (14.6%) achieved a 5% weight loss compared to those in LMW (9.7%), this difference also was not statistically significant.

For self-reported outcomes, some differences did emerge between the groups. INCENT participants reported a statistically significantly larger increase in daily fruit and vegetable intake (0.2 servings, P < 0.001) and fiber intake (0.58 g, P < 0.001). Within-group change in self-reported water intake was significant for INCENT participants (an increase of 0.47 fl oz per day) but not for LMW participants; between-group differences for this measure were presumably not significant, as they were not reported.

Conclusion. The authors conclude that both an individually targeted internet-based intervention and a minimal intervention can lead to improvements in activity and diet behaviors, and that both produce a modest amount of weight loss for employees.

Commentary

Given the high prevalence of overweight and obesity in the United States, employers are increasingly interested in programs that can promote more healthful behaviors and achieve weight loss for workers. Because many employers are faced with bearing the health care costs of obese employees [1], and because chronic health conditions linked to obesity may impact worker productivity through increased absenteeism [2], the financial benefits of successful employer-based weight management programs may be significant. Unfortunately, to date, many such programs have gone unevaluated. Those that have been evaluated tend to lack an empirical basis (eg, too brief and not based on principles of behavior change). Perhaps because of these programmatic weaknesses, evaluations have not generally shown that employer-based weight management programs are able to move the needle very much on weight [3]. It seems that having any program in place is better than having nothing at all, but it is unclear whether programs of greater intensity are able to produce better results.

In this study by Almeida and colleagues, the researchers tested whether a more intense, tailored internet-based behavioral intervention with financial incentives produced greater weight loss than a less intense program, hypothesizing that it would. Surprisingly, they found very little difference between the 2 groups with respect to weight outcomes, and only minimal differences with respect to behavior change. The strengths of this study include a randomized trial design with a strong comparison group and the use of intention-to-treat analysis. Additionally, both interventions tested were “real-world friendly” programs, meaning that they could, in theory, be implemented relatively easily in a wide variety of settings. This is in stark contrast to traditional behavioral weight loss programs, which tend to be highly intensive and costly and are probably unappealing to most employers. Despite their lower intensity, both interventions in this study had a clear basis in behavior change theory, which was a strength. Furthermore, retention at the end of the 6-month study period was excellent, with almost 90% of participants having complete follow-up data. Although this was probably facilitated by having a “captive” employee population, it speaks to the ease of participating in and hosting the programs.

Although the randomized design was a definite strength of this study, the demographic imbalances between the groups at baseline (resulting from individual-level factors that could not be randomized) may have been important. INCENT participants were younger and earlier in their careers, and although the researchers conducted multivariable analyses to try to eliminate confounding, this baseline imbalance raises concern that other unmeasured confounders may also have been unequally distributed between the groups.

It is not surprising that neither intervention produced large amounts of weight loss. Although the interventions were evidence-based in that they were grounded in behavior change theory, the specific behaviors targeted were not those that would be expected to yield significant weight loss. Both interventions, at least as described in this paper, seemed to put greater emphasis on physical activity than on diet (in terms of the resources available to participants). While activity is critical for health promotion and weight maintenance [4], it is probably less important than diet for achieving meaningful weight loss. This is particularly the case when one considers the level of activity targeted in this study (150 min/wk). Although this is the recommended level for adults to maintain health, it is not believed to be sufficient to result in weight loss [5]. In terms of the dietary recommendations described in these programs, a focus on low-fat, high-fiber diets would be expected to promote weight loss only if significant reductions in overall caloric intake were also achieved. Without specific caloric limits being stated (which the programs may have included, even if not mentioned in the methods section), it is hard to know how effective these diets would be at reducing weight, despite their likely positive impacts on overall health. In keeping with these points of emphasis for dietary change, the areas where statistically significant differences emerged between the groups were not those that would be expected to produce differential weight loss. Fruit and vegetable intake, while important for health, will not produce weight loss independent of an overall decrease in caloric intake. The other dietary outcome that differed significantly between the groups was fiber intake, likely a correlate of the increased fruit and vegetable intake.

One of the key assumptions driving this study was that INCENT was a more intense program than LMW, and thus would produce greater weight loss. In reality, though, neither program was particularly intensive; there were no face-to-face contacts in either, for example. This issue captures a fundamental trade-off between the need to achieve results and the need for pragmatism in designing interventions. Although less intense interventions are likely to produce less weight loss (as was the case in this study), they are also far more likely to be adopted in the real world, making studies such as this one very important.

One area where the INCENT arm could have enhanced its effectiveness without sacrificing pragmatism was the size of the financial incentive. The researchers mentioned not wanting to use large incentives in order to avoid “undermining intrinsic motivation,” a concern often raised in these kinds of interventions. Unfortunately, the “$1 per percent weight lost” reward probably erred in the opposite direction, being too small to provide meaningful additional motivation. Studies of financial incentives for weight loss suggest that weight loss increases in proportion to the size of the incentive [5], and this incentive may simply have been too small to register with most participants, particularly in this population of well-educated, higher-earning adults.

Applications for Real-World Implementation

For employers and others considering how to design pragmatic weight management interventions, this study shows that even relatively simple, low-intensity, internet-based interventions can produce measurable behavior changes and a modest amount of weight loss, which is likely meaningful when applied across a large population. On the other hand, reconfiguring the resources in such an intervention to place greater focus on caloric intake, higher physical activity targets, and larger financial incentives might well be a worthwhile investment for improving on these results.

—Kristine Lewis, MD, MPH

References

1. Colombi AM, Wood GC. Obesity in the workplace: impact on cardiovascular disease, cost and utilization of care. Am Health Drug Benefits 2011;4:271–8.

2. Dee A, Kearns K, O’Neill C, et al. The direct and indirect costs of both overweight and obesity: a systematic review. BMC Res Notes 2014;7:242.

3. Anderson LM, Quinn TA, Glanz K, et al. The effectiveness of worksite nutrition and physical activity interventions for controlling employee overweight and obesity: a systematic review. Am J Prev Med 2010;37:340–57.

4. Swift DL, Johannsen NM, Lavie CJ, et al. The role of exercise and physical activity in weight loss and maintenance. Prog Cardiovasc Dis 2014;56:447.

5. Jeffery RW. Financial incentives and weight control. Prev Med 2012;55S:61–7.

Dabigatran Adherence Among Nonvalvular Atrial Fibrillation Patients Is Associated with Pharmacist-Based Activities

Article Type
Changed
Fri, 03/02/2018 - 13:28
Display Headline
Dabigatran Adherence Among Nonvalvular Atrial Fibrillation Patients Is Associated with Pharmacist-Based Activities

Study Overview

Objective. To assess site level adherence to dabigatran among patients with atrial fibrillation and to determine if specific practices at the site level are associated with adherence.

Design. Mixed-methods study involving retrospective quantitative and cross-sectional qualitative data.

Setting and participants. 67 Veterans Health Administration sites with 20 or more patients prescribed dabigatran for nonvalvular atrial fibrillation between 2010 and 2012 were included. Among these sites, 41 participated in an inquiry about practices related to dabigatran use, with a total of 47 pharmacists interviewed across the 41 sites. From the interviews, the investigators identified 3 specific practices related to dabigatran use: appropriate patient selection (review of indications, contraindications, and prior adherence to other medications), pharmacist-driven patient education, and pharmacist-led adverse event and adherence monitoring. Sites were characterized as having adopted these specific practices or not, based on the interviews.

Main outcome measure. Dabigatran adherence, defined as a proportion of days covered (the ratio of days supplied by prescription to follow-up duration) of 80% or more. Site level variation in dabigatran adherence among the 67 sites was described, and site level adherence was adjusted for patient level and site level factors. The association between site level practices and adherence was examined with Poisson models using generalized estimating equations to account for clustering of patients within sites.
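
As a simple illustration of this adherence measure (a sketch with assumed inputs, not the study's code), the proportion of days covered and the 80% threshold can be computed as follows.

def proportion_of_days_covered(days_supplied, follow_up_days):
    # PDC = total days of dabigatran supplied during follow-up / days of follow-up,
    # capped at 1.0 so overlapping fills cannot exceed complete coverage.
    return min(sum(days_supplied) / follow_up_days, 1.0)

def is_adherent(pdc, threshold=0.80):
    # Patients were classified as adherent at a PDC of 80% or more.
    return pdc >= threshold

# Example: three 90-day fills over 365 days of follow-up gives PDC = 270/365 = 0.74,
# just below the adherence threshold.
pdc = proportion_of_days_covered([90, 90, 90], 365)
print(round(pdc, 2), is_adherent(pdc))  # 0.74 False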

Main results. A total of 67 sites with 4863 patients prescribed dabigatran for atrial fibrillation were included in the analysis. There was wide variation among sites in adherence rate, with a range of 42% to 93% (median, 74%). Sites were categorized as high performing if their site level adherence rate was at least 74%. Among the 41 sites that participated in the qualitative study that defined the exposure variables, appropriate patient selection was performed at 31 sites, pharmacist-led education was provided at 30 sites, and pharmacist-led monitoring at 28 sites. The duration of monitoring varied among sites, with 18 of 28 monitoring for 3 to 6 months while the remaining sites monitored indefinitely. Site level practices differed between low and high performing sites, with high performing sites more likely to have adopted appropriate patient selection with review of adherence (83% vs. 65% in low-performing sites), pharmacist-driven education (83% vs. 59%), and pharmacist-led adverse event monitoring (92% vs. 35%). After adjustment for patient level and site level characteristics, the associations between adherence and appropriate patient selection (adjusted risk ratio [RR], 1.14; 95% confidence interval [CI], 1.05–1.25) and pharmacist-led adverse event monitoring (RR, 1.25; 95% CI, 1.11–1.41) remained.
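
The adjusted risk ratios above reflect Poisson regression with generalized estimating equations; the sketch below (hypothetical file, column names, and covariates, not the authors' analysis code) shows how such a model could be fit and exponentiated to obtain risk ratios with 95% confidence intervals.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical patient-level data: adherent (1 if PDC >= 80%), indicators for the
# site-level practices of interest, a patient-level covariate, and site_id for clustering.
df = pd.read_csv("dabigatran_sites.csv")

model = smf.gee(
    "adherent ~ patient_selection + pharmacist_monitoring + age",
    groups="site_id",                          # patients clustered within VA sites
    data=df,
    family=sm.families.Poisson(),              # log link, so exponentiated coefficients are risk ratios
    cov_struct=sm.cov_struct.Exchangeable(),
)
result = model.fit()
print(np.exp(result.params))       # adjusted risk ratios
print(np.exp(result.conf_int()))   # 95% confidence intervals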

Conclusion. There is wide variability in dabigatran adherence among patients with atrial fibrillation at different VA sites. Site level pharmacist-based practices are associated with the level of adherence at the sites.

Commentary

Studies have demonstrated that in the clinical trial setting, dabigatran is as effective as warfarin for stroke prevention among patients with atrial fibrillation and is associated with a lower risk of major hemorrhage [1]. However, outside of clinical trials, the effectiveness of a treatment regimen depends heavily on whether the treatment is adhered to. In contrast with warfarin, for which adherence is regularly tracked through monitoring of blood levels and clinic visits, dabigatran does not require laboratory monitoring, and adherence may therefore go unmonitored. A recent study found that poorer adherence likely contributes to increased risk of stroke and death among patients on dabigatran [2]. The current study examines variation in adherence rates at the site level and identifies factors associated with better adherence. The findings suggest that careful patient selection, including examination of prior adherence to warfarin and other medications, and pharmacist-led adverse event and adherence monitoring are practices associated with better adherence. These are potentially important findings that may affect care for patients with atrial fibrillation.

These results need to be interpreted cautiously because of the limitations of the observational study design. Several factors should be considered when interpreting the findings. First, although the VA is a comprehensive system of care, veterans often use care outside of the VA, including obtaining medications outside the VA [3]. Because concurrent use of non-VA care is common, assessing medication adherence using only VA records may be incomplete and may erroneously categorize patients as having low adherence. Second, the number of patients on dabigatran per facility was rather small and quite variable, with some sites having very few patients. Although the investigators excluded sites with fewer than 20 patients on dabigatran, the variability in dabigatran use may reflect site-specific factors, some of which may affect patient selection at the site level and ultimately affect outcomes. Finally, the interviews may reflect the views of only 1 or 2 pharmacists at each site, and thus may not truly reflect practices at the site throughout the period during which patients were selected and outcomes defined.

Applications for Clinical Practice

It is tempting to conclude that instituting pharmacist-based activities in patient selection and adverse event monitoring will lead to better adherence to dabigatran and thus improved patient outcomes. Given the study's limitations, however, a follow-up intervention study in which sites are randomized to institute specific practices for dabigatran use will be very important for demonstrating the impact of these interventions definitively. Also, as the use of dabigatran and other novel anticoagulants becomes more prevalent [4], a follow-up study including a larger sample of patients may be valuable to determine whether the conclusions are upheld.

—William Hung, MD, MPH

References

1. Connolly SJ, Ezekowitz MD, Yusuf S, et al. Dabigatran versus warfarin in patients with atrial fibrillation. N Engl J Med 2009;361:1139–50.

2. Shore S, Carey EP, Turakhia MP, et al. Adherence to dabigatran therapy and longitudinal patient outcomes: insights from the veterans health administration. Am Heart J 2014;167:810–7.

3. Hynes DM, Koelling K, Stroupe K, et al. Veterans’ access to and use of Medicare and Veterans Affairs health care. Med Care 2007;45:214–23.

4. Boyle AM. VA, army clinicians rapidly increase prescribing of novel anticoagulants. US Med Feb 2014. Available at www.usmedicine.com.

Nurse Case Management Fails to Yield Improvements in Blood Pressure and Glycemic Control

Article Type
Changed
Tue, 05/03/2022 - 15:40
Display Headline
Nurse Case Management Fails to Yield Improvements in Blood Pressure and Glycemic Control

Study Overview

Objective. To determine the effectiveness of a nurse-led, telephone-delivered behavioral intervention for diabetes (DM) and hypertension (HTN) versus an attention control within primary care community practices.

Study design. A 9-site, 2-arm randomized controlled trial.

Setting and participants. Study participants were recruited from 9 community practices within the Duke Primary Care Research Consortium. The practices were chosen because they traditionally operate outside of the academic context. Subjects were required to have both type 2 DM and HTN, as indicated by their medications and confirmed by administrative data as well as patient self-reporting. Participants had to have been seen at participating practices for at least 1 year and have poorly controlled DM (indicated by most recent A1c ≥ 7.5%), but they were not required to have poorly controlled HTN. Exclusion criteria included fewer than 1 primary care clinic visit during the previous year, serious comorbid illness, type 1 diabetes, inability to receive a telephone intervention in English, residence in a nursing home, and participation in another hypertension or diabetes study [1]. Participants were randomly assigned using a computer-generated randomization sequence [1] to either the intervention or control groups at a 1:1 ratio, stratified by clinic and baseline blood pressure (BP) control.
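
A minimal sketch of this kind of allocation appears below; the participant structure, stratum definition, and seed are assumptions for illustration, not the trial's actual randomization program.

import random

def stratified_randomize(participants, arms=("intervention", "control"), seed=2015):
    # 1:1 assignment within each stratum defined by clinic and baseline BP control,
    # mimicking a computer-generated stratified randomization sequence.
    rng = random.Random(seed)
    strata, assignments = {}, {}
    for p in participants:
        strata.setdefault((p["clinic"], p["bp_controlled"]), []).append(p)
    for members in strata.values():
        rng.shuffle(members)                            # random order within the stratum
        for i, p in enumerate(members):
            assignments[p["id"]] = arms[i % len(arms)]  # alternate arms after shuffling
    return assignments

# Example with two hypothetical participants from the same clinic and BP stratum
example = [
    {"id": 1, "clinic": "A", "bp_controlled": True},
    {"id": 2, "clinic": "A", "bp_controlled": True},
]
print(stratified_randomize(example))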

Intervention. A single nurse with extensive experience in case management delivered both the behavioral intervention and attention control by telephone. In both arms, calls were conducted once every 2 months over a 24-month period.

The calls in the intervention arm consisted of tailored behavior-modifying techniques according to patient barriers. This content was divided into a series of modules relevant to behaviors associated with improving control of BP or blood sugar, including physical activity, weight reduction, sodium intake, smoking cessation, medication adherence, and others. These modules were scheduled according to patient needs (based on certain parameters such as high body mass index or use of insulin) and preferences [1].

The calls in the attention control were not tailored but rather consisted of didactic health-related information unrelated to HTN or DM (eg, flu shots, skin cancer prevention). This content was also highly scripted and designed to limit the potential for interaction between the nurse and patient.

Main outcome measures. A1c and systolic blood pressure (SBP) were primary outcomes. Key secondary outcomes were diastolic blood pressure (DBP), overall BP control, weight, physical activity, self-efficacy, and medication adherence. Study staff obtained measurements at baseline and 6, 12, and 24 months [1].

Results. The researchers assessed 2601 patients for eligibility and excluded 2224. Most patients were excluded for not meeting inclusion criteria (n = 1156), in particular because of improved HbA1c control (n = 983), and 1064 declined to participate. They randomized 377 patients—193 to the intervention arm and 184 to the attention control arm. Participants had an average age of 58.7 years, 49.1% had an education level of high school or less, 50.1% were non-white, and 54.9% were unemployed/retired. Patient characteristics in the intervention and control arms were similar at baseline. Seventy-eight percent of patients completed the 12-month follow-up and 70% (263) reached the 24-month endpoint. Patients in the intervention arm completed 78% of scheduled calls while patients in the control group completed 81%.

After adjusting for stratification variables, the estimated mean A1c and SBP were similar between arms at 24 months (intervention 0.1% higher than control, 95% CI −0.3% to 0.5%, P = 0.50 for A1c; intervention 0.9 mm Hg lower than control, 95% CI −5.4 to 3.5, P = 0.69 for SBP). There were also no significant differences between arms in mean A1c or SBP at 6 or 12 months. However, A1c levels did improve within each arm at the end of the study, with the intervention group improving by approximately 0.5% and the control group improving by approximately 0.6%. In terms of secondary outcomes, there were no significant differences between arms in DBP, weight, physical activity, or BP control rates throughout the 2-year study period.

Conclusion. Overall, the intervention and control groups did not differ significantly in terms of A1c, SBP, or any of the secondary outcomes at any point during the 2-year study.

Commentary

The prevalence of type 2 diabetes and its comorbidities (such as hypertension and obesity) has increased due to a variety of factors, including an aging population and an increasingly sedentary lifestyle. Several nurse management programs for DM and HTN have been shown to be efficacious in reducing blood sugar levels [2–4] and promoting BP control [5,6]. However, these interventions were conducted in tightly controlled academic settings, and it is unclear how well these programs may translate into community settings. The aim of this study was to test the effectiveness of a nurse-led behavioral telephone intervention for the comanagement of DM and HTN within non–academically affiliated community practices. Results indicated no significant differences between the intervention and control groups for A1c levels or SBP at any point during the 2-year study, but A1c levels did improve for both arms.

Despite being a negative study, it is a unique and important contribution to the literature. It is the only trial to date that has tested the effectiveness of a nurse management intervention targeting both DM and HTN in a real-world, community setting. This novel approach is supported by data suggesting that BP control is actually more cost-effective than intensive glycemic control in treating patients with type 2 diabetes [7]. There were several strengths to the study design, including the use of intention-to-treat analysis, stratified randomization, a diverse patient population, and blinding of the study staff who took BP and A1c measurements. Furthermore, a single nurse conducted all telephone calls, ensuring that differences in counseling skill would not affect the results. The few weaknesses of the study included the fact that neither the nurse who delivered the intervention nor the patients could be blinded to treatment allocation and that participant income was not reported.

The reasons for the negative outcomes of this study are unclear. The authors claim that similar interventions within academic settings have been shown to be effective and speculate that time and financial pressures of community practices may be reasons that the intervention was not successful. However, the “successful” interventions that they cite were quite different from and more intensive than this intervention. For instance, many of these studies used at least 1 call per month [3,4,8], and one even conducted several calls each week [3]. Furthermore, a DM study conducted by Blackberry et al in a community setting with less than 1 call per month (8 calls over 18 months) similarly failed to produce significant results [9], and therefore more frequent calls may be necessary in DM and HTN interventions. In a systematic review, Eakin et al demonstrated that 12 or more calls in a 6- to 12-month period were associated with better outcomes in physical activity and diet interventions [10], and this may also translate to closely related DM and HTN interventions.

In addition to the infrequent calls, this intervention also lacked communication and integration with patients’ primary care teams. Several studies have demonstrated that integration with primary care teams can improve outcomes in DM and HTN interventions [11,12], and nearly all of the successful studies cited by the authors also included at least some form of communication with patients’ primary care providers (PCPs) [2–4,5,8]. In many of these studies the nurse also had prescribing rights to alter medications [2,3,5]. The nurse in this study met monthly with an expert team of clinicians to discuss patient issues but did not communicate directly with any of the patients’ PCPs [1]. The authors acknowledge that this lack of integration may have contributed to their negative results and point to the fact that it is harder to integrate interventions within community practices that often lack internal integration. However, Walsh, Harris, and Roberts demonstrated that integration between primary and secondary care teams was both feasible and effective for a diabetes initiative within community practices [13].

An additional important feature not present in this intervention was self-monitoring of BP levels. Home self-monitoring of BP has been demonstrated to significantly improve BP levels [14], and 2 of the successful studies in academic settings cited by the authors also included a BP self-monitoring component [5,6]. In one of these studies [6], Bosworth et al conducted a 2 × 2 randomized trial to improve HTN control in which the arms consisted of (a) usual care, (b) bimonthly nurse administered telephone intervention only (this arm was highly similar to the intervention arm in this study), (c) BP monitoring 3 times a week only, and (d) a combination of the telephone intervention with the BP monitoring. Interestingly, the only arm that was successful relative to usual care was the combination of the telephone intervention and BP self-monitoring; the arm consisting only of bi-monthly telephone calls (very similar to this intervention) failed despite the study taking place in an academic setting (it was also less effective than BP monitoring only). Thus, the addition of self-monitoring to a nurse case management telephone intervention can achieve better results.

Applications for Clinical Practice

A telephone-based intervention delivered by a trained nurse for co-management of DM and HTN was not more effective than an attention control delivered by the same nurse in a community setting. This may have been due to several factors, including low intensity marked by less than 1 call per month, a lack of integration with other members of the primary care team, and lack of a BP self-monitoring component. Future studies are needed to determine the optimal type and duration of nurse case management interventions targeting glucose and BP control for diabetic patients in community settings.

—Sandeep Sikerwar, BA, and Melanie Jay, MD, MS

References

1. Crowley MJ, Bosworth HB, Coffman CJ, et al. Tailored Case Management for Diabetes and Hypertension (TEACH-DM) in a community population: study design and baseline sample characteristics. Contemp Clin Trials 2013;36:298–306.

2. Aubert RE, Herman WH, Waters J, et al. Nurse case management to improve glycemic control in diabetic patients in a health maintenance organization. A randomized, controlled trial. Ann Intern Med 1998;129:605–12.

3. Thompson DM, Kozak SE, Sheps S. Insulin adjustment by a diabetes nurse educator improves glucose control in insulin-requiring diabetic patients: a randomized trial. CMAJ 1999;161:959–62.

4. Weinberger M, Kirkman MS, Samsa GP, et al. A nurse-coordinated intervention for primary care patients with non-insulin-dependent diabetes mellitus: impact on glycemic control and health-related quality of life. J Gen Intern Med 1995;10:59–66.

5. Bosworth HB, Powers BJ, Olsen MK, et al. Home blood pressure management and improved blood pressure control: results from a randomized controlled trial. Arch Intern Med 2011;171:1173–80.

6. Bosworth HB, Olsen MK, Grubber JM, et al. Two self-management interventions to improve hypertension control: a randomized trial. Ann Intern Med 2009;151:687–95.

7. CDC Diabetes Cost-effectiveness Group. Cost-effectiveness of intensive glycemic control, intensified hypertension control, and serum cholesterol level reduction for type 2 diabetes. JAMA 2002;287:2542–51.

8. Mons U, Raum E, Krämer HU, et al. Effectiveness of a supportive telephone counseling intervention in type 2 diabetes patients: randomized controlled study. PLoS One 2013;8:e77954.

9. Blackberry ID, Furler JS, Best JD, et al. Effectiveness of general practice based, practice nurse led telephone coaching on glycaemic control of type 2 diabetes: the Patient Engagement and Coaching for Health (PEACH) pragmatic cluster randomised controlled trial. BMJ 2013;347:f5272.

10. Eakin EG, Lawler SP, Vandelanotte C, Owen N. Telephone interventions for physical activity and dietary behavior change: a systematic review. Am J Prev Med 2007;32:419–34.

11. Shojania KG, Ranji SR, McDonald KM, et al. Effects of quality improvement strategies for type 2 diabetes on glycemic control: a meta-regression analysis. JAMA 2006;296:427–40.

12. Katon WJ, Lin EHB, Von Korff M, et al. Collaborative care for patients with depression and chronic illnesses. N Engl J Med 2010;363:2611–20.

13. Walsh JL, Harris BHL, Roberts AW. Evaluation of a community diabetes initiative: Integrating diabetes care. Prim Care Diabetes 2014 Dec 11.

14. Halme L, Vesalainen R, Kaaja M, Kantola I. Self-monitoring of blood pressure promotes achievement of blood pressure target in primary health care. Am J Hypertens 2005;18:1415–20.

Study Overview

Objective. To determine the effectiveness of a nurse-led, telephone-delivered behavioral intervention for diabetes (DM) and hypertension (HTN) versus an attention control within primary care community practices.

Study design. A 9-site, 2-arm randomized controlled trial.

Setting and participants. Study participants were recruited from 9 community practices within the Duke Primary Care Research Consortium. The practices were chosen because they traditionally operate outside of the academic context. Subjects were required to have both type 2 DM and HTN, as indicated by their medications and confirmed by administrative data as well as patient self-reporting. Participants had to have been seen at participating practices for at least 1 year and have poorly controlled DM (indicated by most recent A1c ≥ 7.5%), but they were not required to have poorly controlled HTN. Exclusion criteria included no primary care clinic visits during the previous year, serious comorbid illness, type 1 diabetes, inability to receive a telephone intervention in English, residence in a nursing home, and participation in another hypertension or diabetes study [1]. Participants were randomly assigned using a computer-generated randomization sequence [1] to either the intervention or control group at a 1:1 ratio, stratified by clinic and baseline blood pressure (BP) control.

Intervention. A single nurse with extensive experience in case management delivered both the behavioral intervention and attention control by telephone. In both arms, calls were conducted once every 2 months over a 24-month period.

The calls in the intervention arm consisted of tailored behavior-modifying techniques according to patient barriers. This content was divided into a series of modules relevant to behaviors associated with improving control of BP or blood sugar, including physical activity, weight reduction, sodium intake, smoking cessation, medication adherence, and others. These modules were scheduled according to patient needs (based on certain parameters such as high body mass index or use of insulin) and preferences [1].

The calls in the attention control were not tailored but rather consisted of didactic health-related information unrelated to HTN or DM (eg, flu shots, skin cancer prevention). This content was also highly scripted and designed to limit the potential for interaction between the nurse and patient.

Main outcome measures. A1c and systolic blood pressure (SBP) were primary outcomes. Key secondary outcomes were diastolic blood pressure (DBP), overall BP control, weight, physical activity, self-efficacy, and medication adherence. Study staff obtained measurements at baseline and 6, 12, and 24 months [1].

Results. The researchers assessed 2601 patients for eligibility and excluded 2224. Of those excluded, 1156 did not meet inclusion criteria, most commonly because A1c control had improved (n = 983), and 1064 declined to participate. They randomized 377 patients—193 to the intervention arm and 184 to the attention control arm. Participants had an average age of 58.7 years, 49.1% had an education level of high school or less, 50.1% were non-white, and 54.9% were unemployed/retired. Patient characteristics in the intervention and control arms were similar at baseline. Seventy-eight percent of patients completed the 12-month follow-up and 70% (263) reached the 24-month endpoint. Patients in the intervention arm completed 78% of scheduled calls, while patients in the control group completed 81%.

After adjusting for stratification variables, the estimated mean A1c and SBP were similar between arms at 24 months (intervention 0.1% higher than control, 95% CI −0.3% to 0.5%, P = 0.50 for A1c; intervention 0.9 mm Hg lower than control, 95% CI −5.4 to 3.5, P = 0.69 for SBP). There were also no significant differences between arms in mean A1c or SBP at 6 or 12 months. However, A1c levels did improve within each arm by the end of the study, with the intervention group improving by approximately 0.5% and the control group improving by approximately 0.6%. In terms of secondary outcomes, there were no significant differences between arms in DBP, weight, physical activity, or BP control rates throughout the 2-year study period.
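To make concrete what "adjusting for stratification variables" involves, a minimal sketch of such an adjusted between-arm comparison is shown below. It assumes a simple linear model and hypothetical column and file names; the authors' actual analysis (which may have used a mixed model for the repeated measurements) could differ.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical layout: one row per patient, with 24-month A1c, treatment arm,
# and the 2 stratification variables (clinic and baseline BP control).
df = pd.read_csv("teach_dm_outcomes.csv")  # hypothetical file name

# Linear model of 24-month A1c on arm, adjusted for the stratification variables.
fit = smf.ols("a1c_24mo ~ C(arm) + C(clinic) + C(bp_controlled_baseline)", data=df).fit()

print(fit.params)      # the C(arm) coefficient estimates the adjusted between-arm difference
print(fit.conf_int())  # with its 95% confidence interval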

Conclusion. Overall, the intervention and control groups did not differ significantly in terms of A1c, SBP, or any of the secondary outcomes at any point during the 2-year study.

Commentary

The prevalence of type 2 diabetes and its comorbidities (such as hypertension and obesity) has increased due to a variety of factors, including an aging population and an increasingly sedentary lifestyle. Several nurse management programs for DM and HTN have been shown to be efficacious in reducing blood sugar levels [2–4] and promoting BP control [5,6]. However, these interventions were conducted in tightly controlled academic settings, and it is unclear how well these programs may translate into community settings. The aim of this study was to test the effectiveness of a nurse-led behavioral telephone intervention for the comanagement of DM and HTN within non–academically affiliated community practices. Results indicated no significant differences between the intervention and control groups for A1c levels or SBP at any point during the 2-year study, but A1c levels did improve for both arms.

Despite being a negative study, it is a unique and important contribution to the literature. It is the only trial to date to test the effectiveness of a nurse management intervention targeting both DM and HTN in a real-world, community setting. This novel approach is supported by data suggesting that BP control is actually more cost-effective than intensive glycemic control in treating patients with type 2 diabetes [7]. The study design had several strengths, including the use of intention-to-treat analysis, stratified randomization, a diverse patient population, and blinding of the study staff who took BP and A1c measurements. Furthermore, a single nurse conducted all telephone calls, ensuring that differences in counseling skill would not affect the results. Weaknesses include the fact that the nurse who delivered the intervention (as well as the patients) could not be blinded to treatment allocation and that the income of study participants was not reported.

The reasons for the negative outcomes of this study are unclear. The authors claim that similar interventions within academic settings have been shown to be effective and speculate that time and financial pressures of community practices may be reasons that the intervention was not successful. However, the "successful" interventions that they cite were quite different from and more intensive than this intervention. For instance, many of these studies used at least 1 call per month [3,4,8], and one even conducted several calls each week [3]. Furthermore, a DM study conducted by Blackberry et al in a community setting with fewer than 1 call per month (8 calls over 18 months) similarly failed to produce significant results [9], suggesting that more frequent calls may be necessary in DM and HTN interventions. In a systematic review, Eakin et al demonstrated that 12 or more calls in a 6- to 12-month period were associated with better outcomes in physical activity and diet interventions [10], and this may also translate to closely related DM and HTN interventions.

In addition to the infrequent calls, this intervention also lacked communication and integration with patients’ primary care teams. Several studies have demonstrated that integration with primary care teams can improve outcomes in DM and HTN interventions [11,12], and nearly all of the successful studies cited by the authors also included at least some form of communication with patients’ primary care providers (PCPs) [2–5,8]. In many of these studies the nurse also had prescribing rights to alter medications [2,3,5]. The nurse in this study met monthly with an expert team of clinicians to discuss patient issues but did not communicate directly with any of the patients’ PCPs [1]. The authors acknowledge that this lack of integration may have contributed to their negative results, noting that it is harder to integrate interventions within community practices, which often lack internal integration. However, Walsh, Harris, and Roberts demonstrated that integration between primary and secondary care teams was both feasible and effective for a diabetes initiative within community practices [13].

An additional important feature not present in this intervention was self-monitoring of BP levels. Home self-monitoring of BP has been demonstrated to significantly improve BP levels [14], and 2 of the successful studies in academic settings cited by the authors also included a BP self-monitoring component [5,6]. In one of these studies [6], Bosworth et al conducted a 2 × 2 randomized trial to improve HTN control in which the arms consisted of (a) usual care, (b) a bimonthly nurse-administered telephone intervention only (this arm was highly similar to the intervention arm in this study), (c) BP monitoring 3 times a week only, and (d) a combination of the telephone intervention with the BP monitoring. Interestingly, the only arm that was successful relative to usual care was the combination of the telephone intervention and BP self-monitoring; the arm consisting only of bimonthly telephone calls (very similar to this intervention) failed despite the study taking place in an academic setting (it was also less effective than BP monitoring only). Thus, adding self-monitoring to a nurse case management telephone intervention may achieve better results than telephone counseling alone.

Applications for Clinical Practice

A telephone-based intervention delivered by a trained nurse for co-management of DM and HTN was not more effective than an attention control delivered by the same nurse in a community setting. This may have been due to several factors, including low intensity (fewer than 1 call per month), a lack of integration with other members of the primary care team, and the absence of a BP self-monitoring component. Future studies are needed to determine the optimal type and duration of nurse case management interventions targeting glucose and BP control for patients with diabetes in community settings.

—Sandeep Sikerwar, BA, and Melanie Jay, MD, MS

References

1. Crowley MJ, Bosworth HB, Coffman CJ, et al. Tailored Case Management for Diabetes and Hypertension (TEACH-DM) in a community population: study design and baseline sample characteristics. Contemp Clin Trials 2013;36:298–306.

2. Aubert RE, Herman WH, Waters J, et al. Nurse case management to improve glycemic control in diabetic patients in a health maintenance organization. A randomized, controlled trial. Ann Intern Med 1998;129:605–12.

3. Thompson DM, Kozak SE, Sheps S. Insulin adjustment by a diabetes nurse educator improves glucose control in insulin-requiring diabetic patients: a randomized trial. CMAJ 1999;161:959–62.

4. Weinberger M, Kirkman MS, Samsa GP, et al. A nurse-coordinated intervention for primary care patients with non-insulin-dependent diabetes mellitus: impact on glycemic control and health-related quality of life. J Gen Intern Med 1995;10:59–66.

5. Bosworth HB, Powers BJ, Olsen MK, et al. Home blood pressure management and improved blood pressure control: results from a randomized controlled trial. Arch Intern Med 2011;171:1173–80.

6. Bosworth HB, Olsen MK, Grubber JM, et al. Two self-management interventions to improve hypertension control: a randomized trial. Ann Intern Med 2009;151:687–95.

7. CDC Diabetes Cost-effectiveness Group. Cost-effectiveness of intensive glycemic control, intensified hypertension control, and serum cholesterol level reduction for type 2 diabetes. JAMA 2002;287:2542–51.

8. Mons U, Raum E, Krämer HU, et al. Effectiveness of a supportive telephone counseling intervention in type 2 diabetes patients: randomized controlled study. PLoS One 2013;8:e77954.

9. Blackberry ID, Furler JS, Best JD, et al. Effectiveness of general practice based, practice nurse led telephone coaching on glycaemic control of type 2 diabetes: the Patient Engagement and Coaching for Health (PEACH) pragmatic cluster randomised controlled trial. BMJ 2013;347:f5272.

10. Eakin EG, Lawler SP, Vandelanotte C, Owen N. Telephone interventions for physical activity and dietary behavior change: a systematic review. Am J Prev Med 2007;32:419–34.

11. Shojania KG, Ranji SR, McDonald KM, et al. Effects of quality improvement strategies for type 2 diabetes on glycemic control: a meta-regression analysis. JAMA 2006;296:427–40.

12. Katon WJ, Lin EHB, Von Korff M, et al. Collaborative care for patients with depression and chronic illnesses. N Engl J Med 2010;363:2611–20.

13. Walsh JL, Harris BHL, Roberts AW. Evaluation of a community diabetes initiative: Integrating diabetes care. Prim Care Diabetes 2014 Dec 11.

14. Halme L, Vesalainen R, Kaaja M, Kantola I. Self-monitoring of blood pressure promotes achievement of blood pressure target in primary health care. Am J Hypertens 2005;18:1415–20.


Which Revascularization Strategy for Multivessel Coronary Disease?

Article Type
Changed
Thu, 03/01/2018 - 15:22
Display Headline
Which Revascularization Strategy for Multivessel Coronary Disease?

Study Overview

Objective. To compare percutaneous coronary intervention (PCI) using second-generation drug-eluting stents (everolimus-eluting stents) with coronary artery bypass grafting (CABG) among patients with multivessel coronary disease.

Design. Observational registry study with propensity-score matching.

Setting and participants. The study relies on patients identified from the Cardiac Surgery Reporting System (CSRS) and Percutaneous Coronary Intervention Reporting System (PCIRS) registries of the New York State Department of Health. These 2 registries were linked to the New York State Vital Statistics Death registry and to the Statewide Planning and Research Cooperative System registry (SPARCS) to obtain further information such as dates of admission, surgery, discharge, and death. Subjects were eligible for inclusion if they had multivessel disease (defined as severe stenosis [≥ 70%] in at least 2 diseased major epicardial coronary arteries) and if they had undergone either PCI with implantation of an everolimus-eluting stent or CABG. Subjects were excluded if they had revascularization within 1 year before the index procedure; previous cardiac surgery; severe left main coronary artery disease (degree of stenosis ≥ 50%); PCI with a stent other than an everolimus-eluting stent; myocardial infarction within 24 hours before the index procedure; or unstable hemodynamics or cardiogenic shock.

Main outcome measures. The primary outcome of the study was all-cause mortality. Secondary outcomes included rates of myocardial infarction, stroke, and repeat revascularization.

Main results. Among 116,915 patients assessed for eligibility, 82,096 were excluded. Among 34,819 who met inclusion criteria, 18,446 were included in the propensity score–matched analysis. With a 1:1 matching algorithm, 9223 were in the PCI with everolimus-eluting stent group and 9223 were in the CABG group. Short-term outcomes (in hospital or ≤ 30 days after the index procedure) favored PCI with everolimus-eluting stents over CABG, with a significantly lower risk of death (0.6% vs. 1.1%; hazard ratio [HR], 0.49; 95% confidence interval [CI], 0.35 to 0.69; P < 0.002) as well as stroke (0.2% vs 1.2%; HR, 0.18; 95% CI, 0.11 to 0.29; P < 0.001). The 2 groups had similar rates of myocardial infarction in the short-term (0.5% and 0.4%; HR, 1.37; 95% CI, 0.89 to 2.12; P = 0.16). After a mean follow-up of 2.9 years, there was a similar annual death rate between groups: 3.1% for PCI and 2.9% for CABG (HR, 1.04; 95% CI, 0.93 to 1.17; P = 0.50). PCI with everolimus-eluting stents was associated with a higher risk of a first myocardial infarction than was CABG (1.9% vs 1.1% per year; HR, 1.51; 95% CI, 1.29 to 1.77; P < 0.001). PCI with everolimus-eluting stents was associated with a lower risk of a first stroke than CABG (0.7% vs. 1.0% per year; HR, 0.62; 95% CI, 0.50 to 0.76; P < 0.001). Finally, PCI with everolimus-eluting stents was associated with a higher risk of a first repeat-revascularization procedure than CABG (7.2% vs. 3.1% per year; HR, 2.35; 95% CI, 2.14 to 2.58; P < 0.001).

Conclusion. In the setting of newer stent technology with second-generation everolimus-eluting stents, the risk of death associated with PCI was similar to that associated with CABG for multivessel coronary artery disease. In the long-term, PCI was associated with a higher risk of myocardial infarction and repeat revascularization, whereas CABG was associated with an increased risk of stroke. In the short-term, PCI had lower risks of both death and stroke.

Commentary

Coronary artery disease is a major public health problem. For patients for whom revascularization is deemed to be appropriate, a choice must be made between PCI and CABG. In previous studies that compared PCI and CABG, CABG was shown to have less need for repeat revascularizations as well as mortality benefits [1–3]. However, these prior studies compared CABG with older generations of stents. In the past decade, stent technologies have improved, as the bare-metal stent era gave way to the first generation of drug-eluting stents (with sirolimus or paclitaxel), to be followed by second-generation drug-eluting stents (with everolimus or zotarolimus) [4].

In this article, Bangalore and colleagues addressed the question of whether the use of second-generation drug-eluting stents closes the outcome gap that favors CABG over PCI in patients with multivessel coronary artery disease. In patients who were considered to have had complete revascularization performed during PCI (ie, revascularization of all major vessels with clinically significant stenosis), they noted mitigation of the outcome differences between the PCI group and the CABG group. They conclude that the decision-making process by patients and their providers regarding revascularization should be placed in the context of individual values and preferences.

One major limitation is that this is an observational study based on registry data. Despite the use of sophisticated statistical techniques, including propensity score matching, to adjust for the confounding implicit in any nonrandomized comparison of treatment strategies, observational studies cannot definitively establish causality. This limitation is especially important when the 2 groups being compared have modest differences in outcome.
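The paper's exact matching algorithm is not detailed here, but as a general illustration of the technique, a minimal sketch of 1:1 nearest-neighbor matching on the propensity score is given below (in Python, with hypothetical inputs and a commonly used caliper). It is not the authors' implementation and, for brevity, it matches with replacement.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def propensity_match(X, treated, caliper=0.2):
    """1:1 nearest-neighbor matching on the logit of the propensity score.

    X       : (n, p) array of baseline covariates
    treated : (n,) boolean array (True = PCI, False = CABG, for illustration)
    caliper : maximum allowed distance, in SD units of the logit score
    """
    # Step 1: estimate each subject's probability of receiving PCI.
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    logit = np.log(ps / (1 - ps))

    # Step 2: for each treated subject, find the nearest control on the logit scale.
    t_idx, c_idx = np.where(treated)[0], np.where(~treated)[0]
    nn = NearestNeighbors(n_neighbors=1).fit(logit[c_idx].reshape(-1, 1))
    dist, pos = nn.kneighbors(logit[t_idx].reshape(-1, 1))

    # Step 3: keep only pairs that fall within the caliper (matching with replacement).
    max_dist = caliper * logit.std()
    return [(t, c_idx[p[0]]) for t, d, p in zip(t_idx, dist, pos) if d[0] <= max_dist]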

Applications for Clinical Practice

This observational study, together with a recent randomized clinical trial in which CABG was compared with PCI with the use of everolimus-eluting stents (the BEST trial) [5], provides new insights into the 2 revascularization strategies. Clinicians should engage and empower patients with a shared decision-making approach. The early hazards of CABG (stroke and death) may be unacceptable to some patients, whereas others might want to avoid the later hazards of PCI (repeat revascularization and myocardial infarction). Until a definitive study is available, patients should be informed of the best current knowledge of the pros and cons of the 2 revascularization strategies.

 —Ka Ming Gordon Ngai, MD, MPH

 

References

1. Farooq V, van Klaveren D, Steyerberg EW, et al. Anatomical and clinical characteristics to guide decision making between coronary artery bypass surgery and percutaneous coronary intervention for individual patients: development and validation of SYNTAX score II. Lancet 2013;381:639–50.

2. Hannan EL, Racz MJ, Arani DT, et al. A comparison of short- and long-term outcomes for balloon angioplasty and coronary stent placement. J Am Coll Cardiol 2000;36:395–403.

3. Hannan EL, Racz MJ, Walford G, et al. Long-term outcomes of coronary-artery bypass grafting versus stent implantation. N Engl J Med 2005;352:2174–83.

4. Harrington RA. Selecting revascularization strategies in patients with coronary disease. N Engl J Med 2015;372:1261–3.

5. Park SJ, Ahn JM, Kim YH, et al. Trial of everolimus-eluting stents or bypass surgery for coronary disease. N Engl J Med 2015;372:1204–12.


Mindfulness Meditation for Sleep Problems

Article Type
Changed
Thu, 03/01/2018 - 15:10
Display Headline
Mindfulness Meditation for Sleep Problems

Study Overview

Objective. To test the treatment effect of a structured mindfulness meditation program versus sleep hygiene education for improving sleep quality.

Study design. Single-site, parallel-group randomized clinical trial.

Setting and participants. Adults aged 55 years and older were recruited from the urban Los Angeles community through a newspaper advertisement and flyers posted in community centers. Participants had to agree to be randomized and have a Pittsburgh Sleep Quality Index (PSQI) score [1] exceeding 5 at screening. Exclusion criteria were current smoking, substance dependence, inability to speak English, depression, cognitive impairment, current daily meditation, and obesity. Also excluded were those who reported a current inflammatory disorder, sleep apnea, restless legs syndrome, illness, or infection.

Intervention. Participants were randomized into 2 standardized treatment conditions: the Mindful Awareness Practices program (MAPs) and sleep hygiene education (SHE). Each treatment consisted of weekly 2-hour group-based classes over the course of the 6-week intervention. The comparison sleep hygiene program matched the MAPs condition for time, attention, group interaction, and expectancy of benefit effects. Eight visits to the study site were requested, including 1 pretreatment assessment visit, 6 intervention sessions, and 1 posttreatment assessment visit. Participants were compensated up to $50 in gift cards and received parking vouchers for visits.

Main outcome measure. The primary outcome measure was the PSQI, a commonly used and validated 19-item self-rated questionnaire that assesses sleep quality and disturbances over a 1-month time interval. A global score greater than 5 yields a diagnostic sensitivity of 89.6% and specificity of 86.5% in distinguishing good and poor sleepers [1]. Secondary outcomes included scores on instruments that measured depression, anxiety, stress, and fatigue.
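As a small worked example of what those cutoff properties imply, the arithmetic below uses invented sample sizes purely for illustration:

sensitivity, specificity = 0.896, 0.865   # reported for a global PSQI score > 5 [1]
poor_sleepers, good_sleepers = 100, 100   # hypothetical screening sample

true_positives = sensitivity * poor_sleepers           # poor sleepers correctly flagged
false_positives = (1 - specificity) * good_sleepers    # good sleepers incorrectly flagged
ppv = true_positives / (true_positives + false_positives)
print(round(ppv, 2))  # ~0.87: at this prevalence, most people above the cutoff truly have poor sleep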

Results. After screening for eligibility, 49 adults were randomized, 24 to MAPs and 25 to SHE. Session attendance was similar across the groups. Mean (± SD) age of participants was 66.3 (7.4) years and 67% were female. Mean PSQI was 10.2 at baseline and 7.4 postintervention for MAPs, and 10.2 at baseline and 9.1 postintervention for SHE. In the intention-to-treat analyses, PSQI improved by 2.8 in MAPs vs. 1.1 in SHE (between-group mean difference, 1.8; 95% confidence interval, 0.6–2.9) with an effect size of 0.89. Relative improvements in depression scores and daytime fatigue were also noted.

Conclusion. The MAPs program improved sleep quality relative to SHE. Mindfulness meditation appears to have a role in addressing the burden of sleep problems in older adults.

Commentary

Older adults commonly report disturbed sleep, and an expanding literature suggests that poor sleep increases the risk of adverse health outcomes, including frailty and lower cognitive function. Current nonpharmacologic treatments for disturbed sleep include sleep hygiene education and cognitive behavioral therapy (CBT), which have been shown to be effective. However, as the current study’s authors point out, clinical interventions like CBT are intensive, require administration by highly trained therapists, and are intended for patients with insomnia [2].

These researchers investigated an alternative intervention consisting of mindfulness meditation. Mindfulness has been defined as being intentionally aware of internal and external experiences that occur at the present moment, without judgment. Mindfulness-based interventions are increasingly being studied for a wide array of health conditions, and courses in the community and online are frequently available.

The results of the current study, which applied mindfulness meditation to the problem of sleep disturbance in older adults, are compelling. The effect size of 0.89 was large and of clinical relevance: as the authors point out, in a meta-analysis of behavioral interventions for insomnia, the average effect size for improvement in subjective sleep outcomes among older adults was 0.76 [3]. It is noteworthy that the authors of the current study recruited patients on the basis of PSQI score and did not require a diagnosis of insomnia. The use of the PSQI means that the sample consisted of patients with self-rated poor sleep quality, and epidemiologic evidence suggests that a PSQI score greater than 5 identifies older persons at risk for adverse health outcomes [4]. Thus, this is a logical group to target. In addition, the sample may have included those with undiagnosed insomnia and other sleep disturbances; this fact makes the findings even more impressive [4].
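As a rough arithmetic check on how the reported effect size relates to the between-group difference (the pooled SD of PSQI change is not given in this summary, so the value below is assumed solely for illustration):

mean_diff = 1.8            # reported between-group difference in PSQI improvement (points)
assumed_pooled_sd = 2.0    # hypothetical pooled SD of the change scores; not reported here
cohens_d = mean_diff / assumed_pooled_sd
print(round(cohens_d, 2))  # 0.9, close to the reported effect size of 0.89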

The use of validated measures is a strength of the study. Limitations include lack of postintervention assessment data for 12% of participants and a preponderance of female and highly educated participants.

Applications for Clinical Practice

Standardized mindfulness programs are becoming more widely available, both online and in the community, and can be introduced to older adults to help them with moderate sleep disturbances.

References

1. Buysse DJ, Reynolds CF 3rd, Monk TH, et al. The Pittsburgh Sleep Quality Index: a new instrument for psychiatric practice and research. Psychiatry Res 1989;28:193–213.

2. Morin CM, Bootzin RR, Buysse DJ, Edinger JD, Espie CA, Lichstein KL. Psychological and behavioral treatment of insomnia: update of the recent evidence (1998-2004). Sleep 2006;29:1398–414.

3. Irwin MR, Cole JC, Nicassio PM. Comparative meta-analysis of behavioral interventions for insomnia and their efficacy in middle-aged adults and in older adults 55+ years of age. Health Psychol 2006;25:3–14.

4. Spira AP. Being mindful of later-life sleep quality and its potential role in prevention. JAMA Intern Med. Published online 16 Feb 2015.
