Optimal Wheelchair Service Provision for Children with Disabilities
Study Overview
Objective. To conduct a systematic review on the effectiveness, service user perspectives, policy intentions, and cost-effectiveness of wheelchairs for disabled children (< 18 years) for the purposes of developing a conceptual framework to inform future research and development.
Design. EPPI-Centre (eppi.ioe.ac.uk/cms/) mixed-methods systematic review with narrative summary and thematic and narrative synthesis.
Data. A search for relevant studies available in English and published in the last 15 years was performed. All identified study titles were assessed for relevance against the inclusion/exclusion criteria, and a second screening pass assessed relevance by abstract. Studies deemed relevant were then obtained in full and reviewed by a second researcher to reduce bias and reach consensus on inclusion. After data extraction, evidence was divided into 4 streams according to methodology and topic to enable separate syntheses by evidence type: (1) intervention evidence; (2) opinion evidence; (3) policy and not-for-profit organization (NFPO) literature; and (4) economic evidence. The intervention and economic streams were not formally synthesized because of the heterogeneity of the included studies and the lack of statistical evidence within each stream; a narrative summary was conducted instead.
Main outcome. The primary outcome was to create a conceptual framework to inform future research and wheelchair service development in the UK, with international implications. To inform the searching, management, and interpretation of evidence, the review focused on the following 4 objectives regarding wheelchair interventions for disabled children and young people:
- to determine the effectiveness and cost-effectiveness of wheelchairs for said population;
- to better understand service users’, parents’, and professionals’ perspectives regarding wheelchairs;
- to explore current UK policy, NFPO publications, and clinical guideline recommendations and intentions regarding wheelchair provision; and
- to determine if disabled children’s desired outcomes match with existing policy aspirations and effectiveness evidence.
Main results. Synthesis of the integrated dataset elicited the following findings: (1) higher quality wheelchair services take into account the needs of the whole family; (2) disabled children benefit when psychosocial needs are considered along with health needs; (3) disabled children could benefit if policy recommendations focused on services meeting individual needs rather than following strict eligibility criteria; (4) without appropriate outcome measures the holistic benefits of powered wheelchair interventions cannot be evaluated; (5) disabled children may benefit more when physical outcomes of powered wheelchairs are seen as facilitators to wider holistic benefits, but lack of transition of evidence into practice hinders progress; and (6) disabled children would benefit from public buildings and spaces that promote inclusion of disabled people.
A key finding of this study is that wheelchairs offer disabled children independence, social integration, and participation in age-appropriate activities. Secondary findings pertain to policy: the lack of effective translation of policy and evidence into practice, barriers to service delivery, lack of organization, and failure to apply knowledge of what children want from their wheelchairs. The framework resulting from this review lays out the interconnectedness of the problem areas, required actions, and overall development stages, which can lead to more cost-effective wheelchair services and interventions.
Conclusion. Wheelchairs offer children a variety of benefits, particularly with respect to health, development, and social inclusion. Given the barriers surrounding NHS wheelchair services in the UK, this review provides a solid foundation for further research and has implications for wheelchair services globally. In particular, the scarcity of economic evidence found during the review underscores the need for appropriate methods to measure the cost-effectiveness of interventions in order to promote more efficient service provision.
Commentary
Wheelchair access has compelling implications for improving children’s health, development, and social inclusion. It is this final benefit in particular that makes wheelchair interventions stand out: the wheelchair goes beyond being a medical assistive device to become a gateway to societal participation.
Also related to social inclusion, an additional relevant observation made in this study is the lack of wheelchair access in public spaces. Such barriers can render a wheelchair irrelevant if it cannot be used in the spaces where its user needs to travel, such as schools, restaurants, parks, or government offices. Though this study was based in the UK, such barriers to inclusion remain all too common for persons with disabilities around the world [1], positioning the results of the study as a starting framework for further global research.
That being said, the authors’ recommendation of resolving public space barriers with the simple addition of wheelchair access is an outdated approach to inclusion that has been widely challenged by the community of persons with disabilities over the last decade. The promotion of “inclusive design” or “human-centered design” [2] to properly address the challenges of persons with disabilities is a growing trend, particularly in the United States, and takes into account the greatest possible degree of variation in local demographics. A recommendation limited to wheelchair access alone stands to shut out other significant portions of the disabled community and exacerbates a “patchwork” approach to access that is not truly holistic.
Another observation pertains to the financial burden imposed on the family to modify the home when a child in the household has to use a wheelchair. While this is discussed in the article, it is treated separately because it occurs in the private sphere. However, construction regulations, permits, and other aspects of home-building reside on the government and policy side, even when a private independent entity does the building. Hence, policy changes toward inclusive design have implications for both public and private spaces.
With respect to the benefits of wheelchair interventions, the authors contend that appropriate interventions stand to “reduce disability discrimination and promote equality.” Concepts such as “discrimination” and “equality” cannot be discussed without political and cultural considerations [3]. The linkage between access and equality is a correlation worthy of discussion; however, the study was not designed to gather data to support such a correlation.
Finally, while the overall findings of this study are relevant, it would be useful to know more about wheelchair service provision for the elderly, as it is they who comprise the majority of the disabled population [1,4]. The elderly disabled also need caretakers, need to make home modifications and travel to and from public spaces, and experience barriers and service delays. Future research on wheelchair interventions would benefit from a comparative intra-population analysis.
Applications for Clinical Practice
This study outlines critical challenges in the process of obtaining a wheelchair, such as poor evaluation methods for matching a wheelchair to patient needs, bureaucratic delays even after the order has been approved, physical accommodations that must be made once the wheelchair has been acquired, and financial burdens assumed by the family and/or caretakers. These problems warrant attention given the importance of adequate wheelchairs for many disabled people.
—Molly A. Martínez, PhD, World Enabled, Berkeley, CA
1. World Health Organization. World report on disability 2011. Available at whqlibdoc.who.int/publications/2011/9789240685215_eng.pdf?ua=1.
2. Fletcher V. Driving innovation: universal design. Presented 14 Jan 2014 at IOM workshop. Available at www.iom.edu/~/media/Files/Activity%20Files/PublicHealth/HearingLossAging/2-7%20Fletcher.pdf.
3. United Nations Permanent Forum on Indigenous Issues. Study on the situation of indigenous persons with disabilities, with a particular focus on challenges faced with regard to the full enjoyment of human rights and inclusion in development. May 2013. Available at www.un.org/disabilities/documents/ecosoc/e.c.19.2013.6.pdf.
4. United Nations Department of Economic and Social Affairs. World population ageing 2013. Available at www.un.org/en/development/desa/population/publications/pdf/ageing/WorldPopulationAgeing2013.pdf.
Quality of Life in Aging Multiple Sclerosis Patients
Study Overview
Objective. To evaluate the association between clinical and demographic factors and health-related quality of life (HRQOL) among older people with multiple sclerosis (MS).
Design. Cross-sectional survey-based study.
Setting and participants. Patients with MS aged 60 years or older were recruited from 4 MS centers in Long Island, NY. Patients with severe cognitive impairment as determined by the health care practitioner were excluded. Participants were asked to complete 3 surveys at 3 different time-points. In the first survey, participants completed the Morisky Medication Adherence Scale and the Patient Multiple Sclerosis Neuropsychological Screening Questionnaire (P-MSNQ). The second survey was the Multiple Sclerosis Quality of Life-54 (MSQOL-54), and the third survey included the Beck Depression Inventory-II (BDI-II) and a disability status self-assessment scale. Cognitive function was measured at the time of recruitment using the Symbol Digit Modalities Test (SDMT).
Analysis. The Andersen Healthcare Utilization model was used to structure the multivariate regression analysis. This model identifies multiple domains affecting quality of life, and the variables from the surveys were categorized according to domain: predisposing characteristics (demographic variables), enabling resources (caregiver support and living situation), needs (eg, health-related measures), and health behaviors (medication use, adherence).
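To make this domain-based structure concrete, the sketch below groups hypothetical survey variables into the four Andersen domains and fits a multivariate regression on an HRQOL score. This is an illustration under stated assumptions, not the authors' actual analysis: the variable names, dataset, and choice of ordinary least squares are all invented for the example.

```python
# Illustrative sketch only: grouping hypothetical survey variables into
# the four Andersen domains and fitting an OLS regression on an HRQOL
# score. Variable and file names are invented; the paper's actual model
# specification is not reproduced here.
import pandas as pd
import statsmodels.formula.api as smf

domains = {
    "predisposing": ["age", "female", "education_hs_or_less"],
    "enabling":     ["caregiver_support", "lives_alone"],
    "need":         ["bdi_ii_score", "sdmt_impaired", "disability_level"],
    "behavior":     ["medication_adherence"],
}
# One regression formula covering all four domains at once
predictors = " + ".join(v for block in domains.values() for v in block)

df = pd.read_csv("ms_survey.csv")  # hypothetical dataset
model = smf.ols(f"physical_hrqol ~ {predictors}", data=df).fit()
print(model.summary())
```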
Main results. A total of 211 patients completed the first survey, 188 the second, and 179 the third; 80% were female and 95% were white. Average age was 65.5 (SD 5.6) years. SDMT scores classified 56% of respondents as cognitively impaired. Risk of neuropsychological impairment, depression, and disability status were significantly associated with decreased mental and physical HRQOL. Notably, there was a strong association between predisposing characteristics and QOL: being widowed and remaining employed were the strongest predictors of better physical QOL, and having an education level of high school or less was a predictor of lower mental HRQOL.
Conclusion. Clinicians should measure HRQOL in older MS patients regularly and assess for depression and cognitive impairment.
Commentary
Quality of life is an important marker of MS patients’ well-being as they cope with this chronic illness [1]. The progression of the disease and its symptomatology often negatively affect HRQOL. However, multiple psychosocial factors, such as coping, mood, self-efficacy, and perceived support, affect QOL of patients with MS more than biological variables such as weakness or burden of radiologic disease [2]. For example, many self-report HRQOL indices are strongly predicted by measures of depression [3]. In addition, many studies have found a positive association between physical disability and reduced QOL [4,5]. Further, while perceived HRQOL may be a meaningful outcome in itself, it may also be a predictor for outcomes such as disability-related changes [6].
MS leads to disability and loss of function in all age-groups, but only a few studies have focused on HRQOL among elderly patients with MS. As patients with MS age, they may develop comorbidities such as hypertension and diabetes that may affect HRQOL. However, in a previous study comparing QOL between older and younger patients with MS, elderly and younger patients with MS had similar QOL even though the elderly patients had more physical limitations [7].
The strength of the current study was its use of the Andersen Healthcare Utilization regression model in the analysis, since the model factors in multiple influences on health status. The striking evidence that employment and being widowed were linked to better physical QOL suggests that older MS patients may adapt and adjust well to their illness. Researchers have shown that the widowed elderly often take on more responsibilities and tasks when they lose their partner, which leads to increased self-esteem and QOL [8]. Another strength was that the investigators evaluated the different exposure variables and their associations with mental and physical QOL while identifying multiple confounding variables. Additionally, the use of 2 cognitive assessment tools provided a stronger assessment of patients’ cognitive function.
The main weakness of the study was its cross-sectional design with convenience sampling. The convenience sample was based on voluntary participation, which may result in self-selection bias. In addition, the self-report design is subject to the usual limitations of self-reported data: participants may exaggerate symptoms to make their situation seem worse or under-report the severity or frequency of symptoms to minimize their problems. While the overall sample size was 211, not all respondents completed all the surveys, and response rates varied by question. Thus, missing data may have affected the results, but which data are missing is not discernible from the paper. That patients were from a single geographic area and had relatively high education levels (44% with college or above) further limits the generalizability of the study. Another limitation is the use of the Beck Depression Inventory, which was not specifically designed for use in the elderly. Finally, the results might have been affected by unmeasured confounding variables, such as daily physical activity, which may modify the relationships between depression, cognition, and QOL.
Applications for Clinical Practice
This study reinforces the importance of monitoring older MS patients for factors that may influence their HRQOL. The presence of depression, disability, and cognitive impairment should be assessed regularly. Clinicians should encourage and empower elderly patients to continue with activities, including employment, that promote their mental and physical well-being and help maintain their independence. Assessing patients with geriatric-specific tools may provide more reliable and accurate assessment data that better account for aging dynamics. In addition, comorbidities must be managed appropriately.
—Aliza Bitton Ben-Zacharia, DNP, ANP, and Allison Squires, PhD, RN, New York University College of Nursing
1. Opara JA, Jaracz K, Brola W. Quality of life in multiple sclerosis. J Med Life 2010;3:352–8.
2. Mitchell AJ, Benito-León J, González JM, Rivera-Navarro J. Quality of life and its assessment in multiple sclerosis: integrating physical and psychological components of wellbeing. Lancet Neurol 2005;4:556–66.
3. Benedict RH, Wahlig E, Bakshi R, et al. Predicting quality of life in multiple sclerosis: accounting for physical disability, fatigue, cognition, mood disorder, personality, and behavior change. J Neurol Sci 2005;231:29–34.
4. Göksel Karatepe A, Kaya T, Günaydn R, et al. Quality of life in patients with multiple sclerosis: the impact of depression, fatigue, and disability. Int J Rehabil Res 2011;34:290–8.
5. Nortvedt MW, Riise T, Myhr KM, Nyland HI. Quality of life in multiple sclerosis: measuring the disease effects more broadly. Neurology 1999;53:1098–103.
6. Visschedijk MA, Uitdehaag BM, Klein M, et al. Value of health-related quality of life to predict disability course in multiple sclerosis. Neurology 2004;63:2046–50.
7. Ploughman M, Austin MW, Murdoch M, et al. Factors influencing healthy aging with multiple sclerosis: a qualitative study. Disabil Rehabil 2012;34:26–33.
8. Minden SL, Frankel D, Hadden LS, et al. Disability in elderly people with multiple sclerosis: an analysis of baseline data from the Sonya Slifka Longitudinal Multiple Sclerosis Study. NeuroRehabilitation 2004;19:55–67.
Self-Monitoring and Self-Titration of Antihypertensive Medications Result in Better Systolic Blood Pressure Control
Study Overview
Objective. To examine the effect of self-monitoring of blood pressure and self-titration of antihypertensive medications among hypertensive patients with cardiovascular disease, diabetes, or chronic kidney disease.
Design. Unblinded randomized controlled trial.
Setting and participants. The study was conducted in central and east England. Patients at 59 UK primary care practices with poorly controlled blood pressure (a last recorded systolic blood pressure of at least 145 mm Hg) were invited to participate. Patients had to be at least 35 years old and have at least 1 of the following comorbidities: transient ischemic attack or stroke, stage 3 chronic kidney disease, or a history of coronary artery bypass graft surgery, myocardial infarction, or angina. Patients were excluded if they could not self-monitor blood pressure, had dementia or failed a cognitive screen using the Short Orientation-Memory-Concentration Test, had postural hypotension, took more than 3 antihypertensive medications, had had an acute cardiovascular event within the previous 3 months, were receiving care from a specialist for their hypertension, were pregnant, or had a terminal disease. Participants were randomized to the self-management intervention or usual care.
Intervention. Patients in the self-management group were asked to monitor their blood pressure using an automated blood pressure monitor and to titrate their blood pressure medications using an individualized 3-step plan devised by the patient with their family physician; they were trained to do these tasks in 2- or 3-hour sessions. Patients were instructed to take their blood pressure twice each morning for the first week of each month; if 4 or more blood pressure readings during the measurement week were above target for 2 consecutive months, patients were to follow their individualized plan to change their medications. The target blood pressure was 120/75 mm Hg, following British guidelines for patients with stroke, diabetes, chronic kidney disease, or coronary heart disease. If patients exhausted all 3 steps of the medication titration plan, they were to return to their family physician for further instructions. Patients in the usual care group had a routine blood pressure check and medication review appointment with their family physician, followed by further care at the physician’s discretion regarding blood pressure measurement, targets, and medication adjustment.
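To make the decision rule concrete, here is a minimal sketch of the monthly self-titration logic as described above. The 120/75 mm Hg target and the "4 or more above-target readings in 2 consecutive months" trigger follow the trial description; the function and variable names and the example readings are hypothetical illustrations, not the trial's actual materials.

```python
# Minimal sketch of the monthly self-titration rule described above.
# All names and example data are hypothetical.

TARGET_SYS, TARGET_DIA = 120, 75  # home BP target (mm Hg)

def above_target(week_readings):
    """Count readings above target during the measurement week."""
    return sum(1 for sbp, dbp in week_readings
               if sbp > TARGET_SYS or dbp > TARGET_DIA)

def next_step(month1_week, month2_week, current_step, max_steps=3):
    """Apply the titration trigger; return (step, action)."""
    if above_target(month1_week) >= 4 and above_target(month2_week) >= 4:
        if current_step >= max_steps:
            # all 3 steps exhausted: back to the family physician
            return current_step, "return to family physician"
        return current_step + 1, "advance to next medication step"
    return current_step, "no change"

# Example: 5 above-target readings in each of 2 consecutive months
week = [(150, 80)] * 5 + [(118, 70)] * 2
print(next_step(week, week, current_step=1))
# -> (2, 'advance to next medication step')
```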
Main outcome measure. The primary outcome was systolic blood pressure at 12 months. The difference in outcomes between the intervention and usual care groups was examined while accounting for baseline blood pressure and other clinical factors. Six blood pressure readings were taken at 1-minute intervals after an initial 5 minutes of rest, using an electronic automated blood pressure machine; the mean of the second and third readings was used as the primary outcome. The outcome assessor was not blinded to group assignment. The primary analysis included all cases with complete data, and a sensitivity analysis with multiple imputation was also performed. Preplanned subgroup analyses included older vs. younger age-groups, men vs. women, and other risk groups.
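The outcome computation itself is simple; a minimal sketch under the stated protocol (six automated readings, mean of the second and third) follows, with illustrative values only.

```python
# Minimal sketch of the primary outcome computation described above:
# six automated readings at 1-minute intervals after 5 minutes of rest,
# with the mean of the 2nd and 3rd readings taken as the outcome.

def primary_outcome_sbp(readings):
    """Mean of the 2nd and 3rd of six systolic readings (mm Hg)."""
    assert len(readings) == 6, "protocol specifies six readings"
    return (readings[1] + readings[2]) / 2

print(primary_outcome_sbp([152, 148, 146, 147, 145, 146]))  # -> 147.0
```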
Main results. Among 10,764 patients assessed for eligibility, 3353 were excluded because their family physician considered them housebound, terminally ill, or otherwise unsuitable candidates. Among the 7411 invited to participate, 4207 did not respond and 2003 declined (a third because they did not want to alter their own medications, and a third because they did not want to measure their own blood pressure). Among the 1201 who attended the baseline clinic, 138 withdrew consent and 508 were deemed ineligible. A total of 555 were randomized; 220 in the intervention group and 230 in the control group completed the study and provided outcome data (81%). Patients in the self-management group had a 9.2 mm Hg lower systolic blood pressure at 12 months (95% CI, 5.7–12.7) compared with the usual care group. The self-management group also had a larger increase in antihypertensive medication use than controls, in both doses and number of medications. Although adverse symptoms were common in both groups, there were no significant differences in adverse symptoms between groups.
Conclusions. Self-management of hypertension among patients with stroke, cardiovascular disease, and other high-risk conditions is safe and effective in achieving better blood pressure control.
Commentary
Hypertension is a major public health problem. Significant resources have been devoted to advancing hypertension management through research, practice improvements, and guideline development; however, blood pressure control among those with hypertension in the United States remains suboptimal, with only about half achieving adequate control [1].
Advances in technology have made home blood pressure monitoring possible. It offers several advantages over traditional office-based blood pressure management [2], and several studies have shown that home blood pressure telemonitoring and team care can achieve better outcomes than office-based management [3]. A significant contribution of the current study is that it demonstrated that the self-management approach is both safe and effective even in high-risk patients, who are perhaps the most likely to have adverse events from treatment but also the most likely to derive benefit from adequate treatment of hypertension.
Although the self-management approach has promise, it also has potential drawbacks. Specifically, as demonstrated by the low enrollment rate in this study, this intervention may not be suitable for all patients. About two-thirds of those who responded to the initial enrollment attempt ultimately declined participation because they did not want to modify their own medications or did not want to perform the tasks of home blood pressure monitoring. This perhaps is a realistic assessment of who may ultimately benefit from this approach—patients who wish to have an active role in managing their medical problems and have the ability to do so. For the clinician, it is important to identify patients who are able to manage the complex task of adjusting their medication regimen; otherwise, the potential for harm may be magnified.
Engaging patients in the management of their chronic disease is a growing trend in chronic disease management. Bringing management of hypertension to patients’ homes, as the accompanying editorial in the issue pointed out, reflects patient-centeredness at its best and represents an important step toward the adaptation of treatment for patients who want to actively take part in their own care [2].
Applications for Clinical Practice
Self-management of blood pressure in patients at high risk of cardiovascular disease appears feasible. As the editorialists note, this study is an important step toward the adaptation of treatment for patients who want to take an active part in their own risk-factor control [2]. More research is needed on the effects of self-titration on long-term outcomes and on appropriate protocols that clinicians in the community can apply, both for patient selection and education and for medication adjustment.
—William Hung, MD, MPH
1. Egan BM, Zhao Y, Axon RN. US trends in prevalence, awareness, treatment, and control of hypertension, 1988-2008. JAMA 2010;303:2043–50.
2. Nilsson PM, Nystrom FH. Self-titration of antihypertensive therapy in high-risk patients: bringing it home. JAMA 2014;312:795–6.
3. Margolis KL, Asche SE, Bergdall AR, et al. Effect of home blood pressure telemonitoring and pharmacist management on blood pressure control: a cluster randomized clinical trial. JAMA 2013;310:46–56.
Effect of Substituting Nurses for Doctors in Primary Care
Study Overview
Objective. To investigate the clinical effectiveness and costs of nurses working as substitutes for physicians in primary care.
Design. Systematic review and meta-analysis of published randomized controlled trials (RCTs), plus 2 economic studies, comparing nurse-led care with care by primary care physicians on numerous variables, including satisfaction, hospital admission, mortality, and costs of health care.
Settings and participants. The 24 RCTs were drawn from 5 countries (UK, Netherlands, USA, Russia, and South Africa) and included 38,974 participants in total. Eleven of the studies had fewer than 200 participants and 13 had more than 200 (median, 1624). Mean age was reported in 20 trials and ranged from 10 to 83 years.
Analysis. The authors assessed risk of bias in the studies, calculated the study-specific and pooled relative risks (RR) or standardized mean differences (SMD), and performed fixed-effects meta-analyses.
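As an illustration of the pooling step described above, a generic inverse-variance fixed-effects meta-analysis on the log relative risk scale might look like the following sketch; the study estimates are hypothetical and this is not the authors' code:

```python
import math

# (log relative risk, standard error of the log RR) for 3 hypothetical trials
studies = [(math.log(0.70), 0.15),
           (math.log(0.85), 0.10),
           (math.log(0.78), 0.20)]

weights = [1 / se ** 2 for _, se in studies]   # inverse-variance weights
pooled_log_rr = sum(w * lr for (lr, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))        # SE of the pooled estimate

rr = math.exp(pooled_log_rr)
lo = math.exp(pooled_log_rr - 1.96 * pooled_se)
hi = math.exp(pooled_log_rr + 1.96 * pooled_se)
print(f"Pooled RR {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```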
Main results. Nurse-led care was effective at reducing the overall risk of hospital admission (RR 0.76, 95% CI 0.64–0.91) and mortality (RR 0.89, 95% CI 0.84–0.96) in RCTs of ongoing or non-urgent care, in RCTs with longer (at least 12 months) follow-up, and in larger (n > 200) RCTs. Pooled analysis showed higher overall patient satisfaction scores with nurse-led care (SMD 0.18, 95% CI 0.13–0.23). Higher-quality RCTs (with better allocation concealment and less attrition) showed higher rates of hospital admission and mortality with nurse-led care, but the differences were not significant. Subgroup analysis showed that RNs had a stronger effect than nurse practitioners (NPs) on patient satisfaction. Results regarding the cost-effectiveness and quality of nurse-led care were inconclusive.
Conclusion. Nurse-led care appears to have a positive effect on patient care and outcomes but more rigorous research is needed to confirm these findings.
Commentary
As the backbone of health care systems around the world, primary care faces numerous challenges that threaten patient access to care. Aging populations, economically strapped governments, and an increasing non-communicable disease burden in developing countries are pushing global health systems to capacity. In addition, the World Health Organization has highlighted the growing health worker shortage, which further limits the capabilities of health systems [1,2]. One proposed solution to the physician shortage is the use of NPs. Recent studies have shown patient satisfaction, physical, emotional, and social function, and other outcomes associated with nurse-led care to be similar to, if not better than, those achieved by physicians [3–5].
The current meta-analysis has some weaknesses. For example, 13 of the 24 studies had attrition rates of at least 20%, and only 10 trials had a sufficient sample size to achieve adequate power for at least 1 outcome, making it more difficult to identify true differences between control and intervention groups. The sample of RCTs was heterogeneous in terms of settings, tasks, and reporting of outcomes; this heterogeneity increased the difficulty of data synthesis and limited the amount of information available on the cost-effectiveness and quality of nurse-led care.
In many of the studies, quality of life was measured inconsistently, using various disease-specific and generic scales, making it difficult to compare results and draw comprehensive conclusions. Additionally, fewer than 50% of the patient satisfaction measures were validated questionnaires.
Results should also be interpreted with caution because the studies came from 5 different countries. The scope of nursing practice differs in each country, and the different cadres of nurses (RN vs. NP vs. licensed practical nurse [LPN]) have varying responsibilities. Cross-comparisons between RN/LPN, NP/physician, and RN/NP need to consider the country context, regulating bodies, and government policies that dictate the capabilities and practice of each of these licensed professionals.
There was a dearth of economic information. Generally, direct costs, such as for consultations and for care of patients younger than 65 years of age, were lower with nurse-led care, but in other studies the costs of nurse-led and physician-led care were not significantly different.
Applications for Clinical Practice
As the health worker shortage continues, health care facilities will have to decide on the appropriate skill mix to provide the best patient outcomes while maximizing cost benefit. While this systematic review and meta-analysis is promising in its support of nurse-led primary care, more research is needed, including longer-term studies with larger sample sizes and more extensive assessment of cost and quality of life. The use of validated and standardized instruments to measure patient satisfaction and quality of care would increase study quality and rigor.
—Melissa T. Martelly, MA, BSN, RN, PCCN, and Allison Squires, PhD, New York University College of Nursing
1. World Health Organization. World health report 2006: Working together for health. Geneva: World Health Organization; 2006. Available at www.who.int/whr/2006/en.
2. World Health Organization. A universal truth: No health without a workforce. Geneva: World Health Organization; 2013. Available at www.who.int/workforcealliance/knowledge/resources/GHWA_AUniversalTruthReport.pdf.
3. Horrocks S, Anderson E, Salisbury C. Systematic review of whether nurse practitioners working in primary care can provide equivalent care to doctors. BMJ 2002;324:819–23.
4. Naylor MD, Kurtzman ET. The role of nurse practitioners in reinventing primary care. Health Affairs 2010;29:893–9.
5. Carter AJ, Chochinov AH. Systematic review of the impact of nurse practitioners on cost, quality of care, satisfaction and wait times in the emergency department. CJEM 2007;9:286–95.
Co-Infection with HIV Increases Risk for Decompensation in Patients with HCV
Study Overview
Objective. To compare the incidence of hepatic decompensation between patients co-infected with HIV and hepatitis C virus (HCV) who received antiretroviral treatment and patients monoinfected with HCV.
Design. Retrospective cohort study.
Participants and setting. This study used the Veterans Aging Cohort Study Virtual Cohort (VACS-VC), which includes electronic medical record data from HIV-infected patients receiving care at Veterans Affairs (VA) medical facilities in the United States. Inclusion criteria for co-infected patients were detectable HCV RNA; recently initiated antiretroviral therapy (ART), defined as use of ≥ 3 antiretroviral drugs from 2 classes or ≥ 3 nucleoside analogues within the VA system; an HIV RNA level > 500 copies/mL within 180 days before starting ART; and follow-up in the VACS-VC for at least 12 months after initiating ART. Inclusion criteria for HCV-monoinfected patients were detectable HCV RNA, no HIV diagnosis or antiretroviral prescriptions, and follow-up in the VACS-VC for at least 12 months before inclusion in the study. Exclusion criteria were hepatic decompensation, hepatocellular carcinoma, or liver transplant during the 12-month baseline period, or receipt of interferon-based HCV therapy.
Main outcome measure. The primary outcome was incident hepatic decompensation, defined as diagnosis of ascites, spontaneous bacterial peritonitis, or esophageal variceal hemorrhage at hospital discharge or 2 such outpatient diagnoses.
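The outcome definition above is effectively a classification rule; a minimal sketch of how it could be operationalized against diagnosis records follows (the record format and function are hypothetical illustrations, not the study's actual code):

```python
# One qualifying inpatient discharge diagnosis, or two qualifying
# outpatient diagnoses, counts as incident hepatic decompensation.
QUALIFYING = {"ascites",
              "spontaneous bacterial peritonitis",
              "esophageal variceal hemorrhage"}

def has_decompensation(records):
    """records: iterable of (setting, diagnosis) pairs,
    where setting is 'inpatient' or 'outpatient'."""
    outpatient_hits = 0
    for setting, diagnosis in records:
        if diagnosis not in QUALIFYING:
            continue
        if setting == "inpatient":
            return True            # a single discharge diagnosis suffices
        outpatient_hits += 1
        if outpatient_hits >= 2:
            return True            # two outpatient diagnoses suffice
    return False
```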
Main results. A total of 10,359 patients met the inclusion criteria and were enrolled between 1997 and 2010. Of these, 4280 were co-infected with HIV and HCV and treated with antiretroviral agents and 6079 were HCV-monoinfected. Age, race/ethnicity, and history of diabetes, alcohol dependence or abuse, and injection or non-injection drug use were similar between the 2 groups. The majority of participants were men, and HCV genotype 1 was the most prevalent genotype in both groups. More patients in the co-infected group than in the monoinfected group had HCV RNA levels ≥ 400,000 IU/mL and/or ≥ 1 × 10⁶ copies/mL.
Hepatic decompensation occurred more frequently among patients who were co-infected and receiving ART (271 [6.3%]) than among those who were monoinfected (305 [5.0%], P = 0.004). The incidence rate was 9.5 events per 1000 person-years (95% CI, 7.6–11.9) among co-infected patients treated with ART and 5.7 events per 1000 person-years (95% CI, 4.4–7.4) among monoinfected patients. Variceal hemorrhage was less common among co-infected than monoinfected patients (71 [26.2%] vs. 168 [55.1%], P < 0.001). The proportions of patients with ascites (226 [83.4%] co-infected vs. 236 [77.4%] monoinfected, P = 0.070) and spontaneous bacterial peritonitis (48 [17.7%] co-infected vs. 68 [22.3%] monoinfected, P = 0.171) were similar. After adjustment for age, race/ethnicity, diabetes, BMI, history of alcohol abuse, injection or non-injection drug use, and VA center patient volume, co-infected patients receiving ART had a higher rate of hepatic decompensation than monoinfected patients (hazard ratio, 1.83 [95% CI, 1.54–2.18]).
In subgroup analysis, rates of decompensation remained higher among co-infected patients even when they maintained HIV RNA levels < 1000 copies/mL (hazard ratio, 1.65 [95% CI, 1.20–2.27]).
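The incidence rates quoted above are events divided by person-time; a minimal sketch of the calculation with a log-normal approximate 95% confidence interval follows (the person-time figure is hypothetical, since total person-years are not given here, and the authors' exact CI method may differ):

```python
import math

events = 271            # decompensation events in the co-infected group (from the text)
person_years = 28_500   # hypothetical total follow-up time

rate = events / person_years * 1000   # events per 1000 person-years
se_log = 1 / math.sqrt(events)        # SE of log(rate) for a Poisson count
lo = rate * math.exp(-1.96 * se_log)
hi = rate * math.exp(1.96 * se_log)
print(f"{rate:.1f} per 1000 person-years (95% CI {lo:.1f}-{hi:.1f})")
```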
Conclusion. Patients who were co-infected with HIV and HCV and treated with ART had higher rates of hepatic decompensation compared with patients monoinfected with HCV. Good control of HIV viral loads in co-infected patients may not be sufficient to improve health outcomes.
Commentary
Currently, an estimated 3.5 to 5.5 million people in the United States are infected with HCV, about 1.5% of the population. Approximately 20% to 40% of those infected will develop chronic infection, and 10% to 25% of these patients will progress to severe liver disease [1]. Yet of the 3.5 million people thought to be chronically infected with HCV, only 50% are diagnosed and aware of the infection, and a mere 16% are treated for HCV [2].
Estimates suggest that about 10% of those with HCV are also infected with HIV. In the era prior to ART for HIV infections, patients with HIV and HCV most commonly died of HIV-related causes. In the post-ART era, patients are surviving longer and are now experiencing HCV-related comorbidities [3].
This study compared the incidence of hepatic decompensation between patients with HIV/HCV co-infection receiving ART and patients with HCV monoinfection. The results show that co-infected patients treated with ART had a higher incidence of hepatic decompensation than monoinfected patients. The study's strengths are its large enrollment (> 10,000 patients) and long follow-up periods (6.8 and 9.9 years for the co-infected and monoinfected cohorts, respectively). As the authors acknowledge, a weakness of the study is the exclusion of hepatic encephalopathy and jaundice from the definition of hepatic decompensation; their reasoning was that these diagnoses frequently result from unrelated causes, such as narcotic overdose and biliary obstruction, but the exclusion may have led to an underestimation of hepatic decompensation. Finally, 98.8% of the enrolled patients were male, so the results cannot be generalized to women.
Since 2011, the availability of direct-acting antivirals for the treatment of HCV has increased rapidly. These new agents have improved treatment outcomes, with better sustained virologic response rates, shorter treatment durations, and more favorable adverse event profiles [4]. Telaprevir and boceprevir were the first-generation protease inhibitors; these were followed by simeprevir in 2013. Sofosbuvir also became available in 2013 as the first polymerase inhibitor. These agents have been and continue to be evaluated in HIV/HCV co-infected patients, both treatment-naive and previously treated, with good outcomes. A fifth agent, faldaprevir, another protease inhibitor, is expected to become available this year, and others are in clinical trials [5]. For example, sofosbuvir-based regimens in co-infected patients have achieved sustained virologic response rates of 67% to 88%, depending on genotype, similar to the rates in monoinfected patients [6].
Applications for Clinical Practice
The authors also examined whether maintaining HIV viral loads below 1000 copies/mL reduced the risk for hepatic decompensation. The difference in incidence rates between those with HIV loads < 1000 copies/mL and those with loads ≥ 1000 copies/mL was small (9.4 [95% CI, 5.4–16.2] vs. 9.6 [95% CI, 7.5–12.2] events per 1000 person-years). The findings suggest that control of HIV viral load in co-infected patients is not sufficient to reduce the rate of liver complications. The authors propose that earlier treatment of HCV infection be considered in co-infected patients to improve health outcomes. The American Association for the Study of Liver Diseases and the Infectious Diseases Society of America have published guidelines for the diagnosis and management of HCV [7]. The difference in hepatic decompensation rates between mono- and co-infected patients should become less relevant as use of direct-acting antivirals expands.
—Mayu O. Frank, MS, ANP-BC and Allison Squires, PhD, RN, New York University College of Nursing
1. Action plan for the prevention, care, and treatment of viral hepatitis (2014-2016). US Department of Health and Human Services; 2014. Available at http://aids.gov/news-and-events/hepatitis/.
2. Yehia BR, Schranz AJ, Umscheid CA, Lo Re V. The treatment cascade for chronic hepatitis C virus infection in the United States: a systematic review and meta-analysis. PLoS One 2014;9:1–7.
3. Highleyman L. HIV/HCV coinfection: a new era of treatment. BETA 2001; Fall/Winter: 30–47.
4. Shiffman ML. Hepatitis C virus therapy in the direct acting antiviral era. Curr Opin Gastroenterol 2014;30:217–22.
5. Bichoupan K, Dieterich DT, Martel-Laferriere V. HIV-Hepatitis C virus co-infection in the era of direct-acting antivirals. Curr HIV/AIDS Rep. 2014 July 5. [Epub ahead of print]
6. Sulkowski M, Rodriguez-Torres M, Lalezari J, et al. All-oral therapy with sofosbuvir plus ribavirin for the treatment of HCV genotypes 1, 2, and 3 infection in patients co-infected with HIV (PHOTON-1). 64th annual meeting of the American Association for the Study of Liver Diseases. Washington, DC; Nov 2013.
7. The American Association for the Study of Liver Diseases and the Infectious Diseases Society of America. Recommendations for testing, managing, and treating hepatitis C. Accessed 1 Aug 2014 at www.hcvguidelines.org.
Study Overview
Objective. To compare the incidence of hepatic decompensation in patients who are co-infected with HIV and hepatitis C (HCV) and who underwent antiretroviral treatment and patients who are HCV-monoinfected.
Design. Retrospective cohort study.
Participants and setting. This study used the Veterans Aging Cohort Study Virtual Cohort (VACS-VC), which includes electronic medical record data from patients who are HIV-infected and are receiving care at Veterans Affairs (VA) medical facilities in the United States. Inclusion criteria for patients who were co-infected were: detectable HCV RNA, recently initiated antiretroviral therapy (ART), defined as use of ≥ 3 antiretroviral drugs from 2 classes or ≥ 3 nucleoside analogues within the VA system, HIV RNA level > 500 copies/mL within 180 days before starting ART, and were seen in the VACS-VC for at least 12 months after initiating ART. Inclusion criteria for patients who were monoinfected with HCV were detectable HCV RNA, no HIV diagnosis or antiretroviral prescriptions, and seen in the VACS-VC for at least 12 months prior to inclusion into the study. Exclusion criteria were hepatic decompensation, hepatocellular carcinoma, and liver transplant during the 12-month baseline period or receipt of interferon-based HCV therapy.
Main outcome measure. The primary outcome was incident hepatic decompensation, defined as diagnosis of ascites, spontaneous bacterial peritonitis, or esophageal variceal hemorrhage at hospital discharge or 2 such outpatient diagnoses.
Main results. A total of 10,359 patients met inclusion criteria and were enrolled between 1997 and 2010. Of these, 4280 were patients co-infected with HIV and HCV and treated with antiretroviral agents and 6079 were patients who were HCV-monoinfected. Age, race/ethnicity, and history of diabetes, alcohol dependence or abuse, and injection or non-injection drug were similar between the 2 groups. The majority of participants were men. HCV genotype 1 was most prevalent in both groups. There were more patients who had HCV RNA levels ≥ 400,000 IU/mL and/or ≥ 1x106 copies/mL in the co-infected group versus the monoinfected group.
Hepatic decompensation occurred more frequently among those who were co-infected and receiving ART (271 [6.3%]) than among those who were monoinfected (305 [5.0%], P = 0.004). The incidence rate was 9.5 events per 1000 person-years (95% CI, 7.6–11.9) among patients co-infected with HIV and HCV and treated with ART and 5.7 events per 1000 person-years (95% CI, 4.4–7.4) among patients who were monoinfected. Variceal hemorrhage was less common among patients who were co-infected as compared to those who were monoinfected (71 [26.2%] vs. 168 [55.1%], P < 0.001). The proportion of patients with ascites (226 [83.4%] in the co-infected group vs. 236 [77.4%] in the monoinfected, P = 0.070) and spontaneous bacterial peritonitis (48 [17.7%] in the co-infected group vs. 68 [22.3%] in the monoinfected, P = 0.171) were similar. After adjustment for age, race/ethnicity, diabetes, BMI, history of alcohol abuse, injection or non-injection drug use, and VA center patient volume, patients who were co-infected and receiving ART had a higher rate of hepatic decompensation than monoinfected patients (hazard ratio, 1.83 [95% CI, 1.54–2.18]).
In subgroup analysis, rates of decompensation remained higher even among co-infected patients who maintained HIV RNA levels < 1000 copies/mL (hazard ratio 1.65 [95% CI 1.20–2.27])
Conclusion. Patients who were co-infected with HIV and HCV and treated with ART had higher rates of hepatic decompensation compared with patients monoinfected with HCV. Good control of HIV viral loads in co-infected patients may not be sufficient to improve health outcomes.
Commentary
Currently, it is estimated that there are 3.5 to 5.5 million people in the United States infected with HCV, accounting for about 1.5% of the population. Approximately 20% to 40% of those infected will develop chronic infection and 10% to 25% of these patients will progress to experience severe liver disease [1]. Yet of the 3.5 million people who are thought be chronically infected with HCV, only 50% are diagnosed and are aware of the infection and a mere 16% are treated for HCV [2].
Estimates suggest that about 10% of those with HCV are also infected with HIV. In the era prior to ART for HIV infections, patients with HIV and HCV most commonly died of HIV-related causes. In the post-ART era, patients are surviving longer and are now experiencing HCV-related comorbidities [3].
This study compares the incidence of hepatic decompensation in patients with HIV and HCV co-infection who are undergoing treatment with ART and those with HCV monoinfection. The results show that patients who were co-infected and treated with ART had higher incidence of hepatic decompensation as compared with those who were monoinfected. This study’s strengths are the large enrollment numbers (> 10,000 patients) and the long follow-up periods (6.8 and 9.9 years for the co-infected and monoinfected cohorts, respectively). As the authors indicate, the weakness of this study is the exclusion of the diagnosis of hepatic encephalopathy and jaundice from their definition of hepatic decompensation. Their reasoning for doing so is that these frequently occur due to unrelated causes, such as narcotic overdose and biliary obstruction. It is possible that this resulted in an underestimation of hepatic decompensation. Finally, 98.8% of the enrolled patients were male. The study results cannot be generalized to women.
Since 2011, the availability of direct-acting antivirals for the treatment of HCV has rapidly increased. These new agents have improved treatment outcomes with better sustained virological response, shorter treatment duration, and better adverse event rates [4]. Telaprevir and boceprevir were first-generation protease inhibitors, and these were followed by simeprevir in 2013. Sofosbuvir also became available in 2013 as the first polymerase inhibitor. These agents were and continue to be evaluated for use in HIV/HCV co-infected patients both in treatment-naive and previously treated patients with good outcomes. A fifth agent, faldaprevir, another protease inhibitor, is expected to become available this year and others are in clinical trials [5]. Sustained virologic response rates of 67% to 88% depending on genotype with regimens using sofosbuvir in co-infected patients for example, have been achieved, which are similar to rates in monoinfected patients [6].
Applications for Clinical Practice
The authors found that management of HIV viral loads to less than 1000 copies/mL reduced the risk for hepatic decompensation. However, the difference in incidence rates between those whose HIV load was < 1000 copies/mL and those whose viral load was ≥ 1000 copies/mL was small (9.4 [95% CI, 5.4–16.2] vs. 9.6 [95% CI, 7.5–12.2]). The findings suggest that control of HIV viral loads in co-infected patients is not sufficient to reduce the rate of liver complications. The authors propose that earlier consideration be given to treatment of HCV infection in co-infected patients to improve health outcomes. The American Association for the Study of Liver Diseases and the Infectious Diseases Society of America have published guidelines for the diagnosis and management of HCV [7]. The difference in hepatic decompensation rates between mono- and co-infected patients should become less relevant as use of direct-acting antivirals expands.
—Mayu O. Frank, MS, ANP-BC and Allison Squires, PhD, RN, New York University College of Nursing
Study Overview
Objective. To compare the incidence of hepatic decompensation in patients who are co-infected with HIV and hepatitis C (HCV) and who underwent antiretroviral treatment and patients who are HCV-monoinfected.
Design. Retrospective cohort study.
Participants and setting. This study used the Veterans Aging Cohort Study Virtual Cohort (VACS-VC), which includes electronic medical record data from patients who are HIV-infected and are receiving care at Veterans Affairs (VA) medical facilities in the United States. Inclusion criteria for patients who were co-infected were: detectable HCV RNA, recently initiated antiretroviral therapy (ART), defined as use of ≥ 3 antiretroviral drugs from 2 classes or ≥ 3 nucleoside analogues within the VA system, HIV RNA level > 500 copies/mL within 180 days before starting ART, and were seen in the VACS-VC for at least 12 months after initiating ART. Inclusion criteria for patients who were monoinfected with HCV were detectable HCV RNA, no HIV diagnosis or antiretroviral prescriptions, and seen in the VACS-VC for at least 12 months prior to inclusion into the study. Exclusion criteria were hepatic decompensation, hepatocellular carcinoma, and liver transplant during the 12-month baseline period or receipt of interferon-based HCV therapy.
Main outcome measure. The primary outcome was incident hepatic decompensation, defined as diagnosis of ascites, spontaneous bacterial peritonitis, or esophageal variceal hemorrhage at hospital discharge or 2 such outpatient diagnoses.
Main results. A total of 10,359 patients met inclusion criteria and were enrolled between 1997 and 2010. Of these, 4280 were patients co-infected with HIV and HCV and treated with antiretroviral agents and 6079 were patients who were HCV-monoinfected. Age, race/ethnicity, and history of diabetes, alcohol dependence or abuse, and injection or non-injection drug were similar between the 2 groups. The majority of participants were men. HCV genotype 1 was most prevalent in both groups. There were more patients who had HCV RNA levels ≥ 400,000 IU/mL and/or ≥ 1x106 copies/mL in the co-infected group versus the monoinfected group.
Hepatic decompensation occurred more frequently among those who were co-infected and receiving ART (271 [6.3%]) than among those who were monoinfected (305 [5.0%], P = 0.004). The incidence rate was 9.5 events per 1000 person-years (95% CI, 7.6–11.9) among patients co-infected with HIV and HCV and treated with ART and 5.7 events per 1000 person-years (95% CI, 4.4–7.4) among patients who were monoinfected. Variceal hemorrhage was less common among patients who were co-infected as compared to those who were monoinfected (71 [26.2%] vs. 168 [55.1%], P < 0.001). The proportion of patients with ascites (226 [83.4%] in the co-infected group vs. 236 [77.4%] in the monoinfected, P = 0.070) and spontaneous bacterial peritonitis (48 [17.7%] in the co-infected group vs. 68 [22.3%] in the monoinfected, P = 0.171) were similar. After adjustment for age, race/ethnicity, diabetes, BMI, history of alcohol abuse, injection or non-injection drug use, and VA center patient volume, patients who were co-infected and receiving ART had a higher rate of hepatic decompensation than monoinfected patients (hazard ratio, 1.83 [95% CI, 1.54–2.18]).
In subgroup analysis, rates of decompensation remained higher even among co-infected patients who maintained HIV RNA levels < 1000 copies/mL (hazard ratio 1.65 [95% CI 1.20–2.27])
Conclusion. Patients who were co-infected with HIV and HCV and treated with ART had higher rates of hepatic decompensation compared with patients monoinfected with HCV. Good control of HIV viral loads in co-infected patients may not be sufficient to improve health outcomes.
Commentary
Currently, it is estimated that there are 3.5 to 5.5 million people in the United States infected with HCV, accounting for about 1.5% of the population. Approximately 20% to 40% of those infected will develop chronic infection and 10% to 25% of these patients will progress to experience severe liver disease [1]. Yet of the 3.5 million people who are thought be chronically infected with HCV, only 50% are diagnosed and are aware of the infection and a mere 16% are treated for HCV [2].
Estimates suggest that about 10% of those with HCV are also infected with HIV. In the era prior to ART for HIV infections, patients with HIV and HCV most commonly died of HIV-related causes. In the post-ART era, patients are surviving longer and are now experiencing HCV-related comorbidities [3].
This study compares the incidence of hepatic decompensation in patients with HIV and HCV co-infection who are undergoing treatment with ART and those with HCV monoinfection. The results show that patients who were co-infected and treated with ART had higher incidence of hepatic decompensation as compared with those who were monoinfected. This study’s strengths are the large enrollment numbers (> 10,000 patients) and the long follow-up periods (6.8 and 9.9 years for the co-infected and monoinfected cohorts, respectively). As the authors indicate, the weakness of this study is the exclusion of the diagnosis of hepatic encephalopathy and jaundice from their definition of hepatic decompensation. Their reasoning for doing so is that these frequently occur due to unrelated causes, such as narcotic overdose and biliary obstruction. It is possible that this resulted in an underestimation of hepatic decompensation. Finally, 98.8% of the enrolled patients were male. The study results cannot be generalized to women.
Since 2011, the availability of direct-acting antivirals for the treatment of HCV has rapidly increased. These new agents have improved treatment outcomes with better sustained virological response, shorter treatment duration, and better adverse event rates [4]. Telaprevir and boceprevir were first-generation protease inhibitors, and these were followed by simeprevir in 2013. Sofosbuvir also became available in 2013 as the first polymerase inhibitor. These agents were and continue to be evaluated for use in HIV/HCV co-infected patients both in treatment-naive and previously treated patients with good outcomes. A fifth agent, faldaprevir, another protease inhibitor, is expected to become available this year and others are in clinical trials [5]. Sustained virologic response rates of 67% to 88% depending on genotype with regimens using sofosbuvir in co-infected patients for example, have been achieved, which are similar to rates in monoinfected patients [6].
Applications for Clinical Practice
The authors found that management of HIV viral loads to less than 1000 copies/mL reduced the risk for hepatic decompensation. However, the difference in incidence rates between those whose HIV load was < 1000 copies/mL and those whose viral load was ≥ 1000 copies/mL was small (9.4 [95% CI, 5.4–16.2] vs. 9.6 [95% CI, 7.5–12.2] events per 1000 person-years). The findings suggest that control of HIV viral loads in co-infected patients is not sufficient to reduce the rate of liver complications. The authors propose that earlier consideration be given to treatment of HCV infection in co-infected patients to improve health outcomes. The American Association for the Study of Liver Diseases and the Infectious Diseases Society of America have published guidelines for the diagnosis and management of HCV [7]. The difference in hepatic decompensation rates between mono- and co-infected patients should become less relevant as use of direct-acting antivirals expands.
—Mayu O. Frank, MS, ANP-BC and Allison Squires, PhD, RN, New York University College of Nursing
1. Action plan for the prevention, care, and treatment of viral hepatitis (2014-2016). US Department of Health and Human Services; 2014. Available at http://aids.gov/news-and-events/hepatitis/.
2. Yehia BR, Schranz AJ, Umscheid CA, Lo Re V. The treatment cascade for chronic hepatitis C virus infection in the United States: a systematic review and meta-analysis. PLoS One 2014;9:1–7.
3. Highleyman L. HIV/HCV coinfection: a new era of treatment. BETA 2001; Fall/Winter: 30–47.
4. Shiffman ML. Hepatitis C virus therapy in the direct acting antiviral era. Curr Opin Gastroenterol 2014;30:217–22.
5. Bichoupan K, Dieterich DT, Martel-Laferriere V. HIV-Hepatitis C virus co-infection in the era of direct-acting antivirals. Curr HIV/AIDS Rep. 2014 July 5. [Epub ahead of print]
6. Sulkowski M, Rodriguez-Torres M, Lalezari J, et al. All-oral therapy with sofosbuvir plus ribavirin for the treatment of HCV genotypes 1, 2, and 3 infection in patients co-infected with HIV (PHOTON-1). 64th annual meeting of the American Association for the Study of Liver Diseases. Washington, DC; Nov 2013.
7. The American Association for the Study of Liver Diseases and the Infectious Diseases Society of America. Recommendations for testing, managing, and treating hepatitis C. Accessed 1 Aug 2014 at www.hcvguidelines.org.
Frailty as a Predictive Factor in Geriatric Trauma Patient Outcomes
Study Overview
Objective. To evaluate the usefulness of the Frailty Index (FI) as a prognostic indicator of adverse outcomes in geriatric trauma patients.
Design. Prospective cohort study.
Setting and participants. Geriatric (aged 65 and over) trauma patients admitted to inpatient units at a Level 1 trauma center in Arizona were enrolled. Patients were excluded if they were intubated/nonresponsive with no family members present or transferred from another institution (eg, skilled nursing facility). The following categories of data were collected: (a) patient demographics, (b) type and mechanism of injury, (c) vital signs (eg, Glasgow Coma Scale score, systolic blood pressure, heart rate, body temperature), (d) need for operative intervention, (e) in-hospital complications, (f) hospital and intensive care unit (ICU) lengths of stay, and (g) discharge disposition.
Patients or, in the case of nonresponsive patients, their closest relative, responded to the 50-item Frailty Index questionnaire, which includes questions regarding age, comorbid conditions, medications, activities of daily living (ADLs), social activities, mood, and nutrition. FI score ranges from 0 (non-frail) to 1 (frail), with an FI of 0.25 or more indicative of frailty based on established guidelines. Patients were categorized as frail or non-frail according to their FI scores and were followed during the course of their hospitalization.
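The study does not publish the FI scoring algorithm, but the deficit-accumulation convention the index follows is easy to illustrate. Below is a hypothetical Python sketch, assuming each of the 50 items is scored from 0 (deficit absent) to 1 (deficit present); the item responses are invented.

# Hypothetical sketch of a deficit-accumulation frailty index.
# Assumes each of the 50 questionnaire items is scored 0 (deficit
# absent) to 1 (deficit present); the responses below are invented.

def frailty_index(item_scores: list[float]) -> float:
    """FI = sum of deficit scores / number of items; ranges 0 to 1."""
    return sum(item_scores) / len(item_scores)

def classify(fi: float, cutoff: float = 0.25) -> str:
    """Dichotomize at the 0.25 cutoff used in the study."""
    return "frail" if fi >= cutoff else "non-frail"

# Example respondent with 15 of 50 deficits present:
scores = [1.0] * 15 + [0.0] * 35
fi = frailty_index(scores)
print(f"FI = {fi:.2f} -> {classify(fi)}")  # FI = 0.30 -> frail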
Main outcome measure. The primary outcome measure was in-hospital complications. In-hospital complications included myocardial infarction, cardiopulmonary arrest, pneumonia, pulmonary embolism, sepsis, urinary tract infection, deep venous thrombosis, disseminated intravascular coagulation, renal insufficiency, and reoperation. The secondary outcome measure was adverse discharge disposition, which was defined as death during the course of hospitalization or discharge to a skilled nursing facility.
Main results. The sample consisted of 250 patients with a mean age of 77.9 years. Among these, 44.0% were considered frail. Patients with frailty were more likely to have a higher Injury Severity Score (P = 0.04) and a higher mean FI (P = 0.01) than those without frailty. There were no statistically significant differences with respect to age (P = 0.21), mechanism of injury (P = 0.09), systolic blood pressure (P = 0.30), or Glasgow Coma Scale score (P = 0.91) between the groups.
Patients with frailty were more likely to develop in-hospital complications (37.3% vs 21.4%, P = 0.001) than those without frailty. Among these complications, pneumonia and urinary tract infection were the most common. There were no differences in the rate of reoperation (P = 0.54) between the 2 groups. An FI of 0.25 or higher was associated with the development of in-hospital complications (P = 0.001) even after adjusting for age, systolic blood pressure, heart rate, and Injury Severity Score.
Frail patients had longer hospital length of stay (P = 0.01) and ICU length of stay (P = 0.01), and were more likely to have adverse discharge disposition (37.3% vs. 12.9%, P = 0.001). All patients who died during the course of hospitalization (n = 5) were considered frail. Frailty was also found to be a predictor of adverse discharge disposition (P = 0.001) after adjustment for age, male sex, Injury Severity Score, and mechanism of injury.
Conclusion. The FI is effective in identifying geriatric trauma patients who are vulnerable to poor health outcomes.
Commentary
The diagnosis and treatment of elderly patients is complicated by the presence of multiple geriatric syndromes, including frailty [1]. Frailty is defined as increased vulnerability to negative health outcomes, marked by physical and functional decline, that eventually leads to disability, dependency, and mortality [2]. Factors such as age, malnutrition, and disease give rise to dysregulation of bodily systems that eventually leads to reductions in mobility, strength, and cognition in frail older adults [3]. In turn, frail patients, who lack the physiological reserves to withstand illness and adapt to stressors, experience high rates of hospitalization and mortality and reduced quality of life. Unsurprisingly, mortality rates among geriatric trauma patients are higher than those among nongeriatric adult trauma patients [4]. It is, therefore, essential to identify patients with frailty at the outset of hospitalization in order to improve health outcomes and reduce mortality rates in this population. Yet there is a dearth of assessment tools to predict outcomes in frail trauma patients [5].
This study has several strengths. Outcome measures are plainly stated. The inclusion criteria were broad enough to include most geriatric trauma patients, but the authors eliminated a number of confounders by excluding patients admitted from institutional settings, who may have been more susceptible to negative health outcomes at baseline than noninstitutionalized adults. Recruitment strategies were acceptable and reflect ethical standards. Groups were defined based on an accepted and previously validated FI cutoff. Lack of blinding did not threaten the study’s design given that most outcomes were beyond the control of study participants. Multivariate regression adjusted for a number of potential confounders including age, length of hospitalization, and injury severity. The Injury Severity Score, the Abbreviated Injury Scale score, and the Glasgow Coma Scale score are validated instruments that are widely used and enable standardized assessments of cognition and degree of injury.
The study methodology also possesses a number of weaknesses. The authors followed patients from admission to discharge; however, they did not re-evaluate patients following their release from the inpatient setting. It is, therefore, not clear whether the FI is predictive of quality of life, functional status, or hospital readmissions upon discharge into the community. The cohort was largely male (69.2%) and predominantly Caucasian, and participants were recruited from only one medical center. All of these limit the study’s generalizability. In addition, the authors do not clarify how they defined the criteria for in-hospital complications or adverse discharge disposition. For example, the study does not consider skin breakdown, a common concern among older patients who are hospitalized, as an in-hospital complication. Finally, the authors did not adjust for the number of diagnoses at baseline or the presence of chronic comorbid conditions, which are also associated with negative health outcomes.
Applications for Clinical Practice
Although lengthy, with over 50 variables in 5 categories, the FI has the potential to help health care providers improve risk stratification, assess patient acuity, and formulate treatment plans to improve the health of frail elderly patients. The FI will enable hospitals to direct appropriate resources, including staff, to the most vulnerable subsets of patients in order to improve outcomes and reduce costs. Moreover, awareness of frailty enables greater discussion among trauma patients and their families about the risks and benefits of complex interventions, increases referrals to palliative care, and improves quality of life in this population [6].
—Tina Sadarangani, MSN, APRN, and Allison Squires, PhD, RN, New York University College of Nursing
1. Rich MW. Heart failure in the oldest patients: the impact of comorbid conditions. Am J Geriatr Cardiol 2005;14:134–41.
2. Fried LP, Ferrucci L, Darer J, et al. Untangling the concepts of disability, frailty, and comorbidity: implications for improved targeting and care. J Gerontol A Biol Sci Med Sci 2004;59:255–63.
3. Lang PO, Michel JP, Zekry D. Frailty syndrome: a transitional state in a dynamic process. Gerontology 2009;55:539–49.
4. Hashmi A, Ibrahim-Zada I, Rhee P, et al. Predictors of mortality in geriatric trauma patients: a systematic review and meta-analysis. J Trauma Acute Care Surg 2014;76:894–901.
5. American College of Surgeons Trauma Quality Improvement Program. ACS TQIP geriatric trauma management guidelines. Available at https://mtqip.org/docs/.
6. Koller K, Rockwood K. Frailty in older adults: implications for end-of-life care. Cleve Clin J Med 2013;80:168–74.
Access to a Behavioral Weight Loss Website With or Without Group Sessions Increased Weight Loss in Statewide Campaign
Study Overview
Objective. To determine the efficacy and cost-effectiveness of adding an evidence-based internet behavioral weight loss intervention alone or combined with optional group sessions to ShapeUp Rhode Island 2011 (SURI), a 3-month statewide wellness campaign.
Design. 3-arm randomized clinical trial.
Setting and participants. Study participants were recruited from the Rhode Island community via employers, media, and mass mailings at the time of SURI 2011 registration. Of the 3806 participants who joined the weight loss division, 1139 were willing to be contacted for research, and the first 431 were screened for study eligibility. Exclusion criteria were minimal: age < 18 years or > 70 years, body mass index (BMI) < 25 kg/m2, pregnant, nursing, or planning to become pregnant, a serious medical condition (eg, cancer), unreliable internet access, non-English speaking, current or previous participation in the authors’ weight loss studies, and planned relocation. Those who reported a medical condition that could interfere with safe participation (eg, diabetes) obtained a doctor’s consent to participate. Of those screened, 230 met inclusion criteria, completed orientation procedures, and were randomized using a 1:2:2 randomization scheme to the standard SURI program (S; n = 46); SURI plus internet behavioral weight loss intervention (SI; n = 90); or SURI plus internet behavioral weight loss intervention plus optional group sessions (SIG; n = 94). To avoid contamination, individuals on the same SURI team (see below) were randomized to the same intervention.
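The report specifies the 1:2:2 ratio and team-level assignment but not the mechanics. The Python sketch below shows one standard way such an allocation could be implemented, permuted-block randomization applied to whole teams; the team IDs are hypothetical, and this is not the authors’ actual procedure.

# Hypothetical sketch of team-level (cluster) randomization in a 1:2:2
# ratio (S : SI : SIG), using permuted blocks to preserve the ratio.
import random

def randomize_teams(team_ids: list[str], seed: int = 2011) -> dict[str, str]:
    """Assign whole teams to arms so teammates share an intervention."""
    rng = random.Random(seed)
    block_template = ["S", "SI", "SI", "SIG", "SIG"]  # one 1:2:2 block
    assignments: dict[str, str] = {}
    for start in range(0, len(team_ids), len(block_template)):
        block = block_template[:]
        rng.shuffle(block)  # random arm order within each block
        for team, arm in zip(team_ids[start:start + len(block)], block):
            assignments[team] = arm
    return assignments

print(randomize_teams([f"team_{i}" for i in range(10)]))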
Intervention. Participants in the standard SURI program did not receive any behavioral weight loss treatment. SURI is a self-sustaining, annual community campaign designed to help Rhode Islanders lose weight and increase their physical activity through an online, team-based competition. Participants join in teams, enter the weight loss or physical activity division or both, and compete with other teams. Throughout the 3-month program, participants have access to a reporting SURI website where they submit their weekly weight and activity data and view their personal and team progress. They also receive paper logs to record weight and activity, a pedometer, access to newsletters and community workshops, and recognition for meeting goals.
Participants in the SI arm received the 3-month SURI program plus a 3-month internet behavioral weight loss intervention. Before SURI began, SI participants attended a 1-hour group meeting during which they received their weight loss goal (lose 1 to 2 pounds per week), calorie and fat gram goal (starting weight < 250 lbs: 1200–1500 kcal/day, 40–50 g of fat; starting weight ≥ 250 lbs: 1500–1800 kcal/day, 50–60 g of fat), and activity goal (gradually increase to 200 minutes of aerobic activity per week). During this session, participants were also taught self-monitoring skills and oriented to an internet behavioral weight loss intervention website developed by the authors. The intervention website included 12 weekly, 10- to 15-minute multimedia lessons based on the Diabetes Prevention Program and a self-monitoring platform where participants tracked their daily weight, calorie, and activity information. Participants received weekly automated feedback on their progress. The intervention website also included information on meal plans, prepackaged meals, and meal replacements.
Participants in the SIG arm received everything in SI and were additionally given the option to attend weekly group meetings at Miriam Hospital’s Weight Control and Diabetes Research Center during the 3 months. The 12 weekly, optional group sessions were led by masters-level staff with extensive training in behavioral weight loss. Sessions involved private weigh-ins and covered topics that supplemented the internet intervention (eg, recipe modification, portion control).
Main outcome measures. The main outcome was weight loss at the end of the 3-month program. Participants completed measures (ie, weight, BMI) in person at baseline and 3 months (post-treatment), and at 6- and 12-month follow-up visits. Adherence measures included reported weight and physical activity on the SURI website (S, SI, and SIG); log-ins, viewed lessons, and self-monitoring entries on the intervention website (SI, SIG); and number of group meetings attended (SIG). To measure weight loss behaviors, the authors used the Weight Control Practices questionnaire to assess engagement in core weight loss strategies targeted in treatment, and the Paffenbarger questionnaire to assess weekly kcal expended in moderate to vigorous activity. The authors also assessed costs from the payer (labor, rent, intervention materials), participant (SURI registration fee, transportation, time spent on intervention), and societal (sum of payer and participant costs) perspectives in order to calculate the cost per kg of weight lost in each study arm.
Results. Participants were predominantly female and non-Hispanic white, with a mean BMI of 34.4 kg/m2 (SE = 0.05). Groups differed only on education (P = 0.02), and attendance at post-treatment and at the 6- and 12-month follow-ups was high (93%, 91%, and 86%, respectively). The authors found that weight loss did not differ by educational attainment (Ps > 0.57).
Overall, there was a significant group-by-time interaction for weight loss (P < 0.001). Percentage weight loss at 3 months differed among the 3 groups (S: 1.1% ± 0.9%; SI: 4.2% ± 0.6%; SIG: 6.1% ± 0.6%; Ps ≤ 0.04). There was also an overall group effect for the percentage of individuals achieving 5% weight loss (P < 0.001). SI and SIG had higher percentages of participants who achieved a 5% weight loss than the control (SI: 42%; SIG: 54%; S: 7%; Ps < 0.001) but did not differ from one another (P = 0.01). Initial weight losses and the percentage of participants who achieved a 5% weight loss were largely maintained through the no-treatment follow-up phase at 6 months, but the 3 groups no longer differed from one another at 12 months (S: 1.2% [SE = 0.9]; SI: 2.2% [SE = 0.6]; SIG: 3.3% [SE = 0.6]; Ps > 0.05).
All groups reported significant increases in physical activity over time (P < 0.001). More frequent reporting of weight and physical activity data on the SURI website was associated with greater percentage weight loss (r = 0.25; P < 0.001). The numbers of log-ins and lessons viewed on the intervention website were positively associated with percentage weight loss (r = 0.45, P ≤ 0.001, and r = 0.34, P ≤ 0.001, respectively). Greater attendance at group sessions was associated with better weight outcomes (r = 0.61; P ≤ 0.001). Younger age was associated with poorer adherence, including less reporting on the SURI website, fewer lessons viewed, and fewer log-ins to the weight loss website.
There was a significant group-by-time interaction for the use of behavioral weight loss strategies (P < 0.001), and increased use of these strategies was associated with greater percentage weight loss in all 3 groups post-treatment. At 12 months, however, there were no longer differences between groups in the use of these strategies (Ps ≥ 0.07).
Cost per kg of weight loss was similar for S ($39) and SI ($35), but both were lower than SIG ($114).
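The cost-effectiveness arithmetic behind these figures is straightforward. A minimal Python sketch of the societal-perspective calculation follows; all dollar inputs and the kilogram total are hypothetical, chosen only to reproduce a $35/kg result like SI’s, since the paper reports only the final dollar-per-kg values.

# Minimal sketch of the reported metric: societal cost per kg lost,
# where societal cost = payer cost + participant cost. All inputs
# below are hypothetical illustrations, not figures from the paper.

def cost_per_kg_lost(payer_cost: float, participant_cost: float,
                     total_kg_lost: float) -> float:
    """Societal-perspective cost per kilogram of weight lost in an arm."""
    return (payer_cost + participant_cost) / total_kg_lost

# Example arm: $20,000 payer costs, $15,000 participant costs, and
# 1000 kg of total weight lost across participants -> $35 per kg.
print(f"${cost_per_kg_lost(20_000, 15_000, 1_000):.0f} per kg")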
Conclusion. Both intervention arms (SI and SIG) achieved more weight loss at 6 months than SURI alone. Although mean weight loss was greatest with optional group sessions (SIG), the addition of the behavioral intervention website alone (SI) was the most cost-effective method to enhance weight loss. Thus, adding a novel internet behavioral weight loss intervention to a statewide community health initiative may be a cost-effective approach to improving obesity treatment outcomes.
Commentary
Weight loss treatment is recommended for adults with a BMI > 30 kg/m2, as well as those with a BMI of 25 to 29.9 kg/m2 and weight-related comorbidities [1]. Intensive behavioral treatment should be the first line of intervention for overweight and obese individuals and can lead to 8% to 10% weight loss [2], particularly in the initial months of treatment [3]. However, behavioral treatment is inherently challenging and time-consuming, and it is readily available to only a fraction of the intended population. Although weight losses achieved in intensive lifestyle interventions such as the Diabetes Prevention Program (DPP) [4] may be higher, innovative community weight loss programs that use a variety of weight loss strategies can reach a wider population of overweight and obese individuals at a lower cost [3].
This study built upon the authors’ previous work [5], which showed that providing SURI participants with behavioral weight loss strategies via email significantly improved 3-month weight losses. In the current study, they compared SURI alone with SURI plus access to an internet behavioral weight loss website, with or without optional group sessions. That the between-group weight loss differences were not maintained at 12 months suggests that access to the behavioral weight loss website should perhaps have continued longer and/or included a maintenance phase after the 3-month intervention. Weight loss often peaks around 6 months, and weight regain occurs without effective maintenance therapy [6].
General strengths of the study included the use of a randomized, intention-to-treat design, dissemination of evidence-based weight loss strategies, objective outcomes measurement, adherence metrics, and strong retention of participants with clear accounting of all enrolled patients from recruitment through analysis. This study demonstrated significant weight loss in an intervention with minimal/optional health professional interaction. This intervention also placed responsibility on participants to self-monitor their diet and physical activity, participate in online lessons, and attend optional group sessions. The success of this community-based intervention suggests feasibility and scalability within a real-world setting. The authors also conducted cost-effectiveness analyses demonstrating that the SI program was more cost-effective than SIG.
However, there are weaknesses as well. In setting the sample size for each arm, the authors provided no justification for choosing a 1:2:2 randomization scheme. In randomized controlled trials, the allocation of participants to the different study arms is usually balanced, which maximizes statistical power [7]. However, unequal randomization ratios among study arms can be beneficial and even necessary for various reasons, including cost, availability of the intervention, overcoming intervention/treatment learning curves, and an anticipated higher drop-out rate. Providing a justification for the unbalanced sample sizes would be helpful to future researchers looking to replicate the study. Additionally, participants were mostly non-Hispanic white and female, thus limiting generalizability. While representative of the broader Rhode Island population, findings based on this sample may not be applicable to vulnerable (eg, low-literacy, resource-poor) or underrepresented (eg, minority) populations [8].
Applications for Clinical Practice
An internet-based behavioral weight loss intervention, when added to a community weight management initiative, is cost-effective and can lead to short-term weight loss. Given that clinicians often lack time, training, and resources to adequately address obesity in the office [9,10], encouraging patients to enroll in similar programs may be an effective strategy to address such barriers. The study also highlights the need for maintenance interventions to help keep weight off. Findings should be replicated in more diverse communities.
—Katrina F. Mateo, MPH, and Melanie Jay, MD, MS
1. Clinical guidelines on the identification, evaluation, and treatment of overweight and obesity in adults. National Heart, Lung, and Blood Institute; 1998.
2. Wadden TA, Butryn ML, Wilson C. Lifestyle modification for the management of obesity. Gastroenterology 2007;132:2226–38.
3. Butryn ML, Webb V, Wadden TA. Behavioral treatment of obesity. Psychiatr Clin North Am 2011;34:841–59.
4. The Diabetes Prevention Program Research Group. The Diabetes Prevention Program (DPP): Description of lifestyle intervention. Diabetes Care 2002;25:2165–71.
5. Wing RR, Crane MM, Thomas JG, et al. Improving weight loss outcomes of community interventions by incorporating behavioral strategies. Am J Public Health 2010;100:2513–9.
6. Wing RR, Tate DF, Gorin A, et al. A self-regulation program for maintenance of weight loss. N Engl J Med 2006;355:1563–71.
7. Dumville JC, Hahn S, Miles JNV, Torgerson DJ. The use of unequal randomisation ratios in clinical trials: a review. Contemp Clin Trials 2006;27:1–12.
8. Marshall PL. Ethical challenges in study design and informed consent for health research in resource-poor settings. World Health Organization; 2007.
9. Jay M, Gillespie C, Ark T, et al. Do internists, pediatricians, and psychiatrists feel competent in obesity care? Using a needs assessment to drive curriculum design. J Gen Intern Med 2008;23:1066–70.
10. Loureiro ML, Nayga RM. Obesity, weight loss, and physician’s advice. Soc Sci Med 2006;62:2458–68.
Study Overview
Objective. To determine the efficacy and cost-effectiveness of adding an evidence-based internet behavioral weight loss intervention alone or combined with optional group sessions to ShapeUp Rhode Island 2011 (SURI), a 3-month statewide wellness campaign.
Design. 3-arm randomized clinical trial.
Setting and participants. Study participants were recruited from the Rhode Island community via employers, media, and mass mailings at the time of SURI 2011 registration. Of the 3806 participants that joined the weight loss division, 1139 were willing to be contacted for research, and the first 431 were screened for study eligibility. Exclusion criteria were minimal: age < 18 years or > 70 years, body mass index (BMI) < 25 kg/m2, pregnant, nursing, or plans to become pregnant, a serious medical condition (eg, cancer), unreliable internet access, non-English speaking, current or previous participation in our weight loss studies, and planned relocation. Those who reported a medical condition that could interfere with safe participation (eg, diabetes) obtained doctor’s consent to participate. Of those screened, 230 met inclusion criteria, completed orientation procedures, and were randomized using a 1:2:2 randomization scheme to the standard SURI program (S; n = 46); SURI plus internet behavioral weight loss intervention (SI; n = 90); or SURI plus internet behavioral weight loss intervention plus optional group sessions (SIG; n = 94). To avoid contamination, individuals on the same SURI team (see below) were randomized to the same intervention.
Intervention. Participants in the standard SURI program did not receive any behavioral weight loss treatment. SURI is a self-sustaining, annual community campaign designed to help Rhode Islanders lose weight and increase their physical activity through an online, team-based competition. Participants join in teams, enter the weight loss or physical activity division or both, and compete with other teams. Throughout the 3-month program, participants have access to a reporting SURI website where they submit their weekly weight and activity data and view their personal and team progress. They also receive paper logs to record weight and activity, a pedometer, access to newsletters and community workshops, and recognition for meeting goals.
Participants in the SI arm received the 3-month SURI program plus a 3-month internet behavioral weight loss intervention. Before SURI began, SI participants attended a 1-hour group meeting during which they received their weight loss goal (lose 1 to 2 pounds per week), calorie and fat gram goal (starting weight < 250 lbs: 1200–1500 kcal/day, 40–50 g of fat; starting weight ≥ 250 lbs: 1500–1800 kcal/day, 50–60 g of fat), and activity goal (gradually increase to 200 minutes of aerobic activity per week). During this session, participants were also taught self-monitoring skills and oriented to an internet behavioral weight loss intervention website developed by the authors. The intervention website included 12 weekly, 10- to 15-minute multimedia lessons based on the Diabetes Prevention Program and a self-monitoring platform where participants tracked their daily weight, calorie, and activity information. Participants received weekly automated feedback on their progress. The intervention website also included information on meal plans, prepackaged meals, and meal replacements.
Participants in the SIG arm received everything in SI and were additionally given the option to attend weekly group meetings at Miriam Hospital’s Weight Control and Diabetes Research Center during the 3 months. The 12 weekly, optional group sessions were led by masters-level staff with extensive training in behavioral weight loss. Sessions involved private weigh-ins and covered topics that supplemented the internet intervention (eg, recipe modification, portion control).
Main outcomes measures. The main outcome was weight loss at the end of the 3-month program. Participants completed measures (ie, weight, BMI) in person at baseline and 3 months (post-treatment), and at 6- and 12-month follow-up visits. Adherence measures included reported weight and physical activity on the SURI website (S, SI, and SIG), log ins, viewed lessons, and self-monitoring entries on the intervention website (SI, SIG), and number of groups meetings attended (SIG). To measure weight loss behaviors, the authors used the Weight Control Practices questionnaire to assess engagement in core weight loss strategies targeted in treatment, and the Paffenbarger questionnaire to assess weekly kcal expended in moderate to vigorous activity. The authors also assessed costs from the payer (labor, rent, intervention materials), participant (SURI registration fee, transportation, time spent on intervention), and societal perspective (sum of payer and participant costs) in order to calculate the cost per kg of weight lost in each study arm.
Results. Participants were predominantly female, non-Hispanic white, and had a mean BMI of 34.4 kg/m2 (SE = 0.05). Groups differed only on education (P = 0.02), and attendance at post-treatment and 6- and 12-month follow-up were high (93%, 91%, and 86% respectively). The authors found that weight loss did not differ by educational attainment (P s > 0.57).
Overall, there was a significant group-by-time interaction for weight loss (P < 0.001). Percentage weight loss at 3 months differed among the 3 groups—S: 1.1% ± 0.9%; SI: 4.2% ± 0.6%; SIG: 6.1% ± 0.6% (P s ≤ 0.04). There was also an overall group effect for percentage of individuals achieving 5% weight loss (P < 0.001). SI and SIG had higher percentages of participants who achieved a 5% weight loss than the control (SI: 42%; SIG: 54%; S: 7%; P s < 0.001) but did not differ from one another (P = 0.01). Initial weight losses and percentage of participants who achieved a 5% weight loss were largely maintained through the no-treatment follow-up phase at 6-months, but the 3 groups no longer differed from one another at 12 months (S: 1.2% [SE =0.9]; SI: 2.2% [SE = 0.6]; SIG: 3.3% [SE = 0.6]; P s > 0.05).
All groups reported significant increases in physical activity over time (p < 0.001). More reporting of weight and physical activity data on the SURI website was associated with greater percentage weight loss (r = 0.25; P < 0.001). Number of log ins and lessons viewed on the intervention website were positively associated with percentage weight loss (r = 0.45; P ≤ 0.001; and r = 0.34; P ≤ 0.001 respectively). Greater attendance to group sessions was associated with better weight outcomes (r = 0.61; P ≤ 0.001). Younger age was associated with poorer adherence, including less reporting on the SURI website, viewing of lessons, and logging in to the weight loss website.
There was a significant group-by-time effect interaction for the use of behavioral weight loss strategies (P < 0.001), and increased use of these strategies was associated with greater percentage weight loss in all 3 groups post-treatment. At 12 months, however, there were no differences between groups in the use of these strategies (P s ≤ 0.07).
Cost per kg of weight loss was similar for S ($39) and SI ($35), but both were lower than SIG ($114).
Conclusion. Both intervention arms (SI and SIG) achieved more weight loss at 6 months than SURI alone. Although mean weight loss was greatest with optional group sessions (SIG), the addition of the behavioral intervention website alone (SI) was the most cost-effective method to enhance weight loss. Thus, adding a novel internet behavioral weight loss intervention to a statewide community health initiative may be a cost-effective approach to improving obesity treatment outcomes.
Commentary
Weight loss treatment is recommended for adults with a BMI of > 30 kg/m2, as well as those with BMI < 25 kg/m2 with weight-related comorbidities [1]. Intensive behavioral treatment should be the first line of intervention for overweight and obese individuals and can lead to 8% to 10% weight loss [2], particularly in initial months of treatment [3]. However, behavioral treatment is inherently challenging and time-consuming, and readily available to only a fraction of the intended population. Although weight losses achieved from intensive lifestyle interventions such as the Diabetes Prevention Program (DPP) [4] may be higher, innovative community weight loss programs that use a variety of weight loss strategies can provide opportunities to a wider population of overweight and obese individuals and at a lower cost [3].
This study built upon the authors’ previous work [5], which showed that SURI participants with behavioral weight loss strategies via email significantly improved 3-month weight losses. In this current study, they compared SURI alone to SURI with additional access to an internet behavioral weight loss website with or without optional group sessions. Since significant weight loss was not maintained at 12 months, this suggests that perhaps access to the behavioral weight loss website should have continued for longer and/or included a maintenance phase after the 3-month intervention. Weight loss often reaches its peak around 6 months, and weight regain occurs without effective maintenance therapy [6].
General strengths of the study included the use of a randomized, intention-to-treat design, dissemination of evidence-based weight loss strategies, objective outcomes measurement, adherence metrics, and strong retention of participants with clear accounting of all enrolled patients from recruitment through analysis. This study demonstrated significant weight loss in an intervention with minimal/optional health professional interaction. This intervention also placed responsibility on participants to self-monitor their diet and physical activity, participate in online lessons, and attend optional group sessions. The success of this community-based intervention suggests feasibility and scalability within a real-world setting. The authors also conducted cost-effectiveness analyses demonstrating that the SI program was more cost-effective than SIG.
However, there are weaknesses as well. In setting the sample size for each arm of this study, no justification was described for choosing a 1:2:2 randomization scheme. In randomized control trials, the allocation of participants into the different study arms is often balanced to equal numbers which maximizes statistical power [7]. However, the use of unequal randomization ratios among study arms can be beneficial and even necessary for various reasons including cost, availability of the intervention, overcoming intervention/treatment learning curves, and if a higher drop-out rate is anticipated. Providing a justification for unbalanced sample sizes would be helpful to future researchers looking to replicate the study. Additionally, participants were mostly non-Hispanic white and female, thus limiting generalizability. While representative of the broader Rhode Island population, findings based on this population this may not be applicable to vulnerable (ie, low literacy, resource-poor) or underrepresented populations (ie, minorities) [8].
Applications for Clinical Practice
An internet-based behavioral weight loss intervention, when added to a community weight management initiative, is cost-effective and can lead to short-term weight loss. Given that clinicians often lack time, training, and resources to adequately address obesity in the office [9,10], encouraging patients to enroll in similar programs may be an effective strategy to address such barriers. The study also highlights the need for maintenance interventions to help keep weight off. Findings should be replicated in more diverse communities.
—Katrina F. Mateo, MPH, and Melanie Jay, MD, MS
Study Overview
Objective. To determine the efficacy and cost-effectiveness of adding an evidence-based internet behavioral weight loss intervention alone or combined with optional group sessions to ShapeUp Rhode Island 2011 (SURI), a 3-month statewide wellness campaign.
Design. 3-arm randomized clinical trial.
Setting and participants. Study participants were recruited from the Rhode Island community via employers, media, and mass mailings at the time of SURI 2011 registration. Of the 3806 participants that joined the weight loss division, 1139 were willing to be contacted for research, and the first 431 were screened for study eligibility. Exclusion criteria were minimal: age < 18 years or > 70 years, body mass index (BMI) < 25 kg/m2, pregnant, nursing, or plans to become pregnant, a serious medical condition (eg, cancer), unreliable internet access, non-English speaking, current or previous participation in our weight loss studies, and planned relocation. Those who reported a medical condition that could interfere with safe participation (eg, diabetes) obtained doctor’s consent to participate. Of those screened, 230 met inclusion criteria, completed orientation procedures, and were randomized using a 1:2:2 randomization scheme to the standard SURI program (S; n = 46); SURI plus internet behavioral weight loss intervention (SI; n = 90); or SURI plus internet behavioral weight loss intervention plus optional group sessions (SIG; n = 94). To avoid contamination, individuals on the same SURI team (see below) were randomized to the same intervention.
Intervention. Participants in the standard SURI program did not receive any behavioral weight loss treatment. SURI is a self-sustaining, annual community campaign designed to help Rhode Islanders lose weight and increase their physical activity through an online, team-based competition. Participants join in teams, enter the weight loss or physical activity division or both, and compete with other teams. Throughout the 3-month program, participants have access to a reporting SURI website where they submit their weekly weight and activity data and view their personal and team progress. They also receive paper logs to record weight and activity, a pedometer, access to newsletters and community workshops, and recognition for meeting goals.
Participants in the SI arm received the 3-month SURI program plus a 3-month internet behavioral weight loss intervention. Before SURI began, SI participants attended a 1-hour group meeting during which they received their weight loss goal (lose 1 to 2 pounds per week), calorie and fat gram goal (starting weight < 250 lbs: 1200–1500 kcal/day, 40–50 g of fat; starting weight ≥ 250 lbs: 1500–1800 kcal/day, 50–60 g of fat), and activity goal (gradually increase to 200 minutes of aerobic activity per week). During this session, participants were also taught self-monitoring skills and oriented to an internet behavioral weight loss intervention website developed by the authors. The intervention website included 12 weekly, 10- to 15-minute multimedia lessons based on the Diabetes Prevention Program and a self-monitoring platform where participants tracked their daily weight, calorie, and activity information. Participants received weekly automated feedback on their progress. The intervention website also included information on meal plans, prepackaged meals, and meal replacements.
Participants in the SIG arm received everything in SI and were additionally given the option to attend weekly group meetings at Miriam Hospital’s Weight Control and Diabetes Research Center during the 3 months. The 12 weekly, optional group sessions were led by masters-level staff with extensive training in behavioral weight loss. Sessions involved private weigh-ins and covered topics that supplemented the internet intervention (eg, recipe modification, portion control).
Main outcome measures. The main outcome was weight loss at the end of the 3-month program. Participants completed measures (ie, weight, BMI) in person at baseline and 3 months (post-treatment), and at 6- and 12-month follow-up visits. Adherence measures included reported weight and physical activity on the SURI website (S, SI, and SIG); log-ins, viewed lessons, and self-monitoring entries on the intervention website (SI, SIG); and number of group meetings attended (SIG). To measure weight loss behaviors, the authors used the Weight Control Practices questionnaire to assess engagement in core weight loss strategies targeted in treatment, and the Paffenbarger questionnaire to assess weekly kcal expended in moderate to vigorous activity. The authors also assessed costs from the payer (labor, rent, intervention materials), participant (SURI registration fee, transportation, time spent on intervention), and societal (sum of payer and participant costs) perspectives in order to calculate the cost per kg of weight lost in each study arm.
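As a rough illustration of the cost outcome (a minimal sketch with entirely hypothetical figures, not the study's actual cost data), the societal cost per kilogram lost for an arm is simply the sum of payer and participant costs divided by the total weight lost in that arm:

```python
def cost_per_kg(payer_costs: float, participant_costs: float,
                total_kg_lost: float) -> float:
    """Societal-perspective cost-effectiveness:
    (payer costs + participant costs) / total kg of weight lost."""
    societal_cost = payer_costs + participant_costs
    return societal_cost / total_kg_lost

# Hypothetical figures for a single study arm (illustration only):
print(round(cost_per_kg(payer_costs=12000.0,
                        participant_costs=8000.0,
                        total_kg_lost=520.0), 2))  # ~38.46 dollars per kg
```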
Results. Participants were predominantly female and non-Hispanic white, with a mean BMI of 34.4 kg/m2 (SE = 0.05). Groups differed only on education (P = 0.02), and attendance at post-treatment and at the 6- and 12-month follow-ups was high (93%, 91%, and 86%, respectively). The authors found that weight loss did not differ by educational attainment (Ps > 0.57).
Overall, there was a significant group-by-time interaction for weight loss (P < 0.001). Percentage weight loss at 3 months differed among the 3 groups (S: 1.1% ± 0.9%; SI: 4.2% ± 0.6%; SIG: 6.1% ± 0.6%; Ps ≤ 0.04). There was also an overall group effect for the percentage of individuals achieving 5% weight loss (P < 0.001). SI and SIG had higher percentages of participants who achieved a 5% weight loss than the control (SI: 42%; SIG: 54%; S: 7%; Ps < 0.001) but did not differ from one another. Initial weight losses and the percentage of participants who achieved a 5% weight loss were largely maintained through the no-treatment follow-up phase at 6 months, but the 3 groups no longer differed from one another at 12 months (S: 1.2% [SE = 0.9]; SI: 2.2% [SE = 0.6]; SIG: 3.3% [SE = 0.6]; Ps > 0.05).
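The weight outcomes reported above reduce to simple arithmetic; this hypothetical helper (for illustration only) shows how percentage weight loss and the 5% responder criterion are derived from baseline and follow-up weights:

```python
def percent_weight_loss(baseline_kg: float, followup_kg: float) -> float:
    """Percentage of baseline body weight lost (positive values = loss)."""
    return 100.0 * (baseline_kg - followup_kg) / baseline_kg

def achieved_5_percent(baseline_kg: float, followup_kg: float) -> bool:
    """True if the participant lost at least 5% of baseline body weight."""
    return percent_weight_loss(baseline_kg, followup_kg) >= 5.0

# Example: a 95 kg participant who weighs 89 kg at 3 months
loss = percent_weight_loss(95.0, 89.0)  # about 6.3%
print(f"{loss:.1f}% lost; 5% responder: {achieved_5_percent(95.0, 89.0)}")
```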
All groups reported significant increases in physical activity over time (P < 0.001). More frequent reporting of weight and physical activity data on the SURI website was associated with greater percentage weight loss (r = 0.25; P < 0.001). Number of log-ins and lessons viewed on the intervention website were positively associated with percentage weight loss (r = 0.45; P ≤ 0.001, and r = 0.34; P ≤ 0.001, respectively). Greater attendance at group sessions was associated with better weight outcomes (r = 0.61; P ≤ 0.001). Younger age was associated with poorer adherence, including less reporting on the SURI website, fewer lessons viewed, and fewer log-ins to the intervention website.
There was a significant group-by-time interaction for the use of behavioral weight loss strategies (P < 0.001), and increased use of these strategies was associated with greater percentage weight loss in all 3 groups post-treatment. At 12 months, however, there were no differences between groups in the use of these strategies (Ps ≥ 0.07).
Cost per kg of weight loss was similar for S ($39) and SI ($35), but both were lower than SIG ($114).
Conclusion. Both intervention arms (SI and SIG) achieved more weight loss at 6 months than SURI alone. Although mean weight loss was greatest with optional group sessions (SIG), the addition of the behavioral intervention website alone (SI) was the most cost-effective method to enhance weight loss. Thus, adding a novel internet behavioral weight loss intervention to a statewide community health initiative may be a cost-effective approach to improving obesity treatment outcomes.
Commentary
Weight loss treatment is recommended for adults with a BMI of ≥ 30 kg/m2, as well as for those with a BMI of 25 to 29.9 kg/m2 and weight-related comorbidities [1]. Intensive behavioral treatment should be the first line of intervention for overweight and obese individuals and can lead to 8% to 10% weight loss [2], particularly in the initial months of treatment [3]. However, behavioral treatment is inherently challenging and time-consuming, and it is readily available to only a fraction of the intended population. Although weight losses achieved from intensive lifestyle interventions such as the Diabetes Prevention Program (DPP) [4] may be higher, innovative community weight loss programs that use a variety of weight loss strategies can reach a wider population of overweight and obese individuals at a lower cost [3].
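As a schematic of the guideline logic above (a simplified sketch, not clinical advice; the thresholds are paraphrased from the NHLBI guideline cited):

```python
def weight_loss_treatment_recommended(bmi: float, has_comorbidity: bool) -> bool:
    """Simplified eligibility rule: treat at BMI >= 30 kg/m2, or at
    BMI 25-29.9 kg/m2 when weight-related comorbidities are present."""
    if bmi >= 30.0:
        return True
    return 25.0 <= bmi < 30.0 and has_comorbidity

print(weight_loss_treatment_recommended(27.5, has_comorbidity=True))   # True
print(weight_loss_treatment_recommended(27.5, has_comorbidity=False))  # False
```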
This study built upon the authors' previous work [5], which showed that providing SURI participants with behavioral weight loss strategies via email significantly improved 3-month weight losses. In the current study, they compared SURI alone with SURI plus access to an internet behavioral weight loss website, with or without optional group sessions. Since significant weight loss was not maintained at 12 months, access to the behavioral weight loss website should perhaps have continued for longer and/or included a maintenance phase after the 3-month intervention. Weight loss often peaks around 6 months, and weight regain occurs without effective maintenance therapy [6].
General strengths of the study included the use of a randomized, intention-to-treat design, dissemination of evidence-based weight loss strategies, objective outcome measurement, adherence metrics, and strong retention of participants with clear accounting of all enrolled patients from recruitment through analysis. This study demonstrated significant weight loss in an intervention with minimal/optional health professional interaction. This intervention also placed responsibility on participants to self-monitor their diet and physical activity, participate in online lessons, and attend optional group sessions. The success of this community-based intervention suggests feasibility and scalability within a real-world setting. The authors also conducted cost-effectiveness analyses demonstrating that the SI program was more cost-effective than SIG.
However, there are weaknesses as well. No justification was given for the 1:2:2 randomization scheme used to set the sample size of each arm. In randomized controlled trials, the allocation of participants across study arms is often balanced, which maximizes statistical power [7]. However, unequal randomization ratios can be beneficial and even necessary for various reasons, including cost, availability of the intervention, overcoming intervention/treatment learning curves, and anticipated differences in drop-out rates. Providing a justification for unbalanced sample sizes would help future researchers looking to replicate the study. Additionally, participants were mostly non-Hispanic white and female, limiting generalizability. While representative of the broader Rhode Island population, the findings may not be applicable to vulnerable (eg, low-literacy, resource-poor) or underrepresented (eg, minority) populations [8].
Applications for Clinical Practice
An internet-based behavioral weight loss intervention, when added to a community weight management initiative, is cost-effective and can lead to short-term weight loss. Given that clinicians often lack time, training, and resources to adequately address obesity in the office [9,10], encouraging patients to enroll in similar programs may be an effective strategy to address such barriers. The study also highlights the need for maintenance interventions to help keep weight off. Findings should be replicated in more diverse communities.
—Katrina F. Mateo, MPH, and Melanie Jay, MD, MS
1. Clinical guidelines on the identification, evaluation, and treatment of overweight and obesity in adults. National Heart, Lung, and Blood Institute; 1998.
2. Wadden TA, Butryn ML, Wilson C. Lifestyle modification for the management of obesity. Gastroenterology 2007;132:2226–38.
3. Butryn ML, Webb V, Wadden TA. Behavioral treatment of obesity. Psych Clin North Am 2011;34:841–59.
4. The Diabetes Prevention Program Research Group. The Diabetes Prevention Program (DPP): Description of lifestyle intervention. Diabetes Care 2002;25:2165–71.
5. Wing RR, Crane MM, Thomas JG, et al. Improving weight loss outcomes of community interventions by incorporating behavioral strategies. Am J Public Health 2010;100:2513–9.
6. Wing RR, Tate DF, Gorin A, et al. A self-regulation program for maintenance of weight loss. N Engl J Med 2006;355:1563–71.
7. Dumville JC, Hahn S, Miles JN V, Torgerson DJ. The use of unequal randomisation ratios in clinical trials: a review. Contemp Clin Trials 2006;27:1–12.
8. Marshall PL. Ethical challenges in study design and informed consent for health research in resource-poor settings. World Health Organization; 2007.
9. Jay M, Gillespie C, Ark T, et al. Do internists, pediatricians, and psychiatrists feel competent in obesity care? Using a needs assessment to drive curriculum design. J Gen Intern Med 2008;23:1066–70.
10. Loureiro ML, Nayga RM. Obesity, weight loss, and physician’s advice. Soc Sci Med 2006;62:2458–68.
Epidural Steroid Injections for Spinal Stenosis Back Pain Simply Don’t Work
Study Overview
Objective. To determine the effectiveness of epidural injections of glucocorticoids plus anesthetic compared with injections of anesthetic alone in patients with lumbar spinal stenosis.
Design. The LESS (Lumbar Epidural Steroid Injection for Spinal Stenosis) trial—a double-blind, multisite, randomized controlled trial.
Setting and participants. The study was conducted at 16 sites in the United States and enrolled 400 patients between April 2011 and June 2013. Patients at least 50 years of age with spinal stenosis as evidenced by magnetic resonance imaging (MRI) or computed tomography (CT) were invited to participate. Additional eligibility criteria included an average pain rating of more than 4 on a scale of 0 to 10 (0 being the lowest score) for back, buttock, or leg pain. Patients were excluded if they did not have stenosis of the central canal, had spondylolisthesis requiring surgery, or had received epidural glucocorticoid injections within the previous 6 months. Patients were randomly assigned to receive a standard epidural injection of glucocorticoids plus lidocaine or lidocaine alone. At the 3-week follow-up they could choose to receive a repeat injection. At the 6-week assessment they were allowed to cross over to the other treatment group. Patients were blinded throughout the study. The treating physicians were also blinded through the use of 2 opaque prefilled syringes provided by the study staff—one marked “inject” and one marked “discard.”
Main outcome measures. The 2 outcomes, measured at 6 weeks, were the Roland-Morris Disability Questionnaire (RMDQ) score (range, 0 to 24, with higher scores indicating greater physical disability) and the patient’s rating of average buttock, hip, or leg pain in the previous week (scale of 0 to 10 with 0 indicating no pain and 10 indicating “pain as bad as you can imagine”).
Eight secondary patient-oriented outcomes were also measured: (1) at least minimal clinically meaningful improvement (≥ 30%), (2) substantial clinically meaningful improvement (≥ 50%), (3) average back pain in the previous week, and scores on the (4) Brief Pain Inventory (BPI) interference scale, (5) 8-question Patient Health Questionnaire (PHQ-8), (6) Generalized Anxiety Disorder 7 scale (GAD-7), (7) EQ-5D (a health status measure) and (8) Swiss Spinal Stenosis Questionnaire (SSSQ).
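The ≥ 30% and ≥ 50% improvement thresholds are relative changes from baseline; the sketch below (a hypothetical helper, not the trial's analysis code) shows how such responder criteria would be computed for the RMDQ, where lower scores indicate less disability:

```python
def rmdq_improvement(baseline: int, followup: int) -> float:
    """Percentage improvement on the RMDQ (range 0-24; lower is better)."""
    if baseline == 0:
        return 0.0  # no baseline disability; relative improvement undefined
    return 100.0 * (baseline - followup) / baseline

def responder(baseline: int, followup: int, threshold: float = 30.0) -> bool:
    """Minimal (>= 30%) or substantial (>= 50%) clinically meaningful improvement."""
    return rmdq_improvement(baseline, followup) >= threshold

# Example: baseline RMDQ 16, follow-up 10 -> 37.5% improvement
print(responder(16, 10))                  # True at the 30% threshold
print(responder(16, 10, threshold=50.0))  # False at the 50% threshold
```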
Main results. The 2 groups were similar with respect to baseline characteristics, except that the duration of pain was shorter in the lidocaine-alone group. At 6 weeks, both groups had improved RMDQ scores (glucocorticoid plus lidocaine, –4.2 points; lidocaine alone, –3.1 points). However, the difference in RMDQ score between the 2 groups was not statistically significant (–1.0 points [95% CI, –2.1 to 0.1]; P = 0.07). In addition, there was no difference in treatment effect at 6 weeks as measured by patients' reported leg pain (–0.2 points [95% CI, –0.8 to 0.4]; P = 0.48). Furthermore, there were no significant differences in the secondary outcomes of clinically meaningful improvement, BPI, SSSQ symptoms and physical function, EQ-5D, and GAD-7 scales at 6 weeks. Among the secondary outcomes, only symptoms of depression and patient satisfaction showed a statistically significant improvement in the glucocorticoid plus lidocaine group. Of note, though not statistically significant, there were more adverse events in the glucocorticoid plus lidocaine group than in the lidocaine-alone group (21.5% vs. 15.5%). Finally, the glucocorticoid plus lidocaine group also had a significantly higher proportion of patients with serum cortisol suppression than the lidocaine-alone group.
Conclusion. The authors concluded that there was no difference in pain-related functional disability (as measured by the RMDQ score) or pain intensity between patients receiving fluoroscopically guided epidural injections of glucocorticoids plus lidocaine and those receiving lidocaine alone for lumbar spinal stenosis. The injection of glucocorticoid should be avoided due to its potential systemic effects, including suppression of the hypothalamic–pituitary–adrenal axis and reduction in bone mineral density, which may increase the risk of fracture.
Commentary
Lumbar spinal stenosis is one of the most common causes of spine-related back and leg pain; it disproportionately affects older adults because degenerative changes narrow the spinal canal and compress nerve roots. Epidural injections containing a glucocorticoid and an anesthetic are commonly used to relieve symptoms of lumbar stenosis. While this treatment approach is controversial, more than 2.2 million lumbar epidural glucocorticoid injections are performed in the Medicare population each year [1,2]. Previous uncontrolled studies suggest that epidural glucocorticoid injections provide short-term pain relief for some patients with spinal stenosis [3]. While complications from the procedure are rare, a multistate outbreak of fungal meningitis due to contaminated glucocorticoid injections affected at least 751 patients, with 64 deaths, in 2012 [4].
The purpose of the current study by Friedly et al was to determine whether adding a glucocorticoid to an anesthetic in epidural spinal injections is superior to anesthetic alone for symptom relief and functional improvement in patients with lumbar spinal stenosis. In contrast to previous studies, the authors defined short-term results as 3 weeks after injection and long-term results as 6 weeks after injection. Despite the shorter follow-up period, results were similar to those of previous studies: adding glucocorticoid to anesthetic reduced pain and improved patients' function in the short term, but the improvements were not sustained. Based on these results, the authors concluded that there is no benefit to adding glucocorticoid to epidural injections for back pain arising from lumbar spinal stenosis.
One major limitation of this study is the lack of a placebo arm; without one, it cannot be ascertained whether epidural injection of lidocaine alone conferred a benefit. Even so, this study provides robust evidence that the glucocorticoid component of epidural injections adds no benefit in the treatment of back and leg pain associated with lumbar spinal stenosis.
Applications for Clinical Practice
Epidural steroid injection has long been accepted in medical communities as a safe and effective treatment for symptoms of lumbar spinal stenosis. In light of the potential dangers of epidural steroid injections, including meningitis, coupled with the increasing cost of the procedure, other potential side effects, and the demonstrated ineffectiveness of the treatment, providers should stop recommending epidural steroid injections for lumbar spinal stenosis.
—Ka Ming Gordon Ngai, MD, MPH
1. Manchikanti L, Pampati V, Boswell MV, et al. Analysis of the growth of epidural injections and costs in the Medicare population: a comparative evaluation of 1997, 2002, and 2006 data. Pain Physician 2010;13:199–212.
2. Manchikanti L, Pampati V, Falco FJ, et al. Assessment of the growth of epidural injections in the medicare population from 2000 to 2011. Pain Physician 2013;16:E349–364.
3. Shamliyan TA, Staal JB, Goldmann D, et al. Epidural steroid injections for radicular lumbosacral pain: a systematic review. Phys Med Rehabil Clin North Am 2014;25:471–89.
4. CDC. Multistate outbreak of fungal meningitis and other infections. 23 Oct 2013. Accessed 9 Jul 2014 at www.cdc.gov/hai/outbreaks/meningitis.html.
Bariatric Surgery Leads to 3-Year Resolution of Diabetes in 24% to 38% of Patients
Study Overview
Objective. To examine the 3-year efficacy of bariatric surgery on resolution of diabetes.
Design. Randomized controlled trial.
Setting and participants. Patients were participants in the STAMPEDE trial, a single-center study with enrollment from March 2007 to January 2011. 150 patients aged 20 to 60 years with a hemoglobin A1c of > 7% and a BMI of 27 to 43 kg/m2 were studied. Patients were excluded for a history of bariatric surgery or complex abdominal surgery, or poorly controlled medical or psychiatric conditions [1]. Patients were randomized to intensive medical therapy, Roux-en-Y gastric bypass, or sleeve gastrectomy. All participants received intensive medical therapy, including lifestyle education, diabetes medical management, and cardiovascular risk reduction administered by a diabetes specialist every 3 months for 2 years and every 6 months thereafter. All surgeries were performed by a single surgeon, using equipment by Ethicon (a sponsor of the study, along with the National Institutes of Health, LifeScan, and the Cleveland Clinic).
Main outcome measure. HbA1c of ≤ 6% at 3 years.
Main results. At baseline, 68% were women and 74% were white. Participants had a mean age of 48 years (SD 8), mean A1c of 9.3% (1.5%), and mean BMI of 36 (3.5). 43% required insulin at baseline. Follow-up at 3 years was 91% (9 participants dropped out after enrollment, 4 lost to follow-up), and at this time, A1c levels were ≤ 6% for 5% of intensive medical therapy participants, 38% who had gastric bypass (P < 0.001 compared with medical therapy), and 24% who had sleeve gastrectomy (P = 0.01 compared with medical therapy); the difference between the bypass and sleeve gastrectomy arms was not significant (P = 0.17). Nearly all of the participants reaching the primary outcome in the bariatric surgery arms achieved this goal A1c without using diabetes medications (35% of the gastric bypass arm and 20% of the sleeve gastrectomy arm). For the secondary outcome of A1c ≤ 7% without using diabetes medications, 0%, 58%, and 33% reached this endpoint in the medical therapy, bypass, and sleeve gastrectomy arms, respectively (P < 0.001 for both surgery arms compared to medical therapy; P = 0.01 comparing gastric bypass to sleeve gastrectomy). At 3 years, 2%, 69%, and 43% of participants were not taking any diabetes medications; 55% of medical therapy participants were taking insulin compared with 6% and 8% in the surgery arms. Weight loss was significantly greater in the gastric bypass and sleeve gastrectomy arms (24.5% and 21.1% of baseline body weight, respectively, vs 4.2% with medical therapy). HDL cholesterol was higher and triglycerides were lower in both surgery arms, compared with medical therapy, but LDL cholesterol and blood pressure were not significantly different. Surgery participants also were taking fewer cardiovascular medications at 3 years. Quality of life was improved in 5 of 8 domains for the bypass arm compared with medical therapy and in 3 of 8 domains for the sleeve gastrectomy arm.
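To keep the nested endpoints above straight (a hypothetical helper, not STAMPEDE's analysis code), the primary outcome is an A1c threshold alone, while the medication-free variant adds a second condition:

```python
def met_primary(a1c_percent: float) -> bool:
    """Primary endpoint: HbA1c of 6% or less at 3 years."""
    return a1c_percent <= 6.0

def met_primary_without_meds(a1c_percent: float, on_diabetes_meds: bool) -> bool:
    """Primary endpoint achieved without any diabetes medication."""
    return met_primary(a1c_percent) and not on_diabetes_meds

# Example: an A1c of 5.8% off all medications satisfies both definitions
print(met_primary(5.8), met_primary_without_meds(5.8, on_diabetes_meds=False))
```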
Conclusion. Gastric bypass and sleeve gastrectomy surgery leads to substantial resolution of diabetes compared to medical therapy.
Commentary
Over the last several decades, bariatric surgery has emerged as an important treatment for obesity. Observational studies have demonstrated sustained weight loss persisting up to 15 years, as well as reductions in cardiovascular risk, diabetes, and even mortality [2–5]. In the Swedish Obesity Study, a nonrandomized study of 2010 participants undergoing bariatric surgery and 2037 matched controls, gastric bypass led to a 32% reduction from baseline body weight at 1–2 years after surgery, with sustained weight loss of 27% at 15 years [2,3]. Patients undergoing gastric banding lost somewhat less weight, with 20% weight loss at 1–2 years and 13% at 15 years. Control subjects lost very little.
Among diabetic Swedish Obesity Study participants, bariatric surgery led to a much higher rate of remission from diabetes over 10 years compared with control patients (36% after surgery, 13% among controls) [2] and lower rates of microvascular and macrovascular complications [6]. Among participants who were not diabetic at baseline, the incidence of diabetes was just 7% in the surgery arm and 24% in the control arm [2]; this difference in incidence persisted for 15 years of follow-up [4].
Among randomized controlled trials, several studies have found short-term resolution of diabetes after surgery. A study of 60 patients (age 30 to 60 years, BMI ≥ 35, A1c ≥ 7%) found that 75% of patients undergoing gastric bypass and 95% of patients undergoing biliopancreatic diversion had fasting glucose of < 100 mg/dL and A1c < 6.5% at 2 years; none of the control subjects met these thresholds for diabetes resolution [7]. Another 1-year trial of 120 US and Taiwanese patients (age 30 to 67 years, BMI 30 to 39.9, A1c ≥ 8%) found that 48% randomized to gastric bypass met a combination endpoint of A1c < 7%, LDL cholesterol < 100 mg/dL, and systolic blood pressure of < 130 mm Hg after 1 year compared with 19% assigned to intensive medical therapy [8]. In the gastric bypass arm, 75% reached an A1c of < 7% compared with 32% receiving medical therapy.
What does the study by Schauer and colleagues contribute? First, the study extended data on diabetes resolution to 3 years, longer than prior studies, and found substantial diabetes resolution in more than 1/3 of gastric bypass patients and 1/4 of sleeve gastrectomy patients (vs 5% receiving medical therapy); over 2/3 and 1/3, respectively, were no longer taking any diabetes medications, compared with 2% receiving medical therapy. In an earlier publication reporting 1-year outcomes of this trial, Schauer found diabetes resolution in 42% of those undergoing gastric bypass, 37% with sleeve gastrectomy, and 12% with medical therapy, demonstrating some regression over time [1]. Second, the study compared patients undergoing gastric bypass and sleeve gastrectomy. Sleeve gastrectomy is a newer procedure with less long-term outcome data; for example, none of the Swedish Obesity Study participants had sleeve gastrectomy. Schauer et al demonstrated that both procedures provide similar results for the primary outcome, but use of glucose-lowering medications was lower and weight loss was greater in the gastric bypass arm. These results provide some evidence that bypass surgery might be superior. Third, the study provided important data on cardiovascular risk factors, showing improvements in triglycerides and HDL cholesterol, as well as in quality of life, which was better after surgery than with medical therapy.
In this study, only 4 patients required reoperation, and no deaths or life-threatening complications were reported. However, mortality and morbidity remain a concern in bariatric surgery. In the earlier publication from this trial, the authors noted that 22% of gastric bypass patients required hospitalization in the year after surgery, compared with 8% in the sleeve gastrectomy and 9% in the medical therapy arms [1]. Observational data have shown higher rates of complications. In a study of patients at 10 clinical sites across the US from 2005 to 2007, 30-day mortality was 2.1% for open Roux-en-Y gastric bypass and 0.2% for laparoscopic bypass [9]. That study also found substantial morbidity, with nearly 8% of patients after open bypass surgery reaching a composite endpoint of death, deep venous thromboembolism, a repeat operation, or persistent hospitalization for 30 days after surgery; 4.8% reached this composite outcome after laparoscopic bypass. In another study, of Medicare beneficiaries, 30-day mortality after gastric bypass surgery was 4.8% among patients aged 65 years or older compared with 1.7% among younger patients [10].
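The composite outcome mentioned above is a logical OR over several binary events; a minimal sketch (with hypothetical field names) makes the definition explicit:

```python
from dataclasses import dataclass

@dataclass
class ThirtyDayOutcomes:
    died: bool
    venous_thromboembolism: bool       # deep venous thromboembolism
    reoperation: bool                  # any repeat operation
    hospitalized_through_day_30: bool  # persistent postoperative hospitalization

def composite_endpoint(o: ThirtyDayOutcomes) -> bool:
    """True if any component event occurred within 30 days of surgery."""
    return (o.died or o.venous_thromboembolism
            or o.reoperation or o.hospitalized_through_day_30)

# Example: a reoperation alone triggers the composite endpoint
print(composite_endpoint(ThirtyDayOutcomes(False, False, True, False)))  # True
```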
This trial by Schauer and colleagues demonstrates important benefits of gastric bypass and sleeve gastrectomy. While bariatric surgery still has some risk, it increasingly appears to be a viable treatment for patients with obesity, especially if they also have diabetes. Ideal future studies would be large enough to provide more data on predictors of diabetes resolution and long-term successful weight loss. Such information would allow clinicians and patients to better predict how patients might respond to surgery over the long term.
Applications for Clinical Practice
Bariatric surgery leads to a substantial reduction in diabetes over 3 years. While reduction was similar after gastric bypass and sleeve gastrectomy, secondary endpoints demonstrate some superiority of gastric bypass surgery. Clinicians should feel increasingly confident recommending bariatric surgery for their patients with diabetes and obesity.
—Jason P. Block, MD, MPH
1. Schauer PR, Kashyap SR, Wolski K, et al. Bariatric surgery versus intensive medical therapy in obese patients with diabetes. N Engl J Med 2012;366:1567–76.
2. Sjostrom L, Lindroos AK, Peltonen M, et al. Lifestyle, diabetes, and cardiovascular risk factors 10 years after bariatric surgery. N Engl J Med 2004;351:2683–93.
3. Sjostrom L, Narbro K, Sjostrom CD, et al. Effects of bariatric surgery on mortality in Swedish obese subjects. N Engl J Med 2007;357:741–52.
4. Carlsson LM, Peltonen M, Ahlin S, et al. Bariatric surgery and prevention of type 2 diabetes in Swedish obese subjects. N Engl J Med 2012;367:695–704.
5. Adams TD, Gress RE, Smith SC, et al. Long-term mortality after gastric bypass surgery. N Engl J Med 2007;357:753–61.
6. Sjostrom L, Peltonen M, Jacobson P, et al. Association of bariatric surgery with long-term remission of type 2 diabetes and with microvascular and macrovascular complications. JAMA 2014;311:2297–304.
7. Mingrone G, Panunzi S, DeGaetano A, et al. Bariatric surgery versus conventional medical therapy for type 2 diabetes. N Engl J Med 2012;366:1577–85.
8. Ikramuddin S, Korner J, Lee WJ, et al. Roux-en-Y gastric bypass vs intensive medical management for the control of type 2 diabetes, hypertension, and hyperlipidemia: the Diabetes Surgery Study randomized clinical trial. JAMA 2013;309:2240–9.
9. The Longitudinal Assessment of Bariatric Surgery (LABS) Consortium. Perioperative safety in the longitudinal assessment of bariatric surgery. N Engl J Med 2009;361:445–54.
10. Flum DR, Salem L, Elrod JA, et al. Early mortality among Medicare beneficiaries undergoing bariatric surgical procedures. JAMA 2005;294:1903–8.
Study Overview
Objective. To examine the 3-year efficacy of bariatric surgery on resolution of diabetes.
Design. Randomized controlled trial.
Setting and participants. Patients were participants in the STAMPEDE trial, a single-center study with enrollment from March 2007 to January 2011. 150 patients aged 20 to 60 years with a hemoglobin A1cof > 7% and a BMI of 27 to 43 kg/m2 were studied. Patients were excluded for a history of bariatric surgery or complex abdominal surgery and poorly controlled medical or psychiatric conditions [1]. Patients were randomized to intensive medical therapy, Roux-en-Y gastric bypass, or sleeve gastrectomy. All participants received intensive medical therapy, including lifestyle education, diabetes medical management, and cardiovascular risk reduction administered by a diabetes specialist every 3 months for 2 years and every 6 months thereafter. All surgeries were performed by a single surgeon, using equipment by Ethicon (a sponsor of the study, along with the National Institutes of Health, LifeScan, and the Cleveland Clinic).
Main outcome measure. HbA1c of ≤ 6% at 3 years.
Main results. At baseline, 68% were women and 74% were white. Participants had a mean age of 48 years (SD 8), mean A1c of 9.3% (1.5%), and mean BMI of 36 (3.5). 43% required insulin at baseline. Follow-up at 3 years was 91% (9 participants dropped out after enrollment, 4 lost to follow-up), and at this time, A1c levels were ≤ 6% for 5% of intensive medical therapy participants, 38% who had gastric bypass (P < 0.001 compared with medical therapy), and 24% who had sleeve gastrectomy (P = 0.01 compared with medical therapy); the difference between bypass and sleeve gastrectomy arms was not significant (P = 0.17). Nearly all of the participants reaching the primary outcome in the bariatric surgery arms achieved this goal A1c without using diabetic medications (35% and 20%). For the secondary outcome of A1c ≤ 7% without using diabetic medications, 0%, 58%, and 33% reached this endpoint in the medical therapy, bypass, and sleeve gastrectomy arms, respectively (P < 0.001 for both surgery arms compared to medical therapy; P = 0.01 comparing gastric bypass to sleeve gastrectomy). At 3 years, 2%, 69%, and 43% of participants were not taking any diabetic medications; 55% of medical therapy participants were taking insulin compared with 6% and 8% in the surgery arms. Weight loss was significantly greater in the gastric bypass and sleeve gastrectomy arms (24.5% and 21.1% of baseline body weight compared with the medical therapy arm with 4.2%). HDL cholesterol was higher and triglycerides were lower in both surgery arms, compared with medical therapy, but LDL cholesterol and blood pressure were not significantly different. Surgery participants also were taking fewer cardiovascular medications at 3 years. Quality of life was improved in 5 of 8 domains for the bypass arm compared with medical therapy and in 3 of 8 domains for the sleeve gastrectomy arm.
Conclusion. Gastric bypass and sleeve gastrectomy surgery leads to substantial resolution of diabetes compared to medical therapy.
Commentary
Over the last several decades, bariatric surgery has emerged as important treatment for obesity. Observational studies have demonstrated sustained weight loss persisting up to 15 years, as well as reductions in cardiovascular risk, diabetes, and even mortality [2–5]. In the Swedish Obesity Study, a nonrandomized study of 2010 participants undergoing bariatric surgery and 2037 matched controls, gastric bypass led to a 32% reduction from baseline body weight at 1–2 years after surgery with sustained weight loss of 27% at 15 years [2,3]. Patients undergoing gastric banding lost a bit less weight, with 20% weight loss at 1–2 years and 13% at 15 years. Control subjects lost very little.
Among diabetic Swedish Obesity Study participants, bariatric surgery led to a much higher rate of remission from diabetes over 10 years compared with control patients (36% after surgery, 13% among controls) [2] and lower rates of microvascular and macrovascular complications [6]. Among participants who were not diabetic at baseline, the incidence of diabetes was just 7% in the surgery arm and 24% in the control arm [2]; this difference in incidence persisted for 15 years of follow-up [4].
Among randomized controlled trials, several studies have found short-term resolution of diabetes after surgery. A study of 60 patients (age 30 to 60 years, BMI ≥ 35, A1c ≥ 7%) found that 75% of patients undergoing gastric bypass and 95% of patients undergoing biliopancreatic diversion had fasting glucose of < 100 mg/dL and A1c < 6.5% at 2 years; none of the control subjects met these thresholds for diabetes resolution [7]. Another 1-year trial of 120 US and Taiwanese patients (age 30 to 67 years, BMI 30 to 39.9, A1c ≥ 8%) found that 48% randomized to gastric bypass met a combination endpoint of A1c < 7%, LDL cholesterol < 100 mg/dL, and systolic blood pressure of < 130 mm Hg after 1 year compared with 19% assigned to intensive medical therapy [8]. In the gastric bypass arm, 75% reached an A1c of < 7% compared with 32% receiving medical therapy.
What does the study by Schauer and colleagues contribute? First, the study extended data on diabetes resolution to 3 years, longer than prior studies, and found substantial diabetes resolution in more than 1/3 of gastric bypass patients and 1/4 of sleeve gastrectomy patients (5% receiving medical therapy); over 2/3 and 1/3, respectively, were no longer taking any diabetes medications compared with 2% receiving medical therapy. In an earlier published study reporting on 1-year outcomes of this study, Schauer found diabetes resolution in 42% of those undergoing gastric bypass, 37% with sleeve gastrectomy, and 12% with medical therapy, demonstrating some regression over time [1]. Second, the study compared patients undergoing gastric bypass and sleeve gastrectomy. Sleeve gastrectomy is a newer procedure with less long-term outcome data; for example, none of the Swedish Obesity Study participants had sleeve gastrectomy. Schauer et al demonstrated that both procedures provide similar results for the primary outcome, but use of glucose-lowering medications was less and weight loss was more in the gastric bypass arm. These results provide some evidence that bypass surgery might be superior. Third, the study provided important data on cardiovascular risk factors, showing improvement in triglycerides and HDL cholesterol and quality of life. Quality of life was better after surgery than with medical therapy.
In this study, only 4 patients required reoperations, and no deaths or life-threatening complications were reported. However, mortality and morbidity remain a concern in bariatric surgery. In the earlier published study of this trial, authors noted that 22% of gastric bypass required hospitalization in the year after surgery compared with 8% in the sleeve gastrectomy and 9% in the medical therapy arms [1]. Observational data has shown higher rates of complications. In a study of patients at 10 clinical sites across the US from 2005 to 2007, 30-day mortality was 2.1% for open Roux-en Y gastric bypass and 0.2% for laparoscopic bypass [9]. That study also found substantial morbidity, with nearly 8% of patients after open bypass surgery reaching a composite end-point of death, deep venous thromboembolism, a repeat operation, or persistent hospitalization for 30 days after surgery; 4.8% reached this composite outcome after laparoscopic bypass. In another study of Medicare patients, 30-day mortality was 4.8% after open gastric bypass surgery compared with 1.7% for younger patients [10].
This trial by Schauer and colleagues demonstrates important benefits of gastric bypass and sleeve gastrectomy. While bariatric surgery still has some risk, it increasingly appears to be a viable treatment for patients with obesity, especially if they also have diabetes. Ideal future studies would be large enough to provide more data on predictors of diabetes resolution and long-term successful weight loss. Such information would allow clinicians and patients to better predict how patients might respond to surgery over the long term.
Applications for Clinical Practice
Bariatric surgery leads to a substantial reduction in diabetes over 3 years. While reduction was similar after gastric bypass and sleeve gastrectomy, secondary endpoints demonstrate some superiority of gastric bypass surgery. Clinicians should feel increasingly confident recommending bariatric surgery for their patients with diabetes and obesity.
—Jason P. Block, MD, MPH
Study Overview
Objective. To examine the 3-year efficacy of bariatric surgery on resolution of diabetes.
Design. Randomized controlled trial.
Setting and participants. Patients were participants in the STAMPEDE trial, a single-center study with enrollment from March 2007 to January 2011. 150 patients aged 20 to 60 years with a hemoglobin A1cof > 7% and a BMI of 27 to 43 kg/m2 were studied. Patients were excluded for a history of bariatric surgery or complex abdominal surgery and poorly controlled medical or psychiatric conditions [1]. Patients were randomized to intensive medical therapy, Roux-en-Y gastric bypass, or sleeve gastrectomy. All participants received intensive medical therapy, including lifestyle education, diabetes medical management, and cardiovascular risk reduction administered by a diabetes specialist every 3 months for 2 years and every 6 months thereafter. All surgeries were performed by a single surgeon, using equipment by Ethicon (a sponsor of the study, along with the National Institutes of Health, LifeScan, and the Cleveland Clinic).
Main outcome measure. HbA1c of ≤ 6% at 3 years.
Main results. At baseline, 68% were women and 74% were white. Participants had a mean age of 48 years (SD 8), mean A1c of 9.3% (1.5%), and mean BMI of 36 (3.5). 43% required insulin at baseline. Follow-up at 3 years was 91% (9 participants dropped out after enrollment, 4 lost to follow-up), and at this time, A1c levels were ≤ 6% for 5% of intensive medical therapy participants, 38% who had gastric bypass (P < 0.001 compared with medical therapy), and 24% who had sleeve gastrectomy (P = 0.01 compared with medical therapy); the difference between bypass and sleeve gastrectomy arms was not significant (P = 0.17). Nearly all of the participants reaching the primary outcome in the bariatric surgery arms achieved this goal A1c without using diabetic medications (35% and 20%). For the secondary outcome of A1c ≤ 7% without using diabetic medications, 0%, 58%, and 33% reached this endpoint in the medical therapy, bypass, and sleeve gastrectomy arms, respectively (P < 0.001 for both surgery arms compared to medical therapy; P = 0.01 comparing gastric bypass to sleeve gastrectomy). At 3 years, 2%, 69%, and 43% of participants were not taking any diabetic medications; 55% of medical therapy participants were taking insulin compared with 6% and 8% in the surgery arms. Weight loss was significantly greater in the gastric bypass and sleeve gastrectomy arms (24.5% and 21.1% of baseline body weight compared with the medical therapy arm with 4.2%). HDL cholesterol was higher and triglycerides were lower in both surgery arms, compared with medical therapy, but LDL cholesterol and blood pressure were not significantly different. Surgery participants also were taking fewer cardiovascular medications at 3 years. Quality of life was improved in 5 of 8 domains for the bypass arm compared with medical therapy and in 3 of 8 domains for the sleeve gastrectomy arm.
Conclusion. Gastric bypass and sleeve gastrectomy surgery leads to substantial resolution of diabetes compared to medical therapy.
Commentary
Over the last several decades, bariatric surgery has emerged as important treatment for obesity. Observational studies have demonstrated sustained weight loss persisting up to 15 years, as well as reductions in cardiovascular risk, diabetes, and even mortality [2–5]. In the Swedish Obesity Study, a nonrandomized study of 2010 participants undergoing bariatric surgery and 2037 matched controls, gastric bypass led to a 32% reduction from baseline body weight at 1–2 years after surgery with sustained weight loss of 27% at 15 years [2,3]. Patients undergoing gastric banding lost a bit less weight, with 20% weight loss at 1–2 years and 13% at 15 years. Control subjects lost very little.
Among diabetic Swedish Obesity Study participants, bariatric surgery led to a much higher rate of remission from diabetes over 10 years compared with control patients (36% after surgery, 13% among controls) [2] and lower rates of microvascular and macrovascular complications [6]. Among participants who were not diabetic at baseline, the incidence of diabetes was just 7% in the surgery arm and 24% in the control arm [2]; this difference in incidence persisted for 15 years of follow-up [4].
Among randomized controlled trials, several studies have found short-term resolution of diabetes after surgery. A study of 60 patients (age 30 to 60 years, BMI ≥ 35, A1c ≥ 7%) found that 75% of patients undergoing gastric bypass and 95% of patients undergoing biliopancreatic diversion had fasting glucose of < 100 mg/dL and A1c < 6.5% at 2 years; none of the control subjects met these thresholds for diabetes resolution [7]. Another 1-year trial of 120 US and Taiwanese patients (age 30 to 67 years, BMI 30 to 39.9, A1c ≥ 8%) found that 48% randomized to gastric bypass met a composite endpoint of A1c < 7%, LDL cholesterol < 100 mg/dL, and systolic blood pressure of < 130 mm Hg after 1 year, compared with 19% assigned to intensive medical therapy [8]. In the gastric bypass arm, 75% reached an A1c of < 7% compared with 32% receiving medical therapy.
What does the study by Schauer and colleagues contribute? First, the study extended data on diabetes resolution to 3 years, longer than prior studies, and found substantial diabetes resolution in more than 1/3 of gastric bypass patients and 1/4 of sleeve gastrectomy patients (vs 5% receiving medical therapy); 69% and 43%, respectively, were no longer taking any diabetes medications, compared with 2% receiving medical therapy. In an earlier publication reporting 1-year outcomes of this trial, Schauer found diabetes resolution in 42% of those undergoing gastric bypass, 37% with sleeve gastrectomy, and 12% with medical therapy, demonstrating some regression over time [1]. Second, the study compared patients undergoing gastric bypass and sleeve gastrectomy. Sleeve gastrectomy is a newer procedure with less long-term outcome data; for example, none of the Swedish Obesity Study participants had sleeve gastrectomy. Schauer et al demonstrated that both procedures provide similar results for the primary outcome, but use of glucose-lowering medications was lower and weight loss greater in the gastric bypass arm. These results provide some evidence that bypass surgery might be superior. Third, the study provided important data on cardiovascular risk factors, showing improvement in triglycerides and HDL cholesterol; quality of life was also better after surgery than with medical therapy.
In this study, only 4 patients required reoperations, and no deaths or life-threatening complications were reported. However, mortality and morbidity remain a concern in bariatric surgery. In the earlier publication from this trial, the authors noted that 22% of gastric bypass patients required hospitalization in the year after surgery compared with 8% in the sleeve gastrectomy and 9% in the medical therapy arms [1]. Observational data have shown higher rates of complications. In a study of patients at 10 clinical sites across the US from 2005 to 2007, 30-day mortality was 2.1% for open Roux-en-Y gastric bypass and 0.2% for laparoscopic bypass [9]. That study also found substantial morbidity, with nearly 8% of patients after open bypass surgery reaching a composite endpoint of death, deep venous thromboembolism, a repeat operation, or persistent hospitalization for 30 days after surgery; 4.8% reached this composite outcome after laparoscopic bypass. In another study, of Medicare patients, 30-day mortality after bariatric surgery was 4.8% among patients aged 65 years or older compared with 1.7% among younger beneficiaries [10].
This trial by Schauer and colleagues demonstrates important benefits of gastric bypass and sleeve gastrectomy. While bariatric surgery still has some risk, it increasingly appears to be a viable treatment for patients with obesity, especially if they also have diabetes. Ideal future studies would be large enough to provide more data on predictors of diabetes resolution and long-term successful weight loss. Such information would allow clinicians and patients to better predict how patients might respond to surgery over the long term.
Applications for Clinical Practice
Bariatric surgery leads to a substantial reduction in diabetes over 3 years. While reduction was similar after gastric bypass and sleeve gastrectomy, secondary endpoints demonstrate some superiority of gastric bypass surgery. Clinicians should feel increasingly confident recommending bariatric surgery for their patients with diabetes and obesity.
—Jason P. Block, MD, MPH
1. Schauer PR, Kashyap SR, Wolski K, et al. Bariatric surgery versus intensive medical therapy in obese patients with diabetes. N Engl J Med 2012;366:1567–76.
2. Sjostrom L, Lindroos AK, Peltonen M, et al. Lifestyle, diabetes, and cardiovascular risk factors 10 years after bariatric surgery. N Engl J Med 2004;351:2683–93.
3. Sjostrom L, Narbro K, Sjostrom CD, et al. Effects of bariatric surgery on mortality in Swedish obese subjects. N Engl J Med 2007;357:741–52.
4. Carlsson LM, Peltonen M, Ahlin S, et al. Bariatric surgery and prevention of type 2 diabetes in Swedish obese subjects. N Engl J Med 2012;367:695–704.
5. Adams TD, Gress RE, Smith SC, et al. Long-term mortality after gastric bypass surgery. N Engl J Med 2007;357:753–61.
6. Sjostrom L, Peltonen M, Jacobson P, et al. Association of bariatric surgery with long-term remission of type 2 diabetes and with microvascular and macrovascular complications. JAMA 2014;311:2297–304.
7. Mingrone G, Panunzi S, DeGaetano A, et al. Bariatric surgery versus conventional medical therapy for type 2 diabetes. N Engl J Med 2012;366:1577–85.
8. Ikramuddin S, Korner J, Lee WJ, et al. Roux-en-Y gastric bypass vs intensive medical management for the control of type 2 diabetes, hypertension, and hyperlipidemia: the Diabetes Surgery Study randomized clinical trial. JAMA 2013;309:2240–9.
9. The Longitudinal Assessment of Bariatric Surgery (LABS) Consortium. Perioperative safety in the longitudinal assessment of bariatric surgery. N Engl J Med 2009;361:445–54.
10. Flum DR, Salem L, Elrod JA, et al. Early mortality among Medicare beneficiaries undergoing bariatric surgical procedures. JAMA 2005;294:1903–8.
English Ability and Glycemic Control in Latinos with Diabetes
Study Overview
Objective. To determine if there is an association between self-reported English language ability and glycemic control in Latinos with type 2 diabetes.
Design. Descriptive correlational study using data from a larger cross-sectional study.
Setting and participants. 167 adults with diabetes who self-identified as Latino or Hispanic recruited at clinics in the Chicago area from May 2004 to May 2006. The dataset was collected using face-to-face interviews with diabetic patients aged ≥ 18 years. All participants attended clinics affiliated with an academic medical center or physician offices affiliated with a suburban hospital. Patients with type 1 diabetes and those with < 17 points on the Mini-Mental State Examination were excluded. English speaking ability was categorized as speaking English “not at all,” “not well,” “well,” or “very well” based on patient self-report. A multivariable logistic regression model was used to examine the predictive relationship between English language skills and HbA1c levels, with covariates selected if they were significantly correlated with English language ability. The final regression model accounted for age, sex, education, annual income, health insurance status, duration of diabetes, birth in the United States, and years in the United States.
Main outcome measure. HbA1c ≥ 7.0% as captured by chart review.
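As background for the odds ratios reported below, the analysis described above is a standard multivariable logistic regression with "well" as the reference category for English ability. A minimal sketch in Python with statsmodels follows; the data are simulated and the variable names are hypothetical, so it illustrates only the form of the model, not the authors' code or data (the full model also adjusted for sex, education, income, insurance status, diabetes duration, and birth in the United States, omitted here for brevity).

```python
# A minimal sketch with simulated data and hypothetical variable names,
# illustrating the form of the multivariable logistic regression described
# above (outcome: HbA1c >= 7.0%; reference category: speaks English "well").
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "english": rng.choice(["not at all", "not well", "well", "very well"], size=n),
    "age": rng.integers(25, 80, size=n),
    "years_in_us": rng.integers(1, 40, size=n),
    "a1c_high": rng.integers(0, 2, size=n),  # 1 = HbA1c >= 7.0% (simulated)
})

model = smf.logit(
    "a1c_high ~ C(english, Treatment(reference='well')) + age + years_in_us",
    data=df,
).fit(disp=False)

# Exponentiated coefficients are adjusted odds ratios; exponentiating the
# confidence limits yields 95% CIs like those reported in the study.
print(np.exp(model.params))
print(np.exp(model.conf_int()))
```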
Main results. Of the 167 patients, 38% reported speaking English very well, 21% well, 26% not well, and 14% not at all. Reflecting immigration-sensitive patterns, patients who spoke English very well were younger and more likely to have graduated high school and to have an annual income over $25,000 per year. Comorbidities and complications did not differ by English speaking ability except for diabetic eye disease, which was more prevalent among those who did not speak English at all (42%, P = 0.04). Whether speaking ability was treated as a continuous or dichotomous variable, HbA1c levels formed a U-shaped curve: those who spoke English very well (odds ratio [OR] 2.32, 95% CI 1.00–5.41) or not at all (OR 4.11, 95% CI 1.35–12.54) had higher odds of having an elevated HbA1c than those who spoke English well, although the difference was only statistically significant for those who spoke no English. In adjusted analyses, the U-shaped curve persisted, with the highest odds among those who spoke English very well (OR 3.20, 95% CI 1.05–9.79) or not at all (OR 4.95, 95% CI 1.29–18.92).
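The pattern of significance can be read directly off the reported intervals: a 95% CI that includes 1.00 (as for the "very well" group, 1.00–5.41) corresponds to P ≥ 0.05 on a Wald test. The short back-calculation below makes this concrete; it is simple arithmetic on the published values, not part of the original analysis.

```python
# Back-calculating approximate Wald z-statistics from the reported
# unadjusted odds ratios and 95% CIs, showing why only the "not at all"
# group reaches conventional statistical significance.
import math

def wald_z(or_, lo, hi):
    # On the log-odds scale, a 95% CI spans roughly 2 * 1.96 standard errors.
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
    return math.log(or_) / se

print(round(wald_z(2.32, 1.00, 5.41), 2))   # "very well" vs "well": ~1.95 (P ~ 0.05)
print(round(wald_z(4.11, 1.35, 12.54), 2))  # "not at all" vs "well": ~2.49 (P ~ 0.01)
```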
Conclusion. The relationship between English speaking ability and diabetes management is more complex than previously described. Interventions aimed at improving diabetes outcomes may need to be tailored to specific subgroups within the Latino population.
Commentary
Immigrant health is complex and language is an understudied factor in health transitions of those who migrate for new lives or temporary work. For Latinos, migration abroad was once thought to improve health, but a recent systematic review by Teruya et al [1] suggests that the migration experience has a wide variety of effects on health, many of which can be negative.
The notion that English fluency confers health care benefits is questionable, as the authors state. Those unfamiliar with the acculturation literature might assume that English speaking ability is a good marker of acculturation, but recent research suggests otherwise. Acculturation is a complex phenomenon that cannot be measured or gauged by a single variable [2–5]. Among the many factors influencing acculturation, the migration experience and country of origin play a major role in how acculturation occurs in the arrival country. Health care providers seeking to better understand the complexity of acculturation in order to improve care for their immigrant patients would benefit from examining the extensive social science literature on the subject. The results of this study suggest that providers should not treat a patient's English speaking ability as a marker of acculturation or assume that their health outcomes will be equivalent to those of native-born populations.
This study has a number of weaknesses. The main concern is that it did not consider several important health service delivery factors. The researchers did not assess the number of visits in which patients had appropriate interpretation services, whether visits were language-concordant (limited English proficiency patients are more likely to form consistent service relationships with language-concordant providers [6–10]), or whether patients had diabetes education classes or individual counseling sessions to facilitate self-management. These service-based factors could explain some of the results seen. The small sample size, the age of the data, and the failure to distinguish patients' countries of origin are other weaknesses.
Applications for Clinical Practice
Providers can improve their clinical practice with limited English proficiency Latino patients with diabetes by being more sensitive to the potential effects of language on diabetes outcomes in this population. The results suggest that providers should not assume that a Latino patient's English language skills mean that the patient is better at self-managing diabetes and will have better outcomes. Asking patients about their country of origin and migration experiences may help providers disentangle the effects of language from other potentially confounding variables that influence diabetes-related outcomes.
—Allison Squires, PhD, RN
1. Teruya SA, Bazargan-Hejazi S. The immigrant and Hispanic paradoxes: a systematic review of their predictions and effects. Hisp J Behav Sci 2013;35:486–509.
2. Rudmin FW. Phenomenology of acculturation: retrospective reports from the Philippines, Japan, Quebec, and Norway. Cult Psychol 2010;16:313–32.
3. Matsunaga M, Hecht ML, Elek E, Ndiaye K. Ethnic identity development and acculturation: a longitudinal analysis of Mexican-heritage youth in the Southwest United States. J Cross Cult Psychol 2010;41:410–27.
4. Siatkowski A. Hispanic acculturation: a concept analysis. J Transcult Nurs 2007;18:316–23.
5. Horevitz E, Organista KC. The Mexican health paradox: expanding the explanatory power of the acculturation construct. Hisp J Behav Sci 2012;35:3–34.
6. Gany F, Leng J, Shapiro E, et al. Patient satisfaction with different interpreting methods: a randomized controlled trial. J Gen Intern Med 2007;22 Suppl 2:312–8.
7. Grover A, Deakyne S, Bajaj L, Roosevelt GE. Comparison of throughput times for limited English proficiency patient visits in the emergency department between different interpreter modalities. J Immigr Minor Health 2012;14:602–7.
8. Ngo-Metzger Q, Sorkin DH, Phillips RS, et al. Providing high-quality care for limited English proficient patients: the importance of language concordance and interpreter use. J Gen Intern Med 2007;22 Suppl 2:324–30.
9. Karliner LS, Jacobs EA, Chen AH, Mutha S. Do professional interpreters improve clinical care for patients with limited English proficiency? A systematic review of the literature. Health Serv Res 2007;42:727–54.
10. Arauz Boudreau AD, Fluet CF, Reuland CP, et al. Associations of providers’ language and cultural skills with Latino parents’ perceptions of well-child care. Acad Pediatr 2010;10:172–8.