Multifaceted Intervention to Decrease Frequency of Common Labs
Clinical question: Can a multifaceted intervention decrease the frequency of unnecessary labs?
Background: Implementation of a multifaceted QI intervention within a large, community-based hospitalist group to decrease ordering of common labs.
Study design: QI project.
Setting: Large, community-based hospitalist group.
Synopsis: The QI intervention comprised academic detailing, audit and feedback, and transparent reporting of the frequency with which common labs were ordered daily. Researchers performed a pre-post analysis, comparing a cohort of patients from the 10-month baseline period before the QI intervention with a cohort from the seven-month intervention period. The baseline (n=7,824) and intervention (n=5,759) cohorts were demographically similar.
Adjusting for age, sex, and principal discharge diagnosis, the number of common labs ordered per patient-day decreased by 0.22 (10.7%) during the intervention period compared with baseline (95% confidence interval [CI], 0.11 to 0.34; P<0.01). No effect was seen on length of stay or readmission rate. The intervention decreased hospital direct costs by an estimated $16.19 per admission, or $151,682 annualized (95% CI, $119,746 to $187,618).
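As a rough consistency check (our own illustration, not part of the study), the following Python sketch back-calculates the baseline ordering rate and the annual admission volume implied by the reported figures; both derived quantities are inferences, not numbers reported by the authors.

# Illustrative consistency check of the reported figures; the derived
# baseline rate and admission volume are inferred, not reported in the paper.
per_day_decrease = 0.22          # common labs per patient-day
relative_decrease = 0.107        # reported 10.7% relative reduction
baseline_rate = per_day_decrease / relative_decrease
savings_per_admission = 16.19    # USD per admission
annualized_savings = 151_682     # USD per year
implied_admissions = annualized_savings / savings_per_admission
print(f"Implied baseline rate: {baseline_rate:.2f} labs per patient-day")   # ~2.06
print(f"Implied annual admissions: {implied_admissions:,.0f}")              # ~9,369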
Bottom line: A community-based, hospitalist-led QI intervention focused on daily labs can safely reduce healthcare waste without compromising quality of care.
Citation: Corson AH, Fan VS, White T, et al. A multifaceted hospitalist quality improvement intervention: decreased frequency of common labs. J Hosp Med. 2015;10(6):390-395.
When Do Patient-Reported Outcome Measures Inform Readmission Risk?
Clinical question: Among patients discharged from the hospital, how do patient-reported outcome (PRO) measures change after discharge, and can they predict readmission or ED visit?
Background: Variables that predict 30-day rehospitalization of discharged general medical patients have been studied, but few strategies have incorporated PRO measures into predictive models.
Study design: Longitudinal cohort study.
Setting: Patients discharged from an urban safety-net hospital that serves 128 municipalities in northeastern Illinois, including the city of Chicago.
Synopsis: One hundred ninety-six patients completed the initial survey; completion rates were 98%, 90%, and 88% for the 30-, 90-, and 180-day follow-up surveys, respectively. The Memorial Symptom Assessment Scale (MSAS) and the Patient Reported Outcomes Measurement Information System (PROMIS) Global Health short form, which assesses general self-rated health (GSRH), global physical health (GPH), and global mental health (GMH), were administered. In-hospital assessments of GMH and GSRH predicted 14-day reutilization, whereas post-hospitalization assessments of MSAS and GPH predicted subsequent utilization. Notable limitations include the small sample size, a high proportion of uninsured patients and racial/ethnic minorities, and the inability to capture utilization at hospitals other than the one studied.
Bottom line: PRO measures are likely to be useful predictors in clinical medicine. More research is needed to improve their generalizability and to identify the specific measures with the greatest predictive value.
Citation: Hinami K, Smith J, Deamant CD, DuBeshter K, Trick WE. When do patient-reported outcome measures inform readmission risk? J Hosp Med. 2015;10(5):294-300.
Improving Patient Satisfaction
Patient satisfaction has received increased attention in recent years, which we believe is well deserved and long overdue. Anyone who has been hospitalized, or has had a loved one hospitalized, can appreciate that there is room to improve the patient experience. Dedicating time and effort to improving the patient experience is consistent with our professional commitment to comfort, empathize with, and partner with our patients. Though patient satisfaction is itself an outcome worthy of our attention, it is also positively associated with measures related to patient safety and clinical effectiveness.[1, 2] Moreover, patient satisfaction is the only publicly reported measure that represents the patient's voice,[3] and it accounts for a substantial portion of the Centers for Medicare and Medicaid Services payment adjustments under the Hospital Value-Based Purchasing Program.[4]
However, all healthcare professionals should understand some key fundamental issues related to the measurement of patient satisfaction. The survey from which data are publicly reported and used for hospital payment adjustment is the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey, developed by the Agency for Healthcare Research and Quality.[5, 6] HCAHPS is sent to a random sample of 40% of hospitalized patients between 48 hours and 6 weeks after discharge. The HCAHPS survey uses ordinal response scales (eg, never, sometimes, usually, always) that generate highly skewed results toward favorable responses. Therefore, results are reported as the percent top box (ie, the percentage of responses in the most favorable category) rather than as a median score. The skewed distribution of results indicates that most patients are generally satisfied with care (ie, most respondents do not have an axe to grind), but also makes meaningful improvement difficult to achieve. Prior to public reporting and determination of effect on hospital payment, results are adjusted for mode of survey administration and patient mix. The same is not true when patient satisfaction data are used for internal purposes. Hospital leaders typically do not perform statistical adjustment and therefore need to be careful not to make apples-to-oranges-type comparisons. For example, obstetric patient satisfaction scores should not be compared to general medical patient satisfaction scores, as these populations tend to rate satisfaction differently.
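To make the top-box convention concrete, the short Python sketch below computes the percent top box for a hypothetical set of responses to a single HCAHPS-style item; the response counts are invented for illustration, and the sketch omits the mode-of-administration and patient-mix adjustments applied before public reporting.

from collections import Counter

# Hypothetical responses to a single HCAHPS-style item (invented data).
responses = ["always"] * 180 + ["usually"] * 45 + ["sometimes"] * 20 + ["never"] * 5
counts = Counter(responses)

# Percent top box: the share of responses in the most favorable category.
percent_top_box = 100 * counts["always"] / len(responses)
print(f"Percent top box: {percent_top_box:.1f}%")   # 72.0%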
The HCAHPS survey questions are organized into domains of care, including satisfaction with nurses and satisfaction with doctors. Importantly, other healthcare team members may influence patients' perceptions in these domains. For example, a patient responding to nurse communication questions may also reflect on experiences with patient care technicians, social workers, and therapists. A patient responding to physician communication questions might also reflect on experiences with advanced practice providers. A common mistake is attributing satisfaction with doctors to the individual who served as the discharge physician. Many readers have likely seen patient satisfaction reports broken out by discharge physician with the expectation that giving this information to individual physicians will serve as useful formative feedback. The reality is that patients see many doctors during a hospitalization. To illustrate this point, we analyzed data from 420 patients admitted to our nonteaching hospitalist service who had completed an HCAHPS survey in 2014. We found that the discharge hospitalist accounted for only 34% of all physician encounters. Furthermore, research has shown that patients' experiences with specialist physicians also have a strong influence on their overall satisfaction with physicians.[7]
Having reliable patient satisfaction data on specific individuals would be a truly powerful formative assessment tool. In this issue of the Journal of Hospital Medicine, Banka and colleagues report on an impressive approach incorporating such a tool to give constructive feedback to physicians.[8] Since 2006, the study site had administered surveys to hospitalized patients that assess their satisfaction with specific resident physicians.[9] However, residency programs only reviewed the survey results with resident physicians about twice a year. The multifaceted intervention developed by Banka and colleagues included directly emailing the survey results to internal medicine resident physicians in real time while they were on service, a 1-hour conference on best communication practices, and a reward program in which 3 residents were identified monthly to receive department-wide recognition via email and a generous movie package. Using difference-in-differences regression analysis, the investigators compared changes in patient satisfaction results for internal medicine residents to results for residents from other specialties (who were not part of the intervention). The percentage of patients who gave top-box responses to all 3 physician-related questions and to the overall hospital rating was significantly higher for the internal medicine residents.
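For readers unfamiliar with the method, the following Python sketch illustrates the basic difference-in-differences calculation on invented top-box proportions; it is a conceptual illustration only and does not reproduce the regression model, covariates, or data used by Banka and colleagues.

# Difference-in-differences on invented top-box rates (conceptual sketch only).
pre  = {"internal_medicine": 0.70, "other_specialties": 0.72}   # before intervention
post = {"internal_medicine": 0.78, "other_specialties": 0.73}   # after intervention

change_intervention = post["internal_medicine"] - pre["internal_medicine"]
change_comparison = post["other_specialties"] - pre["other_specialties"]
did_estimate = change_intervention - change_comparison
print(f"Difference-in-differences estimate: {did_estimate:+.2f}")   # +0.07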
The findings from this study are important, because no prior study of an intervention, to our knowledge, has shown a significant improvement in patient satisfaction scores. In this study, feedback was believed to be the most powerful factor. The importance of meaningful, timely feedback in medical education is well recognized.[10] Without feedback there is poor insight into how intended results from specific actions compare with actual results. When feedback is lacking from external sources (in this case the voice of the patient), an uncontested sense of mastery develops, allowing mistakes to go uncorrected. This false sense of mastery contributes to an emotional and defensive response when performance is finally revealed to be less than optimal. The simple act of giving more timely feedback in this study encouraged self‐motivated reflection and practice change aimed at improving patient satisfaction, with remarkable results.
The study should inspire physician leaders in various hospital settings, as well as researchers, to develop and evaluate similar programs to improve patient satisfaction. We agree with the investigators that the approach should be multifaceted. Feedback to specific physicians is a powerful motivator, but it needs to be combined with strategies to enhance communication skills. Brief conferences are less likely to have a lasting impact on behaviors than strategies such as coaching and simulation-based training.[11] Interventions should include recognition and reward to acknowledge exceptional performance and build friendly competition.
The biggest challenge to adopting an intervention such as the one used in the Banka study relates to the feasibility of implementing physician-specific patient satisfaction reporting. Several survey instruments are available to assess satisfaction with specific physicians.[9, 12, 13] However, who will administer these instruments? Most hospitals do not have undergraduate students available. Hospitals could use their volunteers, but this is not likely to be a sustainable solution. Hospitals could consider administering the survey via email, but many hospitals are just starting to collect patient email addresses and many patients do not use email. Once data are collected, who will conduct analyses and create comparative reports? Press Ganey recently developed a survey assessing satisfaction with specific hospitalists, using photographs, and offers the ability to create comparative reports.[14] Their service addresses the analytic challenge, but the quandary of survey administration remains.
In conclusion, we encourage hospital medicine leaders to develop and evaluate multifaceted interventions to improve patient satisfaction such as the one reported by Banka et al. Timely, specific feedback to physicians is an essential feature. The collection of physician‐specific data is a major challenge, but not an insurmountable one. Novel use of personnel and/or technology is likely to play a role in these efforts.
Disclosure: Nothing to report.
- A systematic review of evidence on the links between patient experience and clinical safety and effectiveness. BMJ Open. 2013;3(1).
- Patients' perception of hospital care in the United States. N Engl J Med. 2008;359(18):1921–1931.
- Medicare.gov. Hospital Compare. Available at: http://www.medicare.gov/hospitalcompare/search.html. Accessed April 27, 2015.
- Centers for Medicare 67(1):27–37.
- Who's behind an HCAHPS score? Jt Comm J Qual Patient Saf. 2011;37(10):461–468.
- Improving patient satisfaction through physician education, feedback, and incentives. J Hosp Med. 2015;10(8):497–502.
- Promoting patient-centred care through trainee feedback: assessing residents' C-I-CARE (ARC) program. BMJ Qual Saf. 2012;21(3):225–233.
- Feedback in clinical medical education. JAMA. 1983;250(6):777–781.
- Impact of hospitalist communication-skills training on patient-satisfaction scores. J Hosp Med. 2013;8(6):315–320.
- Assessing patient perceptions of hospitalist communication skills using the Communication Assessment Tool (CAT). J Hosp Med. 2010;5(9):522–527.
- Development and validation of the tool to assess inpatient satisfaction with care from hospitalists. J Hosp Med. 2014;9(9):553–558.
- Press Ganey. A true performance solution for hospitalists. Available at: http://www.pressganey.com/ourSolutions/patient‐voice/census‐based‐surveying/hospitalist.aspx. Accessed April 27, 2015.
Letter to the Editor
The impact of rapid response teams (RRTs) on preventing non-intensive care unit (ICU) cardiopulmonary arrests (CPAs) and decreasing in-hospital mortality is a complex issue that goes beyond the structure of RRTs. The success of RRTs depends upon institutional culture, resources, RRT structure, hospital size, and expertise. These are some of the major reasons RRTs have failed to show benefit consistently across the board: not all institutions are able to muster enough resources, reasonable nurse-to-patient ratios, advanced ongoing training, and easily accessible onsite intensivists or physicians. An institutional culture in which nurses and ancillary staff do not feel intimidated or fear retaliation for calling unnecessary RRT codes is extremely important for the success of any RRT program. We have observed at our institution and others that improvement in culture alone reduced non-ICU CPAs, although it also led to a higher number of RRT codes. As a hospitalist leader of RRTs for many years, I sometimes felt as if unwarranted RRT codes were overwhelming already busy hospitalists. However, the real improvement in patient mortality and morbidity reminded us of the importance of creating an open and stress-free environment for the nurse responsible for initiating RRTs. Davis et al.'s novel RRT program showed improvement in non-ICU CPAs and in-hospital mortality.[1] The researchers and their institutions did a great job of improving outcomes, perhaps by devoting enough resources and creating a positive work environment for the nursing staff.
A large observational study conducted in 9 European countries showed that an increase in a nurse's workload by 1 patient increased the likelihood of a hospitalized patient dying within 30 days of admission by 7%; furthermore, every 10% increase in nurses with a bachelor's degree was associated with a 7% decrease in this likelihood.[2] Nursing staff can activate RRTs in a timely fashion if they are not overworked or undertrained. Additionally, having an intermediate-care unit for patients who do not quite meet ICU criteria yet require more intensive care has been shown to decrease in-hospital mortality.[3] A study by Ghaferi et al. showed that survival after in-hospital complications following pancreatectomy was higher in hospitals with teaching status, more than 200 beds, an average daily census greater than 50% of capacity, increased nurse-to-patient ratios, and high-level hospital technology.[4] Therefore, many factors could have had an impact on in-hospital mortality in this study.[1] It would be interesting to know whether the novel RRT's success rates differed between the primary medical center and the smaller sister campus in the study by Davis et al.[1] Activation of RRTs based on changes in vital signs is challenging in the elderly.[5] Therefore, geriatrics-unit staff need special training for RRTs to be successful.
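To illustrate how the cited staffing estimates might combine, the Python sketch below treats the reported 7% figures as multiplicative odds-ratio-style effects under a hypothetical scenario; the scenario and the multiplicative combination are our own illustrative assumptions, not results from the cited study.

# Hypothetical scenario combining the cited staffing effects as
# multiplicative odds-ratio-style multipliers (illustration only).
or_per_extra_patient = 1.07    # ~7% higher odds per additional patient per nurse
or_per_10pct_bsn = 0.93        # ~7% lower odds per 10% more bachelor's-prepared nurses

extra_patients = 2             # hypothetical increase in each nurse's workload
bsn_increase_pct = 20          # hypothetical increase in bachelor's-prepared nurses

combined = (or_per_extra_patient ** extra_patients) * (or_per_10pct_bsn ** (bsn_increase_pct / 10))
print(f"Combined odds multiplier for 30-day mortality: {combined:.2f}")   # ~0.99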
In the study by Davis et al., the charge nurse on each inpatient unit conducted rounds on at-risk patients throughout each shift.[1] Additionally, the charge nurse responded to each RRT code and received intensive training. This strategy may have contributed to the benefit shown by the novel RRT strategy. However, most community hospitals are struggling to maintain adequate nurse-to-patient ratios due to cost constraints, and adding a significant burden to the already busy charge nurse's responsibilities is difficult for some institutions to sustain. Having a highly trained, dedicated, multidisciplinary team is likely to improve outcomes, but more sustainable solutions for smaller community hospitals are needed. As this study has demonstrated, devoting more resources to patients may pay off over time. Public and private payers should also recognize this as a quality-of-care indicator and reward hospitals making improvements in this arena.
- A novel configuration of a traditional rapid response team decreases non-intensive care unit arrests and overall hospital mortality. J Hosp Med. 2015;10(6):352–357.
- Nurse staffing and education and hospital mortality in nine European countries: a retrospective observational study. Lancet. 2014;384(9946):851–852.
- Hospital mortality of adults admitted to intensive care units in hospitals with and without intermediate care units: a multicentre European cohort study. Crit Care. 2014;18(5):551.
- Hospital characteristics associated with failure to rescue from complications after pancreatectomy. J Am Coll Surg. 2010;211(3):325–330.
- Differences in vital signs between elderly and nonelderly patients prior to ward cardiac arrest. Crit Care Med. 2015;43(4):816–822.
Letter to the Editor
We very much appreciate Dr. Singh's interest and insight regarding our article, "A Novel Configuration of a Traditional Rapid Response Team Decreases Non-Intensive Care Unit Arrests and Overall Hospital Mortality."[1] Dr. Singh makes several critical points that are worth emphasis and additional commentary.
The importance of cultural change in the success of a rapid response team (RRT) program cannot be emphasized enough. The willingness of frontline staff to access an RRT is based on a belief in the potential benefit to the patient as well as a lack of concern about the repercussions of such an activation, whether from the primary physician team or the RRT members themselves. Both of these require institutional commitment, ideally from administrative and clinical leadership, as well as routine, direct feedback to providers on the effectiveness of the program. Both have been addressed in our advanced resuscitation training (ART) program, which has replaced traditional life-support training and consolidates many efforts related to patient safety and preventable death.[2] The ART program represents adaptive training, in which arrest prevention and the importance of institutional processes such as the RRT are emphasized for non-intensive care unit staff.
Our approach to RRT configuration reflects the resource constraints referenced by Dr. Singh. Although the ideal RRT would include critical-care nurses located physically outside the intensive care unit to allow regular assessment of at-risk patients, this would have required expenditures for which funding was not available. In our opinion, a reasonable alternative was to train charge nurses from non-intensive care units as RRT members. The role expectation for these charge nurses included twice-daily rounds, and their proximity to at-risk patients facilitated regular reassessments throughout each shift. In addition, the ART program allows routine annual training for bedside nurses that emphasizes code/RRT issues and underscores the importance of early recognition for patient safety and the prevention of avoidable deaths. The ART program actually reduced life-support expenditures and allowed implementation of both our RRT and institutional cardiac arrest resuscitation programs in a cost-effective manner.
The last point made by Dr. Singh that we wish to address involves the balance between over- and under-utilization of RRT resources. Our RRT-to-code ratios are relatively favorable, allowing the program to exist with efficient allocation of resources. This may be due, in part, to our approach to training in the recognition of deterioration. Most RRT programs appear to emphasize vital sign thresholds or use of scoring systems for activation, both of which rely upon single sets of vital signs. Instead, we focus on pattern recognition, emphasizing dynamic changes in vital signs and other clinical assessments and de-emphasizing absolute values. We believe that this helps develop clinical decision-making skills and improves both sensitivity and specificity with regard to RRT activation. Again, the adaptive nature of the ART program allows annual training to enhance these skills without additional expense to the institution.
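As a purely hypothetical illustration of the distinction between threshold-based and trend-based triggers, the Python sketch below uses invented vital-sign values and arbitrary rules; it does not represent the ART program's actual activation criteria.

# Invented heart-rate observations and arbitrary trigger rules (illustration
# only; not the ART program's actual activation criteria).
heart_rates = [88, 96, 104, 112, 121]   # successive observations for one patient

threshold_rule_fires = heart_rates[-1] > 130                  # single absolute cutoff
trend_rule_fires = (heart_rates[-1] - heart_rates[0]) >= 30   # sustained upward drift

print(f"Absolute-threshold rule fires: {threshold_rule_fires}")   # False
print(f"Trend-based rule fires: {trend_rule_fires}")              # True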
We very much appreciate Dr. Singh's comments and urge other institutions to listen to his message carefully. There is no substitute for efforts spent establishing a just culture and creating an institution that supports its staff in addressing patient safety issues, ultimately reducing preventable deaths.
1. A novel configuration of a traditional rapid response team decreases non-intensive care unit arrests and overall hospital mortality. J Hosp Med. 2015;10:352-357.
2. A performance improvement-based resuscitation programme reduces arrest incidence and increases survival from in-hospital cardiac arrest. Resuscitation. 2015;92:63-69.
PRONE Score Can Track Medicolegal Complaints
Clinical question: Is there a standardized way to identify doctors at high risk of incurring repeated medicolegal events?
Background: Medicolegal agencies typically react to episodes of substandard care rather than intervening to prevent them, in part because robust prediction tools at the individual practitioner level are lacking. Previous studies have tried to predict complaints against individual practitioners but have had limited success.
Study design: Retrospective cohort study.
Setting: Complaints commissions in all Australian states except South Australia, covering 70,200 practicing doctors.
Synopsis: Researchers used administrative data to analyze a national sample of 13,849 formal complaints lodged by patients in Australia over a 12-year period against 8,424 doctors. Using multivariate logistic regression, they estimated predictors of a subsequent complaint within two years of an index complaint. These predictors were incorporated into a simple predictive algorithm, the PRONE (Predicted Risk Of New Event) score, designed for application at the level of the individual doctor. PRONE is a 22-point scoring system that estimates a doctor’s future complaint risk based on specialty, sex, the number of previous complaints, and the time since the last complaint.
Because the scoring system showed strong validity and reliability, regulators could harness such information to target quality improvement interventions and prevent substandard care and patient dissatisfaction.
Bottom line: The PRONE score appears to be a valid method for assessing individual doctors’ risks of attracting recurrent complaints.
Citation: Spittal MJ, Bismark MM, Studdert DM. The PRONE score: an algorithm for predicting doctors' risks of formal patient complaints using routinely collected administrative data. BMJ Qual Saf. 2015;24(6):360-368.
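To make the idea of a point-based risk score concrete, the sketch below shows how a PRONE-style calculation might be applied in code. It is a minimal illustration only: the predictors (specialty, sex, number of previous complaints, time since the last complaint) and the 22-point range come from the study described above, but the point weights, the specialty groupings, and the function name prone_like_score are hypothetical and do not reproduce the published algorithm, which is detailed in the citation.

# Illustrative only: predictors and the 22-point range are from the article above;
# the weights and specialty groupings below are hypothetical, not the published PRONE weights.
HYPOTHETICAL_SPECIALTY_POINTS = {"general practice": 2, "surgery": 3, "other": 1}

def prone_like_score(specialty, sex, prior_complaints, years_since_last_complaint):
    """Return an illustrative doctor-level complaint-risk score on a 0-22 scale."""
    points = HYPOTHETICAL_SPECIALTY_POINTS.get(specialty, 1)
    points += 1 if sex == "male" else 0
    points += min(prior_complaints, 10)                         # more prior complaints, more points
    points += max(0, 6 - 2 * int(years_since_last_complaint))   # recent complaints weigh more
    return min(points, 22)

# Example: a surgeon with 4 prior complaints, the last of them 1 year ago.
print(prone_like_score("surgery", "male", prior_complaints=4, years_since_last_complaint=1))  # 12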
Novel SYK inhibitor shows ‘good early evidence’ of activity
LUGANO—The phase 1, first-in-human study of the novel SYK inhibitor TAK-659 is showing “good early evidence” of antitumor activity in patients with lymphoma, according to investigators.
The agent also appears to be fairly well tolerated, with 10 categories of adverse events occurring in 2 or more patients.
Adam M. Petrich, MD, of Northwestern University in Evanston, Illinois, presented results from this ongoing study at the 13th International Congress on Malignant Lymphoma (13-ICML) as abstract 039.*
The study is supported by Millennium Pharmaceuticals, Inc., a wholly owned subsidiary of Takeda Pharmaceutical Company Limited.
Dr Petrich said the B-cell receptor signaling pathway is “very fertile ground with respect to development for novel targeting, particularly of B-cell malignancies, and SYK—the spleen tyrosine kinase—is an integral component of this.”
Investigators believe SYK has implications beyond B-cell lymphoma, including EBV-related malignancies, solid tumors, and myeloid leukemias.
Preclinical findings
In vitro experiments with TAK-659 showed “profound inhibition” of both SYK and FLT3, as indicated by the low IC50 levels, Dr Petrich said.
He also pointed out that the EC50 values compared favorably with those of ibrutinib and idelalisib, with generally lower values across a broad panel of diffuse large B-cell lymphoma (DLBCL), follicular lymphoma (FL), and chronic lymphocytic leukemia cell lines.
In animal models, TAK-659 exhibited dose-dependent tumor inhibition.
“And if we look at both germinal center B and non-germinal center B subtypes of large-cell lymphoma, we see activity across both types,” Dr Petrich said.
Phase 1 study
Investigators are currently conducting the phase 1 study, which follows a standard 3+3 dose-escalation schema. The data cutoff for the ICML presentation was April 13; at that point, the dose-escalation phase was still underway, and the maximum tolerated dose had not yet been reached.
Based on preclinical data, the team projected the efficacious dose for humans to be approximately 600 to 1200 mg per day. Patients were started at 60 mg, and, at the next planned step of 120 mg, 2 patients developed asymptomatic lipase elevations.
“For that reason, we revised the protocol, allowed for those to not be considered dose-limiting toxicities, and explored intermediate doses,” Dr Petrich explained.
So the protocol now includes intermediate doses of 80 and 100 mg. Dr Petrich’s presentation focused on the 4 doses—60, 80, 100, and 120 mg taken orally once daily.
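For readers less familiar with the design, the sketch below illustrates the decision logic of a generic 3+3 dose-escalation rule. It is a simplified, hypothetical illustration, not the TAK-659 protocol itself, which, as noted above, was amended to add intermediate dose levels and to exclude asymptomatic lipase elevation from the dose-limiting toxicity (DLT) definition.

def three_plus_three_decision(n_treated, n_dlts):
    """Generic 3+3 rule for one dose level (illustrative; not the study protocol)."""
    if n_treated == 3:
        if n_dlts == 0:
            return "escalate to the next dose level"
        if n_dlts == 1:
            return "expand this dose level to 6 patients"
        return "stop: maximum tolerated dose exceeded; de-escalate"
    if n_treated == 6:
        if n_dlts <= 1:
            return "escalate to the next dose level"
        return "stop: maximum tolerated dose exceeded; de-escalate"
    return "continue enrolling at this dose level"

# Example: 1 DLT among the first 3 patients triggers expansion of the cohort.
print(three_plus_three_decision(3, 1))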
He said the observed human clearance of TAK-659 was approximately 3- to 4-fold lower than predicted based on the mouse pharmacokinetic (PK) data, which led to steady-state area under the curve values 3- to 4-fold higher in humans than predicted.
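The relationship Dr Petrich described follows directly from basic pharmacokinetics: for a drug with linear (dose-proportional) kinetics, steady-state exposure over a dosing interval is approximately dose divided by clearance, so clearance that is 3- to 4-fold lower than predicted yields an AUC that is 3- to 4-fold higher at the same dose. The numbers below are purely illustrative and are not TAK-659 parameters.

# Illustrative arithmetic, assuming linear PK: AUC over a dosing interval = dose / CL.
dose_mg = 100                                     # hypothetical once-daily dose
predicted_cl_l_per_h = 20.0                       # hypothetical clearance scaled from mouse PK
observed_cl_l_per_h = predicted_cl_l_per_h / 4    # about 4-fold lower than predicted

auc_predicted = dose_mg / predicted_cl_l_per_h    # mg*h/L
auc_observed = dose_mg / observed_cl_l_per_h      # mg*h/L

print(auc_observed / auc_predicted)               # 4.0: AUC scales inversely with clearance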
Patient demographics
The investigators enrolled 21 patients: 12 with solid tumors, 6 with DLBCL, and 3 with FL. The median age was 60 years, 66% were male, and 62% had received 4 or more prior therapies.
The median number of TAK-659 treatment cycles was 2 (range, 1–10), and 5 patients are still on active treatment. Dr Petrich pointed out that 4 of the 5 longest-treated patients have DLBCL, and “the record holder with DLBCL is about to celebrate 1 year on therapy.”
Safety
“The safety profile in humans showed that [TAK-659] was actually quite tolerable,” Dr Petrich said.
There were 10 categories of treatment-related adverse events (AEs) that occurred in 2 or more patients. They were, in descending order, fatigue, anemia, diarrhea, elevated AST, hypophosphatemia, nausea, rash, elevated lipase, elevated ALT, and anorexia.
The majority of AEs were grade 1 or 2. However, there were grade 3/4 cases of anemia, diarrhea, elevated AST, and hypophosphatemia. And elevated lipase—the asymptomatic, dose-limiting toxicity for which the protocol was modified—consisted entirely of grade 3 or 4 events.
Episodes of neutropenia and thrombocytopenia occurred in 1 patient each, and both were grade 1.
“So [TAK-659] seems quite well tolerated in that regard as well,” Dr Petrich observed.
The plasma profiles on days 1 and 15 of cycle 1 indicate that PK steady-state conditions are generally achieved by day 8, with moderate accumulation after repeated once-daily dosing for 15 days.
Antitumor activity
Of the 12 evaluable patients, 5 had tumor shrinkage at the 60, 80, or 100 mg dose levels. Three of the 6 DLBCL patients experienced tumor shrinkage, and there were “2 dramatic responses in patients with follicular lymphoma, including 1 CR [complete response],” Dr Petrich said.
One of these FL patients had an aggressive phenotype and never had a previous response last longer than 20 months.
“[H]e actually achieved a CR within 2 cycles—a dramatic response for his disease—and he remains on treatment, and he’s up to cycle 5 now,” Dr Petrich said.
The team concluded that the PK data support daily dosing, despite lower clearance than originally predicted.
“[There is] good early evidence of antitumor activity and no significant safety signals,” Dr Petrich said. “And the [hematologic] toxicity profile, in particular, seems to suggest this is a well-tolerated drug.”
The investigators are conducting expansion cohorts and are considering future combination studies. They recently activated a study in acute myeloid leukemia because TAK-659 has FLT3 inhibitory properties.
*Information in the abstract differs from that presented at the meeting.
PI3K inhibitors may promote cancer spread
Although PI3K inhibitors have been designed to treat cancer, new research indicates these drugs may actually exacerbate the disease.
Researchers found evidence to suggest that treatment with PI3K inhibitors alone can promote more aggressive tumor cell behavior and increase the likelihood that cancer will spread.
PI3K inhibitors appeared to reprogram the mitochondria of tumor cells and move them to “strategic” positions for invasion.
However, the researchers believe that targeting mitochondrial function along with PI3K could prevent this effect.
Dario C. Altieri, MD, of The Wistar Institute in Philadelphia, Pennsylvania, and his colleagues described these findings in PNAS.
The researchers decided to investigate how mitochondria are reprogrammed when exposed to PI3K inhibition and how mitochondria might prevent targeted agents from being as effective as expected.
“Our prior studies have confirmed that tumor cells rely on energy produced by mitochondria more significantly than previously thought,” Dr Altieri said.
“What we have shown in this study is that, in somewhat of a paradox, treatment with a PI3K inhibitor causes a tumor cell’s mitochondria to produce energy in a localized manner, promoting a far more aggressive and invasive phenotype. The treatment appears to be doing the opposite of its intended effect.”
The study showed that treatment with a PI3K inhibitor causes the mitochondria to migrate to the peripheral cytoskeleton of the tumor cells.
While the mitochondria in untreated cells cluster around the cell’s nucleus, exposure of tumor cells to PI3K therapy causes the mitochondria to move to specialized regions of the cell’s membrane implicated in cell motility and invasion.
In this “strategic” position, tumor mitochondria are ideally positioned to provide a concentrated source of energy to support an increase in cell migration and invasion.
However, the researchers said the dependence of this response on mitochondrial function may offer a new therapeutic angle.
Dr Altieri and his team have shown that targeting mitochondrial functions for tumor therapy is feasible and dramatically enhances the anticancer activity of PI3K inhibitors when used in combination.
“These findings continue to support the idea that the mitochondria of tumor cells are crucial to tumor survival and proliferation,” Dr Altieri said. “It’s certainly counterintuitive that a drug designed to fight cancer may in actuality help it spread, but by identifying why this is happening, we can develop better strategies that allow these drugs to treat tumors the way they should.”
Inhibitors increase burden of hemophilia care
TORONTO—When children with hemophilia develop inhibitors, their caregivers shoulder a greater burden, according to a pilot study.
Researchers surveyed 40 subjects on the burden of caring for a child with hemophilia and found that inhibitor development significantly increased the burden of care.
But other factors—such as the number of bleeds a child had experienced in the last 12 months—had no significant impact.
Sylvia von Mackensen, PhD, of the University Medical Centre Hamburg-Eppendorf in Hamburg, Germany, and her colleagues presented these findings at the ISTH 2015 Congress (abstract PO256-WED).
The study was a pilot test of the HEMOCAB questionnaire, which consists of 59 questions mapped to 13 domains. Caregivers were asked to select the response that best described their burden, and responses were scored on a scale of 0 to 100, with higher values corresponding to greater burden.
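The report describes only the structure of the instrument (59 items mapped to 13 domains, scored 0 to 100, with higher values meaning greater burden), not its scoring rules. As a rough illustration of how such questionnaires are commonly scored, the sketch below applies a linear 0-100 transform to Likert-type item responses; the function name, the 0-to-4 item range, and the example responses are assumptions for illustration and are not the actual HEMOCAB algorithm.

def domain_score_0_100(item_responses, min_resp=0, max_resp=4):
    """Linear 0-100 transform of Likert-type items (illustrative; not the HEMOCAB algorithm).

    Each response is assumed to range from min_resp to max_resp, with higher
    values indicating greater burden.
    """
    raw = sum(item_responses)
    n = len(item_responses)
    return 100 * (raw - n * min_resp) / (n * (max_resp - min_resp))

# Example: five items answered on a 0-4 scale; all-maximum answers would map to 100.
print(domain_score_0_100([2, 3, 1, 4, 0]))   # 50.0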
Forty caregivers completed the questionnaire. Each was caring for a child with hemophilia, with or without inhibitors, who was younger than 22 years of age.
Three-quarters of the caregivers were mothers, 17.5% were fathers, and 7.5% were grandmothers. The mean age of caregivers was 39.32 ± 8.9 years (range, 27-66), and the mean age of the hemophilia patients was 10.98 ± 5.5 years (range, 1-21).
Most of the patients had hemophilia A (95%), and most had severe disease (77.5%). Six children (15%) had inhibitors. Overall, patients had experienced an average of 4.83 ± 8.9 bleeds (range, 0-52) in the previous 12 months.
Most of the patients (88.5%) were receiving prophylaxis. Caregivers said they spent 8.69 ± 7.7 hours per month on infusion and 3.84 ± 6.7 hours (range, 0-30) per month traveling to hemophilia treatment facilities.
Size of burden
Caregivers said the most burdensome aspects of care were the caregivers’ perception of the child (38%), emotional stress (36%), financial burden (34%), and the impact of care on the caregiver (31%).
For nearly all of the domains assessed (emotional stress, personal sacrifice, medical management, work situation, and so on), a caregiver's burden was significantly higher if the child had inhibitors. The only exception was school-related burden.
When the researchers analyzed the impact of other factors on care burden, they found that only inhibitor development had a significant impact.
There was no impact for orthopedic joint score, age of the caregiver, age of the child, time for infusion, time traveling to a hemophilia treatment center, the number of bleeds in the past 12 months, the number of children with hemophilia per household, home treatment, caregiver marital status, location, or caregiver education.
Frequency of burden
The HEMOCAB questionnaire also included scales assessing the frequency of burden. Dr von Mackensen and her colleagues presented data from these scales for the caregivers’ perception of their child, emotional stress, and finances.
Sixty percent of caregivers said they sometimes, often, or always feel their child’s condition is a difficult situation. Seventy-three percent of caregivers expressed feelings of sadness about informing their child of what he can and cannot do due to his illness.
Seventy-four percent of caregivers said they sometimes, often, or always feel afraid their child might get injured when they are not around to help. Forty-six percent of caregivers reported feeling afraid their child’s condition might worsen, and 36% said they feared their child might die from his condition.
Forty-six percent of caregivers said their child’s hemophilia sometimes, often, or always causes financial problems. And 33% of caregivers said that, at least sometimes, their family does not have enough money because of their child’s hemophilia.
HEMOCAB is a trademark of Novo Nordisk Health Care AG. Dr von Mackensen received a consulting fee from Novo Nordisk for developing the HEMOCAB questionnaire. Leonard A. Valentino, MD, of Rush University Medical Center in Chicago, Illinois, was involved in developing the questionnaire as well but did not participate in the pilot study.
The other researchers involved in the study have received funding/consulting fees from—or are employees of—Novo Nordisk, Baxter, Bayer, OctaPharma, CSL Behring, OPKO Health, and Selexys.
DDW: LINX device beneficial, safe for GERD
WASHINGTON – Five-year follow-up data on the magnetic device approved for treating gastroesophageal reflux disease confirm its long-term safety and efficacy, Dr. Robert A. Ganz reported at the annual Digestive Disease Week.
Five years after device implantation, the proportion of patients experiencing moderate to severe regurgitation had dropped to about 1%, from almost 60% at baseline, and two-thirds of patients were not taking any proton pump inhibitors (PPIs), said Dr. Ganz, chief of gastroenterology at Abbott Northwestern Hospital, Minneapolis, and one of the study investigators. These were among the results of the study that evaluated the device, the LINX Reflux Management System. The device was approved by the Food and Drug Administration (FDA) in 2012 for the treatment of people with GERD, as defined by abnormal pH testing, who continue to have chronic symptoms despite maximum medical therapy for reflux.
“Magnetic sphincter augmentation should be considered first-line surgical therapy for those with gastroesophageal reflux disease, based on the results of this study,” he said.
The 2-year results of the prospective, multicenter study were the basis of the FDA approval of the device, described by the manufacturer, Torax Medical, as a “small implant [composed] of interlinked titanium beads with magnetic cores” that is implanted during standard laparoscopy. “The magnetic attraction between the beads augments the existing esophageal sphincter’s barrier function to prevent reflux,” according to the company.
The study enrolled 100 patients with reflux disease (median age, 53 years) who had experienced typical heartburn for at least 6 months, with or without regurgitation, and had been taking PPIs daily for at least 3 months (median use, 5 years). Patients had had GERD for a median of 10 years (range, 1-40 years). People who had any type of previous gastric or esophageal surgery, Barrett’s esophagus, a hiatal hernia greater than 3 cm, a body mass index over 35 kg/m2, or grade C or D esophagitis were excluded.
The device was implanted in all patients, who served as their own controls; 85 patients were followed through 5 years (6 were lost to follow-up, the device was explanted in 6 patients, 2 patients did not consent to extended follow-up, and 1 patient died of an unrelated cancer). The median procedure time was 36 minutes (range, 7-125 minutes); all procedures were completed successfully with no intraoperative complications, and all patients were discharged within 24 hours on an unrestricted diet.
The median total Gastroesophageal Reflux Disease–Health-Related Quality of Life (GERD-HRQL) score at baseline was 27 points among those not on PPIs and 11 points on PPIs, dropping to 4 points at 5 years off PPIs. At baseline, 95% of patients expressed dissatisfaction related to reflux, which dropped to 7% at year 5. Moderate to severe heartburn was reported by 89% at baseline, dropping to about 12% at year 5. The proportion of patients experiencing moderate to severe regurgitation dropped from 57% at baseline to about 1% at 5 years, Dr. Ganz said.
At baseline, 100% were taking PPIs every day, compared with 15% at 5 years. (At 5 years, 75% had discontinued PPIs, and about 9% reported PRN use only). Grade A and B esophagitis decreased from 40% at baseline to 16% at 5 years, at which point most cases were grade A, and there were no patients with grade C or D esophagitis, he said. In addition, at 5 years, 100% of patients “reported the ability to belch, and those needing to vomit – about 16% – reported the ability to vomit,” demonstrating that normal physiology was preserved with the device.
At 5 years, there were no device erosions or migrations, or any significant adverse events other than dysphagia, which “was typically mild and not associated with weight loss and tended to resolve over time,” from about 70% in the first few weeks after surgery to 11% at 1 year and 7% at 5 years, Dr. Ganz said.
In seven cases, the device was removed laparoscopically with no complications, and gastric anatomy was preserved for future treatments. All removals were elective. The device was removed in four patients because of dysphagia, which completely resolved in those patients. One patient had the device removed because of vomiting of unknown cause that persisted after removal. Another two patients who “had the device removed for disease management” continued to experience reflux and had “uneventful” Nissen fundoplication, he said.
“Five years after magnetic augmentation, we have demonstrated objective evidence of reduction in acid exposure and in the majority of patients, normalized pH [and] we demonstrated significant and durable improvement in all group parameters measured, with preservation of fundic anatomy and normal physiology, with the ability to belch and vomit,” Dr. Ganz concluded. The results also show that the “procedure is reproducible, safe and reversible if necessary,” he added, noting that one of the limitations of the study was that subjects served as their own controls. During the discussion period, he was asked about hiatal hernia repairs, an apparent trend to “decay” from years 1 to 5 in some parameters measured, and dysphagia after the procedure.
About 40% of the patients in the study had a hiatal hernia, and about one-third of these patients had a hernia repair. A subgroup analysis of the data is being performed to evaluate the impact of hernia repair, Dr. Ganz said.
PPI use increased from 8% in year 4 to 15% in year 5. The reason for this is difficult to determine, but “even though there is a bit of a decay, patients are still quite satisfied at 5 years,” Dr. Ganz remarked, also referring to the marked impact on regurgitation. Many U.S. patients use PPIs for reasons other than reflux, and studies show that many patients remain on PPIs after the Nissen procedure in the absence of pathologic pH scores, he pointed out.
Compared with the type of dysphagia patients experience after the Nissen procedure, which is immediate and improves with time, Dr. Ganz said that the dysphagia associated with the device “seemed to peak around 2 weeks and then it slowly improved with time, so this may be more of a scar tissue–associated dysphagia than an edema dysphagia, but … it does improve with time.”
Three-year results of the study, with Dr. Ganz as the lead author, were published in 2013 (N. Engl. J. Med. 2013;368:719-27).
The study was funded by Torax Medical. Dr. Ganz had no disclosures related to the topic of this presentation.
*This story was updated 7/9/2015.
At DDW this year, Dr. Ganz reported on the 5-year follow-up of the original LINX data published in the New England Journal of Medicine in 2013 (368:719-27). The original study enrolled and followed 100 reflux patients for 3 years after implantation of the magnetic sphincter augmentation device, and it appears that the successful outcomes are sustained over the 5-year period. Most notable are the lasting improvement in regurgitation and the dramatic reduction in the requirement for maintenance PPI therapy. These findings led the investigators to suggest that this should be considered a first-line surgical therapy for GERD. Overall, this is not an unreasonable statement when one considers the current model of where antireflux surgery fits in the treatment of GERD. Medical therapy with proton pump inhibitors is extremely safe and effective for a substantial number of patients with GERD and, based on this risk/benefit profile, should be the first-line therapy (Am. J. Gastroenterol. 2013;108:308-28; quiz 329). However, this treatment is not perfect, and many patients continue to have persistent symptoms despite PPI therapy (Clin. Gastroenterol. Hepatol. 2012;10:612-9). Although the majority of PPI nonresponders have a functional etiology, there is a distinct population that continues to have refractory reflux-related symptoms, such as regurgitation, that escape the therapeutic target of PPIs. These patients will require augmentation of the antireflux barrier, and the LINX approach appears to be as effective as fundoplication in this regard (J. Am. Coll. Surg. 2015;221:123-8). The question is whether the side effect profile and durability of LINX are better than those of fundoplication. The answer here is not clear, and I would carefully state that LINX and fundoplication can be considered first-line surgical therapies for GERD patients who have documented pathologic acid gastroesophageal reflux and are intolerant of or not responding to PPIs.
Dr. John E. Pandolfino is professor of medicine and chief of the division of gastroenterology and hepatology at Northwestern University, Chicago. He is a speaker for AstraZeneca/Takeda and a consultant for EndoGastric Solutions.
AT DDW 2015
Key clinical point: A magnetic device designed to augment the lower esophageal sphincter is a surgical option that can be expected to provide long-term control of reflux symptoms in patients with GERD.
Major finding: Improvements 5 years after treatment with the LINX Reflux Management System include a drop in moderate to severe regurgitation from almost 60% of patients to about 1%, and two-thirds of patients no longer taking PPIs.
Data source: The multicenter, prospective study evaluated the long-term safety and efficacy of the device over 5 years in 100 patients with GERD, who served as their own controls; 85 were included in the 5-year follow-up.
Disclosures: The study was funded by the device manufacturer, Torax Medical. Dr. Ganz had no disclosures related to the topic of this presentation.