Impact of Demographic and Health System Variables on Survival in Early Stage (Stage I and II) Non-Small Cell Lung Carcinoma: A National Cancer Database Analysis

Goyal G, Nawal L, Monirul Islam KM, Silberstein PT, Ganti AK

Background: Non-small cell lung carcinoma (NSCLC) is the most common type of lung cancer. According to a Surveillance, Epidemiology, and End Results (SEER) database analysis, the 5-year survival rates for clinical stages IA, IB, IIA, and IIB NSCLC are 50%, 43%, 36%, and 25%, respectively. Even with advances in therapies in both the surgical and medical fields, patient outcomes remain suboptimal. Our aim was to assess the role of various demographic and insurance characteristics on patient survival in early stage (stage I and II) NSCLC.

Methods: This is a retrospective study of patients diagnosed with stage I and stage II NSCLC between 1998 and 2012 utilizing the National Cancer Database (NCDB) participant user file (PUF). The NCDB is a nationwide oncology outcomes database for more than 1,500 American College of Surgeons Commission on Cancer-accredited cancer programs. The impact of various factors on survival was analyzed using the Cox proportional hazards model.
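
As a rough illustration only (not code or data from the study), a multivariable analysis of this kind could be set up with a Cox proportional hazards model in Python. The file and column names below are hypothetical stand-ins for the NCDB variables described.

# Minimal sketch of a Cox proportional hazards analysis (hypothetical data and columns).
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical analytic file: one row per patient, with follow-up time in months,
# a death indicator, and the demographic and health system covariates described above.
df = pd.read_csv("early_stage_nsclc.csv")  # hypothetical file name

covariates = ["age", "sex", "race", "residence", "insurance", "diagnosis_period"]
model_df = pd.get_dummies(df[["followup_months", "died"] + covariates], drop_first=True)

cph = CoxPHFitter()
cph.fit(model_df, duration_col="followup_months", event_col="died")

# exp(coef) in the summary is the hazard ratio for each covariate; values above 1
# indicate worse survival for that group, as with the HRs reported below.
cph.print_summary()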

Results: A total of 304,092 patients with early stage NSCLC were analyzed for this study. On multivariate analysis, the factors associated with decreased survival were male sex (hazard ratio [HR] 1.32, P < .0001) compared with female sex, increasing age (HR 1.036, P < .0001), African American race (HR 1.15, P < .0001) compared with white race, and rural residence (HR 1.146, P < .0001) compared with metropolitan areas. Privately insured patients had better survival than uninsured patients (HR 0.674, P < .0001), whereas Medicaid patients had the worst survival (HR 1.076, P < .0001). In addition, patients diagnosed between 2008 and 2012 had better survival than those diagnosed earlier (HR 0.645, P < .0001).

Discussion: These data suggest a disparity in outcomes among patients with early stage NSCLC based on demographic and health system factors. Despite an overall improvement in survival from 1998 to 2012, attributable to improved therapies, significant differences persist by patient age, gender, race, residence, and insurance status. These differences could be secondary to decreased receipt of appropriate treatment in certain subgroups or to differences in cancer biology. Nevertheless, this study suggests room for improvement in health care delivery to all patients to achieve optimal outcomes.

Lesion Sprang Up Under His Nose (Well, to One Side, Actually …)

An 80-year-old man is brought in by family for evaluation of a lesion on his nose. It manifested several years ago, at a smaller size, but has recently and abruptly grown. Although asymptomatic, the lesion is disturbing to the patient, who can now see it out of the corner of his eye.

The patient worked all of his adult life in the outdoors, as a farm and ranch hand. He has an extensive history of nonmelanoma skin cancer; several lesions have been removed from his face and arm.

Since the patient lives alone and rarely has visitors, it has been months since anyone has seen him. But as soon as his son-in-law saw the patient, he was sufficiently alarmed by the lesion to insist that care be sought.

EXAMINATION
The patient’s facial skin shows abundant evidence of chronic, severe sun damage: a whitish, spongy look to the skin on his forehead and upper cheeks and a great deal of discoloration and scaling.

The lesion in question is a 3 × 1.5-cm, round, bulbous, smooth mass covering the right alar bulb. The surface is glassy-looking, with multiple telangiectasias. It is very firm but nontender on palpation. Shave biopsy is performed.

What is the diagnosis?

DISCUSSION
The biopsy results confirmed the suspicion of basal cell carcinoma (BCC). BCCs typically grow very slowly, often taking years to become noticeable, although not every BCC follows the rules. Some are more aggressive than others, both in terms of growth and clinical behavior.

It’s quite likely that in this case, the patient’s social isolation created the impression that his lesion grew abruptly and dramatically. (A subsequent eye exam revealed a number of problems, including severe presbyopia and advanced cataracts, so the patient himself might not have noticed the lesion for a while.) However, due to the large size and aggressive nature of the lesion—and the fact that the patient lives more than two hours from the nearest city, rendering his other treatment option, radiation, impractical—he was referred for Mohs surgery.

This process will establish clear surgical margins and provide acceptable closure. The latter may require reconstruction of the nose, depending on the depth of the cancer. Mohs surgeons often co-manage such cases with their counterparts in ENT or plastic surgery.

The differential for this lesion included keratoacanthoma, squamous cell carcinoma, and cyst.

TAKE-HOME LEARNING POINTS
• Basal cell carcinoma (BCC) is typically very slow growing, but there are exceptions.

• Social isolation can allow lesions and conditions to advance before they’re detected.

• Shave biopsy is indicated only for possible nonmelanoma skin cancers. Possible melanomas require excision, multiple punches, or deep shave to establish depth (a key prognostic factor).

• Rapid growth of BCCs suggests more aggressive clinical behavior, which in turn suggests the need for controlled margins to ensure complete removal. 

Joe R. Monroe, MPAS, PA

Bipolar type I patients’ relatives lack brain connectivity disruptions


Anatomical connectivity in discrete frontal regions of the brain is disrupted in bipolar I disorder patients, but not in mentally healthy relatives of such patients, according to a study.

The researchers looked for connectivity abnormalities in the brains of multiply affected bipolar I disorder families “to assess the utility of dysconnectivity as a biomarker and its endophenotypic potential.” Tractography was done on magnetic resonance diffusion images of the brains of 19 bipolar I patients in remission, 21 of the patients’ first-degree relatives who did not have bipolar I, and 18 unrelated controls who also did not have bipolar I. A connectivity matrix was generated for each patient, and the Brain Connectivity Toolbox was used to extract neural network metrics.
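
The connectivity matrices and network metrics described above were analyzed with the (MATLAB) Brain Connectivity Toolbox. Purely as an illustration of the kind of graph measures involved, the sketch below computes a few common metrics from a weighted connectivity matrix in Python with NetworkX; the matrix here is random, and this is not the toolbox the authors used.

# Illustrative only: simple graph metrics from a weighted, symmetric connectivity matrix.
# The study used the MATLAB Brain Connectivity Toolbox; this NetworkX code is a stand-in.
import numpy as np
import networkx as nx

n_regions = 90                         # hypothetical parcellation size
rng = np.random.default_rng(0)
A = rng.random((n_regions, n_regions))
A = (A + A.T) / 2                      # make weights symmetric (undirected connections)
np.fill_diagonal(A, 0)                 # no self-connections

G = nx.from_numpy_array(A)             # weighted, undirected graph

metrics = {
    "mean_strength": float(np.mean([d for _, d in G.degree(weight="weight")])),
    "mean_clustering": nx.average_clustering(G, weight="weight"),
    "global_efficiency": nx.global_efficiency(G),  # computed on the unweighted topology
}
print(metrics)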

“Whole brain analysis revealed no differences between groups,” according to Natalie J. Forde, a PhD candidate at the University Medical Centre Groningen (the Netherlands), and her colleagues. “Analysis of specific mainly frontal regions, previously implicated as potentially endophenotypic by functional magnetic resonance imaging analysis of the same cohort, revealed a significant effect of group in the right medial superior frontal gyrus and left middle frontal gyrus driven by reduced [organization] in [bipolar I] patients, compared with controls.”

Read the full study in Psychiatry Research: Neuroimaging (doi: 10.1016/j.pscychresns.2015.08.004).

Reducing side effects of CAR T-cell therapy


Researchers have reported progress in developing an “on/off switch” to temper the over-active immune response and severe toxicities that can result from chimeric antigen receptor (CAR) T-cell therapy.

The team created CAR T cells that are “off” by default, homing to CD19-expressing cancer cells but remaining inactive until a small molecule is administered.

This system effectively targeted leukemia and lymphoma cells in preclinical experiments.

But the researchers said it’s not ready for clinical testing, as the small-molecule “trigger” is expensive and lasts only 4 hours.

Still, the team believes this type of CAR T-cell therapy could eventually help doctors gradually increase the immune response to treatment and therefore avoid toxicities such as cytokine release syndrome and tumor lysis syndrome.

Wendell Lim, PhD, of the University of California, San Francisco, and his colleagues described this work in Science.

“T cells are really powerful beasts, and they can be lethal when they’re activated,” Dr Lim said. “We’ve needed a remote control system that retains the power of these engineered T cells but allows us to communicate specifically with them and manage them while they’re in the body.”

To that end, he and his colleagues created a CAR that requires both an antigen and a small molecule for activation. They dubbed it the “ON-switch CAR.”

ON-switch CAR

The researchers explained that the ON-switch CAR consists of 2 parts that assemble in a small molecule-dependent manner.

Part 1 consists of a CD8α signal sequence, Myc epitope, anti-CD19 single-chain variable fragment, CD8α hinge and transmembrane domain, 4-1BB costimulatory motif, and FK506 Binding Protein (FKBP) domain for heterodimerization.

Part 2 consists of the ectodomain of DNAX-activating protein 10 (DAP10) for homodimerization, CD8α transmembrane domain for membrane anchoring, 4-1BB costimulatory motif, T2089L mutant of FKBP-rapamycin binding (FRB*) domain, T-cell receptor CD3ζ signaling chain, and mCherry tag.

The FKBP and FRB* domains heterodimerize in the presence of the rapamycin analog AP21967, referred to as the “rapalog.”

The researchers conducted in vitro experiments with this ON-switch CAR in cells expressing CD19 (K562, Raji, and Daudi).

The ON-switch CAR T cells homed to CD19-expressing cells but did nothing else until the rapalog was added. Once the rapalog was added, CD19-expressing cells were killed off in a dose-dependent manner.

The team observed similar results in mice with leukemia. Leukemia cells (K562) were selectively eliminated by the ON-switch CARs only after the rapalog had been administered.

Dr Lim stressed that this work should be considered a proof of principle, as the rapalog has too short a half-life to be clinically useful. Nevertheless, he believes the research provides the foundation for practical remote control of CAR T cells.

Members of his lab are exploring other techniques to accomplish this goal, such as controlling CAR T-cell activation with light.

The team is also working to reduce side effects of CAR T-cell therapy by introducing multiple CARs into T cells so the cells will respond to multiple characteristics that are distinctive to an individual patient’s tumor, rather than to a single protein that may also be found on normal cells.

“That we can engineer CAR T cells to have slightly different, quite powerful effects—even if for a subset of patients or for certain types of cancer—is really remarkable,” Dr Lim said. “And this is just the tip of the iceberg.”

Studies raise concerns about drug approval process


Two newly published studies have raised concerns about the drug approval process in the US.

One study showed that, over the past two decades, the US Food and Drug Administration (FDA) has significantly increased its use of expedited development or review programs when approving new drugs.

Investigators said this increase cannot be attributed to an increase in the number of innovative new drug classes.

The other study revealed “wide variations” in evidence supporting the approval of supplemental drug applications.

Aaron S. Kesselheim, MD, of Brigham and Women’s Hospital in Boston, Massachusetts, and his colleagues conducted these studies and reported the results in The BMJ.

Authors of a related editorial wrote that these studies “give cause for concern about whether most new drugs are any more effective than existing products or whether their safety has been adequately assessed.”

Expedited approval

For the first study, the investigators looked at the FDA’s use of expedited development and review programs for drugs newly approved between 1987 and 2014. This included the orphan designation, fast track designation, priority review, and accelerated approval programs.

The FDA approved 774 drugs during the study period, and 33% of these were first-in-class agents. Priority review (43%) was the most-used program, followed by orphan designation (25%), fast track designation (19%), and accelerated approval (9%).

The investigators observed an increase of 2.6% per year in the number of expedited review and approval programs granted to each newly approved drug (P<0.001) and a 2.4% increase in the proportion of drugs associated with at least one of the programs (P=0.009).

The team noted that “this trend is being driven by drugs that are not first in class and thus potentially less innovative.”

They also said that, by the end of the study period, most newly approved drugs were associated with at least one of the programs. The peak was in 2005, when 75% (15/20) of newly approved drugs were associated with at least one program.

Supplemental approval

For the second study, Dr Kesselheim and his colleagues evaluated the quality of evidence underpinning FDA approval of supplemental drug applications (uses beyond a drug’s original indication) between 2005 and 2014.

The team assessed 295 supplemental drug approvals. They found a lack of trials using clinical outcome endpoints, a lack of trials including active comparators, and differences in the evidence according to types of approval.

Thirty percent of drug approvals for new indications were supported by trials with active comparators, as were 51% of modified-use approvals and 11% of approvals expanding the patient population (P<0.001).

Thirty-two percent of drug approvals for new indications were supported by trials using clinical outcome endpoints, as were 30% of modified-use approvals and 22% of expanded-population approvals (P=0.29).

The investigators said these findings “underscore the need for a robust system of post-approval drug monitoring for efficacy and safety, timely confirmatory studies, and re-examination of existing legislative incentives to promote the optimal delivery of evidence-based medicine.”

Edoxaban to be made available for NVAF


The UK’s National Institute for Health and Care Excellence (NICE) has issued a final guidance recommending the oral anticoagulant edoxaban tosylate (Lixiana) as an option for preventing stroke and systemic embolism in adults with non-valvular atrial fibrillation (NVAF).

The patients must have one or more risk factors for stroke, including congestive heart failure, hypertension, diabetes, prior stroke or transient ischemic attack, and age of 75 years or older.

In the UK, such patients are generally treated with warfarin or the newer oral anticoagulants dabigatran, rivaroxaban, and apixaban.

NICE decided that edoxaban should be added to that list because data suggest the drug is a clinically and cost-effective treatment option for these patients.

Edoxaban should be available on the National Health Service within 3 months of the date NICE’s final guidance was issued, September 23.

NICE’s guidance says the decision about whether to start treatment with edoxaban should be made after an informed discussion between the clinician and the patient about the risks and benefits of edoxaban compared with warfarin, apixaban, dabigatran, and rivaroxaban.

For patients considering switching from warfarin, edoxaban’s potential benefits should be weighed against its potential risks, taking into account the patient’s level of international normalized ratio control.

Clinical effectiveness

NICE’s conclusion that edoxaban is clinically effective was based primarily on results of the ENGAGE AF-TIMI 48 trial. In this trial, researchers compared edoxaban and warfarin as prophylaxis for stroke or systemic embolism in patients with NVAF.

Results suggested edoxaban was at least non-inferior to warfarin with regard to efficacy, and edoxaban was associated with a significantly lower rate of major and fatal bleeding.

A committee advising NICE also reviewed a meta-analysis prepared by Daiichi Sankyo Co., Ltd., the company developing edoxaban.

The goal of the meta-analysis was to compare edoxaban with rivaroxaban, apixaban, and dabigatran. The analysis included 4 trials: ENGAGE AF-TIMI 48, ARISTOTLE (apixaban), RE-LY (dabigatran), and ROCKET-AF (rivaroxaban). All 4 trials had a warfarin comparator arm.

The results of the meta-analysis indicated that, for the composite endpoint of stroke and systemic embolism, efficacy was similar for high-dose edoxaban and the other new oral anticoagulants.

However, edoxaban significantly reduced major bleeding risk by 24% compared to rivaroxaban, 28% compared to dabigatran at 150 mg, and 17% compared to dabigatran at 110 mg. Major bleeding rates were similar for high-dose edoxaban and apixaban.

The committee advising NICE said these results should be interpreted with caution, but edoxaban is unlikely to be different from rivaroxaban, apixaban, and dabigatran in clinical practice.

Cost-effectiveness

Edoxaban costs £58.80 for a 28-tablet pack (60 mg or 30 mg), and the daily cost of treatment is £2.10 (excluding value-added tax). However, costs may vary in different settings because of negotiated procurement discounts.

The committee advising NICE analyzed cost information and concluded that edoxaban is cost-effective compared with warfarin, but there is insufficient evidence to distinguish between the clinical and cost-effectiveness of edoxaban and the other new oral anticoagulants.

Childhood cancer increases material hardship


Results of a small study reveal the material hardships families experience when a child is undergoing cancer treatment.

Researchers surveyed 99 families of children with cancer.

Six months after the child’s diagnosis, 29% of the families reported having at least one household material hardship, such as food, housing, or energy insecurity.

Twenty percent of the families had reported having such hardships at the time of the child’s diagnosis.

Kira Bona, MD, of Dana-Farber/Boston Children’s Cancer and Blood Disorders Center in Massachusetts, and her colleagues reported results from this survey in Pediatric Blood & Cancer.

The researchers surveyed 99 families of pediatric cancer patients treated at Dana-Farber/Boston Children’s, first within a month of diagnosis and then 6 months later.

At diagnosis, 20% of the families were low-income, defined as a household income at or below 200% of the federal poverty level. Six months later, an additional 12% suffered income losses that pushed them into the low-income group.

At 6 months, 25% of the families said they had lost more than 40% of their household income due to treatment-related work disruptions. A total of 56% of adults who supported their families experienced a disruption of their work.

This included 15% of parents who either quit their jobs or were laid off as a result of their child’s illness, as well as 37% of respondents who cut their hours or took a leave of absence. Thirty-four percent of these individuals were paid during their leave.

At 6 months, 29% of families said they had at least one material hardship. Twenty percent reported food insecurity, 17% reported energy insecurity, and 8% reported housing insecurity.*

These findings surprised researchers, who said they expected lower levels of need at their center because it provides psychosocial support for patients and has resource specialists to help families facing financial difficulties.

“What it says is that even at a well-resourced, large referral center, about a third of families are reporting food, housing, or energy insecurity 6 months into treatment,” Dr Bona said. “If anything, the numbers in our study are an underestimate of what might be seen at less well-resourced institutions, which was somewhat surprising to us.”

By focusing on specific material hardships, which can be addressed through governmental or philanthropic support, the researchers hope they have identified variables that are easier for clinicians to ameliorate than overall income.

Dr Bona said subsequent research will examine whether material hardship has the same effect on patient outcomes as low-income status.

“If household material hardship is linked to poorer outcomes in pediatric oncology, just like income is, then we can design interventions to fix food, housing, and energy insecurity,” she said. “It’s not clear what you do about income in a clinical setting.”

*Definitions for household material hardships were as follows.

Food insecurity was measured via the US Household Food Security Survey Module: Six-Item Short Form, which includes questions to assess whether respondents:

  • sometimes/often do not have enough food to eat
  • sometimes/often cannot afford to eat balanced meals
  • sometimes/often worry about having enough money to buy food, etc.

Families met the definition for housing insecurity if they reported any of the following:

  • crowding (defined as >2 people per bedroom in the home)
  • multiple moves (>1 move in the prior year)
  • doubling up (having to live with other people, even temporarily, because of financial difficulties in the past 6 months).

Families met the definition for energy insecurity if, in the prior 6 months, they had experienced any of the following:

  • received a letter threatening to shut off the gas/electricity/oil to their house because they had not paid the bills
  • had the gas/electric/oil company shut off electricity or refuse to deliver oil/gas because they had not paid the bills
  • had any days that their home was not heated/cooled because they couldn’t pay the bills
  • had ever used a cooking stove to heat their home because they couldn’t pay the bills.

Patient‐Oriented Discharge Instructions

Article Type
Changed
Mon, 01/02/2017 - 19:34
Display Headline
Co‐creating patient‐oriented discharge instructions with patients, caregivers, and healthcare providers

The period following discharge from the hospital is a vulnerable time for patients and can result in adverse events, including avoidable emergency room visits and rehospitalizations.[1] Approximately 8.5% of all hospital visits result in readmission within 30 days.[2] Poor communication of discharge information is an even greater problem for patients with language barriers or limited health literacy, particularly in ethnically diverse communities where up to 60% of residents may speak languages other than English or French at home.[3] Health literacy is defined as the degree to which individuals can obtain, process, and understand the basic health information and services needed to make appropriate health decisions.[4] An estimated 55% of Canadians between the ages of 16 and 65 years have limited health literacy, and only 12% of those above the age of 65 years have adequate health literacy skills.[5]

Previous authors have demonstrated the benefits of using multiple interventions, including nonverbal communication, when designing for individuals with limited literacy.[6] Visual aids have been shown to be particularly useful to non‐English speakers and patients with limited health literacy.[7] In particular, research on medication tools for patients with limited health literacy has shown that illustrated schedules can be helpful.[8]

Typical discharge summaries are documents transmitted from the hospital to outpatient physicians to coordinate clinical care. The form codesigned by our team is intended to complement the summary, facilitate patient education, and provide instructions that patients can refer to after discharge.

PURPOSE

The objective of this work was to design instructions for patients going home from the hospital with relevant and actionable information, presented in an easily understandable and usable form.

METHODS

We used participatory action methodology,[9] an approach to research that encourages researchers and those who will benefit from the research to work together across all phases of research, by engaging end‐users of patient instructions from the beginning of the project. Mixed methods were used to understand needs, develop content and design, and iteratively evaluate and refine the instructions. An advisory team of patients, physicians, pharmacists, designers, researchers, and patient‐education professionals gave input into study design and execution.

Although formal inclusion and exclusion criteria were not used, care was taken to engage patients with language barriers, limited health literacy, and mental health issues.

Key methods used are listed below. See Figure 1 for a timeline of the process used to develop the instructions.

Figure 1. Project timeline. Abbreviations: TC LHIN, Toronto Central Local Health Integration Network.

Understanding the Current Patient Experience of Discharge

Key methods included: (1) Patient experience mapping[10] (a process of capturing and communicating complex patient interactions and their experience in the system by having interdisciplinary groups create a map of the patient experience and feelings through a mock discharge scenario). (2) A cultural probe[11] (patients selected as having minor language barriers or limited health literacy were given a journal and disposable camera to document their time at home after discharge). Patients were asked how confident they were in filling out medical forms by themselves as a way of screening for probable health literacy limitations.[12]

Content and Design

The instructions were developed using a codesign methodology,[13] where researchers and the end‐users of a product design the product together. In our case, teams of patients, healthcare providers, and designers worked together to create prototypes using hypothetical patient cases.

Iteratively Evaluating and Refining the Design

The prototype went through 3 design iterations (Figure 1). Feedback from patients, caregivers, and providers using focus groups, interviews, and surveys was used to refine the content and design and validate symbols for each section.

Key methods included: (1) Two focus groups with hard-to-reach patient groups that would not otherwise participate in interviews or surveys. One was with Cantonese-speaking patients, facilitated by an interpreter; Cantonese is a common language in Toronto, yet the language barrier typically precludes these patients from participating in research. The other was with patients admitted to the hospital's psychiatry unit, another group that is typically excluded from research studies. (2) A usability test of a paper-based version of the instructions across 3 large academic hospitals, in which physicians and residents in general internal medicine units filled out the instructions by hand for each discharged patient.

RESULTS

Forty-four patients, 12 caregivers, 30 healthcare personnel, 7 patient-education professionals, and 8 designers were involved in designing the template (see Figure 2), which was based on best practices in information design, graphic design, and patient education.

Figure 2. Template.

Understanding the Patient Experience of Discharge

The analysis of the patient experience at discharge revealed the following themes:

(1) Difficulties in understanding and retaining verbal instructions in the immediate postdischarge period because of exhaustion. (2) Patient concerns at discharge including feeling unprepared to leave the hospital. (3) Family members and caregivers play a large role in a patient's life, which becomes more significant in the postdischarge phase. This was made clear through journal entries from patients using cultural probes.

Content and Design

Patients wanted to know information that was relevant and actionable. They consistently mentioned the following information as being most important: (1) medication instructions, (2) follow‐up appointments with phone numbers, (3) normal expected symptoms, danger signs, and what to do, (4) lifestyle changes and when to resume activities, and (5) information and resources to have handy.

Patient-education specialists on the team, as well as patients and caregivers, advised that the instructions be written at a fifth- or sixth-grade reading level, be directed to the patient, use large fonts, include illustrations of medication schedules, and use headings that are meaningful to the patient. In addition, patients wanted white space to take notes, an activity that has been shown to improve comprehension and recall.[14]

Patients felt that having symbols for each section helped make the form more readable by differentiating sections and by providing a recognizable image for patients who could not read English.

Iteratively Evaluating and Refining the Design

Usability test data and surveys of the final version of the template showed that patients and providers felt they would benefit from using the instructions: 94.8% of patients and 75% of providers said the instructions would be helpful to have at discharge from the hospital. Physicians took an average of 9 minutes to fill out the form by hand.

DISCUSSION

This initiative is an example of engaging patients and caregivers as active partners in the healthcare system. Patients and caregivers were engaged as codesigners of the form from the outset and continuously throughout.

The instructions can be given to patients and caregivers at discharge as both a teaching tool and a reference that can be reviewed when at home. Process considerations are very important. As family and caregivers play an instrumental role in postdischarge care, the instructions should be given whenever possible in the presence of family. The form is a simple addition to any discharge process. It can be filled out by a single provider, a multidisciplinary team, or even the patient while undergoing discharge teaching. The time and resources to fill out the instructions will vary depending on the discharge process in place. Good discharge practices,[15] such as engaging the patient in the conversation and teach back, should be followed.

The form has been licensed under Creative Commons so that any healthcare organization can use and adapt the materials to meet the needs of its patients.

The development of the form is only the first step in a larger project. Almost all of the study participants involved in the initiative were from the general internal medicine wards in downtown Toronto. We do not know yet if the results can be generalized to different patient and provider populations.

The instructions are currently being implemented in 8 hospitals throughout Toronto, spanning rehabilitation, acute care, surgery, and pediatrics. The form appears to have been appropriate and generalizable to all of these settings, but results from this multisite implementation on patient and provider experience or health outcomes are not available yet. Anticipated barriers include determining who has the responsibility for filling out the instructions and validating the accuracy of the medication list.

Discharge instructions serve many purposes. Though previous authors have developed checklists to ensure critical discharge information is included in discharge teaching, the creation of a patient‐oriented form, codesigned with patients and caregivers to provide the information that patients explicitly want at discharge, has been lacking. Using participatory action research, mixed methods, and codesign methodology, and including hard‐to‐reach patient groups was helpful in creating a design that will provide patients with key information at discharge in an easy‐to‐understand format.

Acknowledgements

The authors acknowledge the financial support and guidance of the Toronto Central Local Health Integration Network. The project was advised by a number of individuals, namely: Cynthia Damba, Michelle Ransom, Paolo Korre, Irene Chong, Dawn Lim, Helen Kang, Derek Leong, Elizabeth Abraham, Elke Ruthig, Grace Eagan, Vivian Lo, Rachel Solomon, Kendra Delicaet, Sara Ahmadi, and Jess Leung.

Disclosures: The funding provided by the Toronto Central Local Health Integration Network that supported much of the work contained in this article also paid for a portion of the salaries of Shoshana Hahn‐Goldberg, Tai Huynh, and Najla Zahr. There are no other conflicts of interest to report.

References
  1. Forster AJ, Murff HJ, Peterson JF, Gandhi TK, Bates DW. The incidence and severity of adverse events affecting patients after discharge from hospital. Ann Intern Med. 2003;138:161-167.
  2. Jack BW, Chetty VK, Anthony D, et al. A reengineered hospital discharge program to decrease rehospitalization: a randomized trial. Ann Intern Med. 2009;150(3):178-187.
  3. Statistics Canada. Visual census. 2011 census. Ottawa. Available at: http://www12.statcan.gc.ca/census-recensement/index-eng.cfm. Accessed September 19, 2014.
  4. Committee on Health Literacy, Board on Neuroscience and Behavioral Health, Institute of Medicine. Health literacy: a prescription to end confusion. Washington, DC: National Academies Press; 2004. Available at: http://www.collaborationhealthcare.com/7-20-10IOMHealthLiteracyExecutiveSummary.pdf. Accessed September 19, 2014.
  5. Rootman I, Gordon-El-Bihbety D. A vision for a health literate Canada: report of the Expert Panel on Health Literacy. 2008. Available at: http://www.cpha.ca/uploads/portals/h-l/report_e.pdf. Accessed September 19, 2014.
  6. Sheridan S, Halpern D, Viera A, Berkman N, Donahue K, Crotty K. Interventions for individuals with low health literacy: a systematic review. J Health Commun. 2011;16:30-54.
  7. Schillinger D, Machtinger EL, Wang F, Palacios J, Rodriguez M, Bindman A. Language, literacy, and communication regarding medication in an anticoagulation clinic: a comparison of verbal vs. visual assessment. J Health Commun. 2006;11(7):651-664.
  8. Kripalani S, Robertson R, Love-Ghaffari M, et al. Development of an illustrated medication schedule as a low-literacy patient education tool. Patient Educ Couns. 2007;66(3):368-377.
  9. Turnbull AP, Friesen BJ, Ramirez C. Participatory action research as a model for conducting family research. J Assoc Pers Sev Handicaps. 1998;23(3):178-188.
  10. LaVela S, Gallan A. Evaluation and measurement of patient experience. Patient Exp J. 2014;1(1):28-36.
  11. Gaver B, Dunne T, Pacenti E. Design: cultural probes. Interactions. 1999;6(1):21-29.
  12. Powers B, Trinh J, Bosworth H. Can this patient read and understand written health information? JAMA. 2010;304(1):76-84.
  13. Sanders E, Stappers P. Co-creation and the new landscapes of design. Int J Cocreat Des Arts. 2008;4(1):5-18.
  14. Mueller P, Oppenheimer D. The pen is mightier than the keyboard: advantages of longhand over laptop note taking. Psychol Sci. 2014;25(6):1159-1168.
  15. Soong C, Daub S, Lee J, et al. Development of a checklist of safe discharge practices for hospital patients. J Hosp Med. 2013;8:444-449.

Journal of Hospital Medicine. 2015;10(12):804-807. © 2015 Society of Hospital Medicine.

HCAHPS Surveys and Patient Satisfaction

Article Type
Changed
Mon, 05/15/2017 - 22:48
Display Headline
Effect of HCAHPS reporting on patient satisfaction with physician communication

The Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) is the first national, standardized, publicly reported survey of patients' perception of hospital care. HCAHPS mandates a standard method of collecting and reporting patients' perceptions of health care to enable valid comparisons across all hospitals.[1, 2, 3] Voluntary collection of HCAHPS data for public reporting began in July 2006, mandatory collection of data for hospitals that participate in the Inpatient Prospective Payment System of Medicare began in July 2007, and public reporting of mandated HCAHPS scores began in 2008.[2]

Using data from the first 2-year period, an earlier study reported an increase in HCAHPS patient satisfaction scores in all domains except satisfaction with physician communication.[4] Since then, data from additional years have become available, allowing assessment of hospitalized patients' satisfaction with physician communication over a longer period. Our objective, therefore, was to examine changes in patient satisfaction with physician communication from 2007 to 2013, the last reported year, and to explore hospital and local population characteristics that may be associated with patient satisfaction.

METHODS

Publicly available data from 3 sources were used for this study. Patient satisfaction scores with physician communication and hospital characteristics were obtained from the HCAHPS data files available in the Hospital Compare database maintained by the Centers for Medicare and Medicaid Services (CMS).[5] HCAHPS files contain data for the preceding 12 months and are updated quarterly. We used the files that reported data from the first through the fourth quarter of each year from 2007 to 2013. The HCAHPS survey contains 32 questions, of which 3 are about physician communication.[6] We used the percentage of survey participants who responded that physicians always communicated well as the measure of patient satisfaction with physician communication (the other 2 questions were not included). Hospitals that reported data on patient satisfaction during 2007 were divided into quartiles based on their satisfaction scores, and this quartile allocation was maintained during each subsequent year. Survey response rate, in percentage, was obtained from the HCAHPS data files for each year. Hospital characteristics, such as ownership of the hospital, teaching hospital status, and designation as a critical access hospital, were obtained from the Hospital Compare website. Hospital ownership was defined as government (owned by federal, state, Veterans Affairs, or tribal authorities), for profit (owned by physicians or another proprietary entity), or nonprofit (owned by a nonprofit organization such as a church). A hospital was considered a teaching hospital if it obtained graduate medical education funding from CMS.
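
As an illustration of the fixed quartile allocation described above, the following is a minimal Python/pandas sketch; the file name and column names (hospital_id, year, pct_always) are hypothetical stand-ins for the HCAHPS extract, not the authors' code.

```python
import pandas as pd

# Hypothetical long-format HCAHPS extract: one row per hospital per year, with the
# percentage of respondents who said physicians "always" communicated well.
scores = pd.read_csv("hcahps_physician_communication.csv")  # hospital_id, year, pct_always

# Quartiles are assigned once, from the 2007 scores ...
baseline = scores.loc[scores["year"] == 2007, ["hospital_id", "pct_always"]].copy()
baseline["quartile_2007"] = pd.qcut(
    baseline["pct_always"], q=4, labels=["lowest", "3rd", "2nd", "highest"]
)

# ... and that fixed allocation is carried forward to every subsequent year.
scores = scores.merge(
    baseline[["hospital_id", "quartile_2007"]], on="hospital_id", how="left"
)
```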

We obtained local population data from the 2010 decennial census files and from the American Community Survey 5-year data profile for 2009 to 2013; both datasets are maintained by the United States Census Bureau.[7] The census is mandated by Article I, Section 2 of the United States Constitution and takes place every 10 years. The American Community Survey is also a mandatory, ongoing statistical survey that samples a small percentage of the population every year, giving communities the information they need to plan investments and services. We chose to use 5-year estimates because these are more precise and are reliable for analyzing small populations. For each zip code, we extracted data on total population, percentage of African Americans in the population, median income, poverty level, and insurance status from the Census Bureau data files.

Local population characteristics at the zip code level were mapped to hospitals using hospital service area (HSA) crosswalk files from the Dartmouth Atlas of Health Care.[7, 8] The Dartmouth Atlas defined 3436 HSAs by assigning each zip code to the hospital area where the greatest proportion of its Medicare residents were hospitalized. The number of acute care hospital beds and the number of physicians within the HSA were also obtained from the Dartmouth Atlas. Merging data from these 3 sources generated a dataset that contained information about patient satisfaction scores from a particular hospital, hospital characteristics, and population characteristics of the healthcare market.
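
The linkage step might look something like the following pandas sketch. The file layouts and column names are hypothetical, and the unweighted averaging of zip-level percentages up to the HSA is an assumption made only for illustration (a population-weighted average may be preferable).

```python
import pandas as pd

# Hypothetical inputs
census = pd.read_csv("census_zip_level.csv")         # zip, population, pct_black, median_income, pct_poverty, pct_insured
crosswalk = pd.read_csv("dartmouth_zip_to_hsa.csv")  # zip, hsa_id
hospitals = pd.read_csv("hcahps_hospitals.csv")      # hospital_id, hsa_id, ownership, teaching_status, ...

# Roll zip-level census measures up to the hospital service area (HSA).
zip_hsa = census.merge(crosswalk, on="zip")
hsa_profile = zip_hsa.groupby("hsa_id").agg(
    hsa_population=("population", "sum"),
    pct_black=("pct_black", "mean"),          # unweighted mean across zips, for illustration only
    median_income=("median_income", "mean"),
    pct_poverty=("pct_poverty", "mean"),
    pct_insured=("pct_insured", "mean"),
).reset_index()

# Attach the HSA-level market profile to each hospital's record.
analysis = hospitals.merge(hsa_profile, on="hsa_id", how="left")
```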

Data were summarized as mean and standard deviation (SD). Three-level hierarchical regression models were examined to account for the dependence of repeated observations from the same hospital and for the correlation between hospitals within the same state due to similar regulations, and to assess the relative contributions of time within hospitals, hospitals within states, and states to variation in satisfaction scores.[9, 10] At the within-hospital level, survey response rate was used as a time-varying variable in addition to the year of observation. However, only the year of observation was used to explore differences in patient satisfaction trajectories between hospitals. At the hospitals-within-states level, hospital characteristics and local population characteristics within the HSA were included. At the state level, only random effects were obtained, and no additional variables were included in the models.

Four models were built to assess the relationship between satisfaction scores and predictors. The basic model used only random effects, without any predictors, to determine the relative contribution of each level (within hospitals, hospitals within states, and across states) to variation in patient satisfaction scores, consistent with a variance components analysis. The first model added the year of observation as a predictor at the within‐hospital level to examine trends in patient satisfaction scores during the observation period. The second model added baseline satisfaction quartiles to the first model, and the remaining predictors (HSA population, African American percentage in the HSA, survey response rate, HSA median income, hospital ownership, percentage with any insurance in the HSA, acute care hospital beds in the HSA, teaching hospital status, and percentage of people living in poverty within the HSA) were added in the third model. Quartiles for baseline satisfaction were generated using satisfaction scores from 2007. As more hospitals reported results for 2008 than for 2007 (3746 vs 2273), we conducted a sensitivity analysis using satisfaction quartiles in 2008 as baseline and examined subsequent trends over time for the 4 models noted above. All multilevel models were specified using the nlme package in R to account for clustering of observations within hospitals and of hospitals within states, using hospital‐ and state‐level random effects.[11]
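A rough sketch of how such a 3‐level specification might look with nlme's lme(), nesting hospitals within states, is shown below. This is illustrative only, with hypothetical variable names (satisfaction, year_c, baseline_quartile, and the covariates), and is not the authors' code.

```r
library(nlme)

# Variance components (null) model: random intercepts for state and hospital within state
m0 <- lme(satisfaction ~ 1,
          random = ~ 1 | state/hospital_id,
          data = analysis, method = "REML")

# Model 1: linear time trend, with random intercepts and time slopes at both levels
m1 <- lme(satisfaction ~ year_c,                      # year_c = years since 2007
          random = ~ year_c | state/hospital_id,
          data = analysis, method = "REML")

# Model 2: baseline quartile main effects plus quartile-by-time interactions
m2 <- lme(satisfaction ~ year_c * baseline_quartile,
          random = ~ year_c | state/hospital_id,
          data = analysis, method = "REML")

# Model 3: adds hospital- and HSA-level covariates
m3 <- lme(satisfaction ~ year_c * baseline_quartile + response_rate + population_10k +
            pct_black + income_10k + ownership + pct_insured + beds_per_1k +
            teaching + pct_poverty,
          random = ~ year_c | state/hospital_id,
          data = analysis, method = "REML")
summary(m3)
```

Under this kind of setup, the variance components of the null model (m0) would underlie the level-by-level partition of variation reported in the Results.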

RESULTS

Of the 4353 hospitals with data for the 7‐year period, the largest number were in the Southern region (South = 1669, Midwest = 1239, Northeast = 607, West = 838). Texas had the largest number of hospitals (N = 358), followed by California (N = 340). The largest number of hospitals were nonprofit (N = 2637, 60.6%). Mean (SD) patient satisfaction with physician communication was 78.9% (5.7%) in 2007 and increased to 81.7% (5.4%) in 2013. Throughout the observation period, the highest patient satisfaction was in the South (80.6% [6.6%] in 2007 and 83.2% [5.4%] in 2013). Of the 2273 hospitals that reported data in 2007, the mean (SD) satisfaction score of the lowest quartile was 72% (3.2%) and that of the highest quartile was 86.9% (3.2%) (Table 1). As a group, hospitals in the highest quartile in 2007 still had higher satisfaction scores in 2013 than hospitals in the lowest quartile (85% [4.2%] vs 77% [3.6%], respectively). Only 4 of the 584 hospitals in the lowest quartile in 2007 climbed to the highest quartile by 2013, whereas 22 hospitals that were in the highest quartile in 2007 dropped to the lowest quartile by 2013.

Table 1. Characteristics of Hospitals by Quartiles of Satisfaction Scores in 2007

Characteristic | Highest Quartile | 2nd Quartile | 3rd Quartile | Lowest Quartile
Total no. of hospitals, N (%) | 461 (20.3) | 545 (24.0) | 683 (30.0) | 584 (25.7)
Hospital ownership, N (%)
  For profit | 50 (14.4) | 60 (17.3) | 96 (27.7) | 140 (40.5)
  Nonprofit | 269 (17.4) | 380 (24.6) | 515 (33.4) | 378 (24.5)
  Government | 142 (36.9) | 105 (27.3) | 72 (18.7) | 66 (17.1)
HSA population, in 1,000, median (IQR) | 33.2 (70.5) | 88.5 (186) | 161.8 (374) | 222.2 (534)
Racial distribution of HSA population, median (IQR)
  White, % | 82.6 (26.2) | 82.5 (28.5) | 74.2 (32.9) | 66.8 (35.3)
  Black, % | 4.3 (21.7) | 3.7 (16.3) | 5.9 (14.8) | 7.4 (12.1)
  Other, % | 6.4 (7.1) | 8.8 (10.8) | 12.9 (19.8) | 20.0 (33.1)
HSA median income in $1,000, mean (SD) | 44.6 (11.7) | 52.4 (17.8) | 58.4 (17.1) | 57.5 (15.7)
Satisfaction scores (at baseline), mean (SD) | 86.9 (3.1) | 81.4 (1.1) | 77.5 (1.1) | 72.0 (3.2)
Satisfaction scores (in 2013), mean (SD) | 85.0 (4.3) | 82.0 (3.4) | 79.7 (3.0) | 77.0 (3.5)
Survey response rate (at baseline), mean (SD) | 43.2 (19.8) | 34.5 (9.4) | 32.6 (8.0) | 30.3 (7.8)
Survey response rate (2007-2013), mean (SD) | 32.8 (7.8) | 32.6 (7.5) | 30.8 (6.5) | 29.3 (6.5)
Percentage with any insurance in HSA, mean (SD) | 84.0 (5.4) | 84.8 (6.6) | 85.5 (6.3) | 83.9 (6.6)
Teaching hospital, N (%) | 42 (9.1) | 155 (28.4) | 277 (40.5) | 274 (46.9)
Acute care hospital beds in HSA (per 1,000), mean (SD) | 3.2 (1.2) | 2.6 (0.8) | 2.5 (0.8) | 2.4 (0.7)
Number of physicians in HSA (per 100,000), mean (SD) | 190 (36) | 197 (43) | 204 (47) | 199 (45)
Percentage in poverty in HSA, mean (SD)[7] | 16.9 (6.6) | 15.5 (6.5) | 14.4 (5.7) | 15.5 (6.0)

NOTE: Abbreviations: HSA, hospital service area; IQR, interquartile range; SD, standard deviation.

Using variance components analysis, we found that 23% of the variation in patient satisfaction scores with physician communication was due to differences between states, 52% to differences between hospitals within states, and 24% to changes over time within hospitals. When examining time trends in satisfaction during the 7‐year period without adjusting for other predictors, we found a statistically significant increasing trend in patient satisfaction with physician communication (0.33% per year; P < 0.001). We also found a significant negative correlation (−0.62, P < 0.001) between the random effects for baseline satisfaction (intercept) and change over time (slope), suggesting that the initial level of patient satisfaction with physicians at a hospital was negatively correlated with its subsequent change in satisfaction scores during the observation period.
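The partition reported above is simply each level's share of the total variance from the random‐effects‐only model. The sketch below uses made‐up variance values, chosen only so that the shares come out near the reported 23%, 52%, and 24%; they are not the actual estimates.

```r
# Hypothetical variance components on the satisfaction-score scale (illustrative values only)
var_state    <- 7.6    # between states
var_hospital <- 17.0   # between hospitals within states
var_within   <- 7.9    # within hospitals over time (residual)

shares <- c(state = var_state, hospital = var_hospital, within = var_within)
round(100 * shares / sum(shares))   # roughly 23, 52, and 24
```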

When examining the effect of satisfaction ranking in 2007, hospitals in the lowest quartile of patient satisfaction in 2007 had a significantly larger increase in satisfaction scores during the subsequent period than hospitals in each of the other 3 quartiles (all P < 0.001, Table 2). The difference in the rate of increase in satisfaction scores was greatest between the lowest and the highest quartiles (1.10% per year; P < 0.001). In fact, the highest quartile had a statistically significant absolute decrease in patient satisfaction during the observation period (0.23% per year; P < 0.001, Figure 1).
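Because the lowest quartile is the reference category in Table 2, the quoted rates follow from simple arithmetic on the Model 2 coefficients:

```r
slope_lowest        <- 0.87    # Model 2 time coefficient: yearly change for the lowest quartile, % per year
interaction_highest <- -1.10   # highest-quartile x time interaction (difference in yearly change)
slope_highest <- slope_lowest + interaction_highest
slope_highest                  # -0.23% per year, the absolute decline reported for the highest quartile
```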

Table 2. Results of Multilevel Models for Patient Satisfaction With Physician Communication Scores

Variable (coefficient; P value) | Model 1 | Model 2 | Model 3
Time (in years) | 0.33; <0.001 | 0.87; <0.001 | 0.89; <0.001
Satisfaction quartiles at baseline
  Highest quartile | | 12.1; <0.001 | 10.4; <0.001
  2nd quartile | | 7.9; <0.001 | 7.1; <0.001
  3rd quartile | | 4.5; <0.001 | 4.1; <0.001
  Lowest quartile (REF) | | REF | REF
Interaction with time
  Highest quartile | | -1.10; <0.001 | -0.94; <0.001
  2nd quartile | | -0.73; <0.001 | -0.71; <0.001
  3rd quartile | | -0.48; <0.001 | -0.47; <0.001
Survey response rate (%) | | | 0.12; <0.001
Total population, in 10,000 | | | -0.002; 0.02
African American (%) | | | 0.004; 0.13
HSA median income in $10,000 | | | 0.02; 0.58
Ownership
  Government (REF) | | | REF
  Nonprofit | | | 0.01; 0.88
  For profit | | | 0.21; 0.11
Percentage with insurance in HSA | | | 0.007; 0.27
Acute care beds in HSA (per 1,000) | | | 0.60; <0.001
Physicians in HSA (per 100,000) | | | 0.003; 0.007
Teaching hospital | | | -0.34; 0.001
Percentage in poverty in HSA | | | 0.01; 0.27

NOTE: Model 1 = time as the only predictor, with hospital and state as random effects. Model 2 = time and baseline satisfaction quartile as predictors, with hospital and state as random effects. Model 3 = time, baseline satisfaction quartile, HSA population, African American percentage in the HSA, survey response rate, HSA median income, hospital ownership, percentage with insurance in the HSA, acute care hospital beds in the HSA, teaching hospital status, and percentage of people living in poverty within the HSA, with hospital and state as random effects. Interaction terms represent the difference in yearly change relative to the lowest (reference) quartile. Because there were far fewer distinct satisfaction score values than hospitals, and hospitals were not evenly distributed across those values, the number of hospitals in each quartile is not exactly one‐fourth. Abbreviations: HSA, hospital service area.
Figure 1
Trend in patient satisfaction with physicians during the observation period by quartile membership at baseline. The y‐axis represents the percentage of survey participants who responded that physicians “always” communicated well at a particular hospital. The x‐axis represents the years for which survey data were reported. Hospitals were divided into quartiles based on baseline satisfaction scores.

After adjusting for hospital characteristics and population characteristics of the HSA, the 2007 satisfaction quartiles remained significantly associated with subsequent change in satisfaction scores during the 7‐year observation period (Table 2). In addition, survey response rate, the number of physicians, and the number of acute care hospital beds within the HSA were positively associated with patient satisfaction, whereas a larger HSA population and being a teaching hospital were negatively associated with patient satisfaction. Using 2008 satisfaction scores as baseline, the results did not change, except that the number of physicians in the HSA and teaching hospital status were no longer associated with satisfaction scores with physicians.

DISCUSSION

Using hierarchical modelling, we have shown that national patient satisfaction scores with physicians have consistently improved since 2007, the first year for which satisfaction scores were reported. We further show that the improvement in satisfaction scores has not been consistent across all hospitals. The largest increase in satisfaction scores occurred in hospitals that were in the lowest quartile of satisfaction scores in 2007. In contrast, satisfaction scores decreased in hospitals that were in the uppermost quartile. The gap between the lowest and uppermost quartiles in 2007 was so large that, despite the difference in the direction of change, hospitals in the uppermost quartile still had higher satisfaction scores in 2013 than hospitals in the lowest quartile.

Consistent with our findings for patient satisfaction, other studies have found that public reporting is associated with improvement in healthcare quality measures across nursing homes, physician groups, and hospitals.[12, 13, 14] However, it is unclear how public reporting changes patient satisfaction. The main purpose of publicly reporting healthcare quality measures, such as patient satisfaction with the care received, is to generate value by increasing transparency and accountability, thereby improving the quality of healthcare delivery. Healthcare consumers may also use the reported measures to choose providers that deliver high‐quality healthcare. Contrary to expectations, however, there is very little evidence that consumers choose healthcare facilities based on public reporting, and other mechanisms likely explain the observed association.[15, 16]

Physicians have historically been slow to adopt strategies to improve patient satisfaction, often citing suboptimal data and a lack of evidence for data‐driven strategies.[17, 18] Hospitals and healthcare organizations have deployed a broad range of strategies to engage physicians. These include emphasizing the relationship of patient satisfaction with patient compliance, complaints, and malpractice lawsuits; appealing to physicians' sense of competitiveness by publishing individual provider satisfaction scores; educating physicians about HCAHPS and providing them with regularly updated data; and developing specific techniques for improving the patient‐physician interaction.[19, 20, 21, 22, 23, 24] Administrators may also enhance physician engagement by improving physician satisfaction, decreasing physician turnover, supporting the development of physicians in administrative leadership roles, and improving financial transparency.[25] Thus, involvement of hospital leadership has been instrumental in encouraging physicians to focus on quality measures, including patient satisfaction. Some evidence suggests that public reporting exerts a strong influence on hospital leaders regarding resource allocation, local planning, and improvement efforts.[26, 27, 28]

Perhaps the most intriguing finding in our study is that hospitals in the uppermost quartile of satisfaction scores in 2007 had a statistically significant, steady decline in scores during the following period, whereas hospitals in the lowest quartile had a steady increase. A possible explanation is that high‐performing hospitals become complacent and do not invest in the effort‐intensive resources required to maintain and improve performance in the physician‐related patient satisfaction domain. These resources may be diverted to competing needs, including improvement efforts for the large number of other publicly reported healthcare quality measures. Thus, an unintended consequence of quality improvement may be that gains in 1 domain come at the expense of quality of care in another.[29, 30, 31] On the other hand, hospitals in the lower quartiles may see a larger improvement in their scores for the same degree of investment than hospitals in the higher quartiles. It is also possible that hospitals, particularly those in the lowest quartile, develop their own benchmarks and expend effort in line with their perceived need for improvement and their strategic and marketing goals.

Our study has significant implications for the healthcare system, clinical practice, and future research. Whereas public reporting of quality measures is associated with an overall improvement in the reported measure, hospitals with high scores may move resources away from that metric or become complacent. Health policy makers need to design policies that encourage all hospitals and providers to perform better or to continue performing well. We further show that differences between hospitals and between local healthcare markets are the largest sources of variation in patient satisfaction with physician communication, and an adjustment of reported scores for these factors may be needed. Although local healthcare market factors may not be modifiable, an exchange of knowledge between hospitals with low and high patient satisfaction scores may improve overall satisfaction scores. Similarly, hospitals that are successful in increasing patient satisfaction scores should identify and share useful interventions.

The main strength of our study is that we used data on patient satisfaction with physician communication that were reported annually by most hospitals within the United States. These longitudinal data allowed us to examine not only the effect of public reporting on patient satisfaction with physician communication but also its trend over time. Because we had 7 years of data, we were able to rule out regression to the mean, in which an extreme result on a first measurement tends to be followed by a second measurement closer to the average. Further, we adjusted satisfaction scores for hospital and local healthcare market characteristics, allowing us to compare satisfaction scores across hospitals. However, because the units of observation were hospitals and not patients, we could not examine the effect of patient characteristics on satisfaction scores. In addition, HCAHPS surveys have low response rates and may be subject to response and selection bias. Furthermore, we were unable to examine the strategies implemented by hospitals to improve satisfaction scores or the effect of such strategies. Data on hospital strategies to increase satisfaction scores are not available for most hospitals and could not be included in the study.

In summary, we found that public reporting was followed by an improvement in patient satisfaction scores with physician communication between 2007 and 2013. The rate of improvement was significantly greater in hospitals with satisfaction scores in the lowest quartile, whereas hospitals in the highest quartile had a small but statistically significant decline in patient satisfaction scores.

References
  1. Centers for Medicare & Medicaid Services. Medicare program; hospital outpatient prospective payment system and CY 2007 payment rates; CY 2007 update to the ambulatory surgical center covered procedures list; Medicare administrative contractors; and reporting hospital quality data for FY 2008 inpatient prospective payment system annual payment update program--HCAHPS survey, SCIP, and mortality. Final rule with comment period and final rule. Fed Regist. 2006;71(226):67959–68401.
  2. Giordano LA, Elliott MN, Goldstein E, Lehrman WG, Spencer PA. Development, implementation, and public reporting of the HCAHPS survey. Med Care Res Rev. 2010;67(1):27–37.
  3. Siddiqui ZK, Wu AW, Kurbanova N, Qayyum R. Comparison of Hospital Consumer Assessment of Healthcare Providers and Systems patient satisfaction scores for specialty hospitals and general medical hospitals: confounding effect of survey response rate. J Hosp Med. 2014;9(9):590–593.
  4. Elliott MN, Lehrman WG, Goldstein EH, et al. Hospital survey shows improvements in patient experience. Health Aff (Millwood). 2010;29(11):2061–2067.
  5. Centers for Medicare 2010:496829.
  6. Gascon‐Barre M, Demers C, Mirshahi A, Neron S, Zalzal S, Nanci A. The normal liver harbors the vitamin D nuclear receptor in nonparenchymal and biliary epithelial cells. Hepatology. 2003;37(5):1034–1042.
  7. Singer JD, Willett JB. Applied Longitudinal Data Analysis: Modeling Change and Event Occurrence. Oxford, United Kingdom: Oxford University Press; 2003.
  8. Gelman A, Hill J. Data Analysis Using Regression and Multilevel/Hierarchical Models. Cambridge, United Kingdom: Cambridge University Press; 2007.
  9. nlme: Linear and Nonlinear Mixed Effects Models [computer program]. R package version 3.1-121; 2015.
  10. Smith MA, Wright A, Queram C, Lamb GC. Public reporting helped drive quality improvement in outpatient diabetes care among Wisconsin physician groups. Health Aff (Millwood). 2012;31(3):570–577.
  11. Wees PJ, Sanden MW, Ginneken E, Ayanian JZ, Schneider EC, Westert GP. Governing healthcare through performance measurement in Massachusetts and the Netherlands. Health Policy. 2014;116(1):18–26.
  12. Werner R, Stuart E, Polsky D. Public reporting drove quality gains at nursing homes. Health Aff (Millwood). 2010;29(9):1706–1713.
  13. Bardach NS, Hibbard JH, Dudley RA. Users of public reports of hospital quality: who, what, why, and how?: An aggregate analysis of 16 online public reporting Web sites and users' and experts' suggestions for improvement. Agency for Healthcare Research and Quality. Available at: http://archive.ahrq.gov/professionals/quality‐patient‐safety/quality‐resources/value/pubreportusers/index.html. Updated December 2011. Accessed April 2, 2015.
  14. Kaiser Family Foundation. 2008 update on consumers' views of patient safety and quality information. Available at: http://kff.org/health‐reform/poll‐finding/2008‐update‐on‐consumers‐views‐of‐patient‐2/. Published September 30, 2008. Accessed April 2, 2015.
  15. Blumenthal D, Kilo CM. A report card on continuous quality improvement. Milbank Q. 1998;76(4):625–648, 511.
  16. Shortell SM, Bennett CL, Byck GR. Assessing the impact of continuous quality improvement on clinical practice: what it will take to accelerate progress. Milbank Q. 1998;76(4):593–624, 510.
  17. Rivers PA, Glover SH. Health care competition, strategic mission, and patient satisfaction: research model and propositions. J Health Organ Manag. 2008;22(6):627–641.
  18. Kim SS, Kaplowitz S, Johnston MV. The effects of physician empathy on patient satisfaction and compliance. Eval Health Prof. 2004;27(3):237–251.
  19. Villar LM, Campo JA, Ranchal I, Lampe E, Romero‐Gomez M. Association between vitamin D and hepatitis C virus infection: a meta‐analysis. World J Gastroenterol. 2013;19(35):5917–5924.
  20. Stelfox HT, Gandhi TK, Orav EJ, Gustafson ML. The relation of patient satisfaction with complaints against physicians and malpractice lawsuits. Am J Med. 2005;118(10):1126–1133.
  21. Rodriguez HP, Rodday AM, Marshall RE, Nelson KL, Rogers WH, Safran DG. Relation of patients' experiences with individual physicians to malpractice risk. Int J Qual Health Care. 2008;20(1):5–12.
  22. Cydulka RK, Tamayo‐Sarver J, Gage A, Bagnoli D. Association of patient satisfaction with complaints and risk management among emergency physicians. J Emerg Med. 2011;41(4):405–411.
  23. Bogue RJ, Guarneri JG, Reed M, Bradley K, Hughes J. Secrets of physician satisfaction. Study identifies pressure points and reveals life practices of highly satisfied doctors. Physician Exec. 2006;32(6):30–39.
  24. Lindenauer PK, Lagu T, Ross JS, et al. Attitudes of hospital leaders toward publicly reported measures of health care quality. JAMA Intern Med. 2014;174(12):1904–1911.
  25. Totten AM, Wagner J, Tiwari A, O'Haire C, Griffin J, Walker M. Closing the quality gap: revisiting the state of the science (vol. 5: public reporting as a quality improvement strategy). Evid Rep Technol Assess (Full Rep). 2012;(208.5):1–645.
  26. Fung CH, Lim YW, Mattke S, Damberg C, Shekelle PG. Systematic review: the evidence that publishing patient care performance data improves quality of care. Ann Intern Med. 2008;148(2):111–123.
  27. Bardach NS, Cabana MD. The unintended consequences of quality improvement. Curr Opin Pediatr. 2009;21(6):777–782.
  28. Powell AA, White KM, Partin MR, et al. Unintended consequences of implementing a national performance measurement system into local practice. J Gen Intern Med. 2012;27(4):405–412.
  29. Riskin L, Campagna JA. Quality assessment by external bodies: intended and unintended impact on healthcare delivery. Curr Opin Anaesthesiol. 2009;22(2):237–241.

The Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) is the first national, standardized, publicly reported survey of patients' perception of hospital care. HCAHPS mandates a standard method of collecting and reporting perception of health care by patients to enable valid comparisons across all hospitals.[1, 2, 3] Voluntary collection of HCAHPS data for public reporting began in July 2006, mandatory collection of data for hospitals that participate in Inpatient Prospective Payment Program of Medicare began in July 2007, and public reporting of mandated HCAHPS scores began in 2008.[2]

Using data from the first 2‐year period, an earlier study had reported an increase in HCAHPS patient satisfaction scores in all domains except in the domain of satisfaction with physician communication.[4] Since then, data from additional years have become available, allowing assessment of satisfaction of hospitalized patients with physician communication over a longer period. Therefore, our objective was to examine changes in patient satisfaction with physician communication from 2007 to 2013, the last reported date, and to explore hospital and local population characteristics that may be associated with patient satisfaction.

METHODS

Publicly available data from 3 sources were used for this study. Patient satisfaction scores with physician communication and hospital characteristics were obtained from the HCAHPS data files available at the Hospital Compare database maintained by the Centers for Medicare and Medicaid Services (CMS).[5] HCAHPS files contain data for the preceding 12 months and are updated quarterly. We used files that reported data from the first to the fourth quarter of the year for 2007 to 2013. The HCAHPS survey contains 32 questions, of which 3 questions are about physician communication.[6] We used the percentage of survey participants who responded that physicians always communicated well as a measure of patient satisfaction with physician communication (the other 2 questions were not included). Hospitals that reported data on patient satisfaction during 2007 were divided into quartiles based on their satisfaction scores, and this quartile allocation was maintained during each subsequent year. Survey response rate, in percentage, was obtained from HCAHPS data files for each year. Hospital characteristics, such as ownership of the hospital, teaching hospital status, and designation of critical access hospital were obtained from the Hospital Compare website. Hospital ownership was defined as government (owned by federal, state, Veterans Affairs, or tribal authorities), for profit (owned by physicians or another proprietary), or nonprofit (owned by a nonprofit organization such as a church). A hospital was considered a teaching hospital if it obtained graduate medical education funding from CMS.

We obtained local population data from 2010 decennial census files and from the American Community Survey 5‐year data profile from 2009 to 2013; both datasets are maintained by the Unites States Census Bureau.[7] Census is mandated by Article I, Section 2 of the United States Constitution and takes place every 10 years. The American Community Survey is also a mandatory, ongoing statistical survey that samples a small percentage of the population every year giving communities the information they need to plan investments and services. We chose to use 5‐year estimates as these are more precise and are reliable in analyzing small populations. For each zip code, we extracted data on total population, percentage of African Americans in the population, median income, poverty level, and insurance status from the Census Bureau data files.

Local population characteristics at zip code level were mapped to hospitals using hospital service area (HSA) crosswalk files from the Dartmouth Atlas of Health Care.[7, 8] The Dartmouth Atlas defined 3436 HSAs by assigning zip codes to the hospital area where the greatest proportion of its Medicare residents were hospitalized. The number of acute care hospital beds and the number of physicians within the HSA were also obtained from the Dartmouth Atlas. Merging data from these 3 sources generated a dataset that contained information about patient satisfaction scores from a particular hospital, hospital characteristics, and population characteristics of the healthcare market.

Data were summarized as mean and standard deviation (SD). To model the dependence of observations from the same hospital and the correlation between hospitals within the same state due to similar regulations, and to assess the relative contribution of satisfaction scores over time within hospital, hospitals within states, and across states, 3‐level hierarchical regression models were examined.[9, 10] At the within‐hospital level, survey response rate was used as a time‐varying variable in addition to the year of observation. However, only year of observation was used to explore differences in patient satisfaction trajectories between hospitals. At the hospitals‐within‐states level, hospital characteristics and local population characteristics within the HSA were included. At the states level, only random effects were obtained, and no additional variables were included in the models.

Four models were built to assess the relationship between satisfaction scores and predictors. The basic model used only random effects without any predictors to determine the relative contribution of each level (within hospitals, hospitals within states, and across states) to variation in patient satisfaction scores and thus was consistent with the variance component analysis. The first model included the year of observation as a predictor at the within‐hospital level to examine trends in patient satisfaction scores during the observation period. For the second model, we added baseline satisfaction quartiles to the second model, whereas remaining predictors (HSA population, African American percentage in HSA, survey response rate, HSA median income, ownership of hospital, percentage with private any insurance in HSA, acute care hospital beds in HSA, teaching hospital status, and percentage of people living in poverty within HSA) were added in the third model. Quartiles for baseline satisfaction were generated using satisfaction scores from 2007. As a larger number of hospitals reported results for 2008 than for 2007 (2273 vs 3746), we conducted a sensitivity analysis using satisfaction quartiles in 2008 as baseline and examined subsequent trends over time for the 4 models noted above. All multilevel models were specified using the nlme package in R to account for clustering of observations within hospitals and hospitals within states, using hospital and state level random effects.[11]

RESULTS

Of the 4353 hospitals with data for the 7‐year period, the majority were in the Southern region (South = 1669, Midwest = 1239, Northeast = 607, West = 838). Texas had the largest number of hospital (N = 358) followed by California (N = 340). The largest number of hospitals were nonprofit (N = 2637, 60.6%). Mean (SD) patient satisfaction with physician communication was 78.9% (5.7%) in 2007 that increased to 81.7% (5.4%) in 2013. Throughout the observation period, the highest patient satisfaction was in the South (80.6% [6.6%] in 2007 and 83.2% [5.4%] in 2013). Of the 2273 hospitals that reported data in 2007, the mean satisfaction score of the lowest quartile was 72% (3.2%), and the highest quartile was 86.9% (3.2%) (Table 1). As a group, hospitals in the highest quartile in 2007 still had higher satisfaction scores in 2013 than the hospitals in the lowest quartile (85% [4.2%] vs 77% [3.6%], respectively). Only 4 of the 584 hospitals in the lowest quartile in 2007 climbed up to the highest quartile in 2013, whereas 22 hospitals that were in the upper quartile in 2007 dropped to the lowest quartile in 2013.

Characteristics of Hospital by Quartiles of Satisfaction Scores in 2007
CharacteristicQuartiles Based on 2007 Satisfaction Scores
Highest Quartile2nd Quartile3rd QuartileLowest Quartile
  • NOTE: Abbreviations: HSA, hospital service area; IQR, interquartile range; SD, standard deviation.

Total no. of hospitals, N (%)461 (20.3)545 (24.0)683 (30.0)584 (25.7)
Hospital ownership, N (%)    
For profit50 (14.4)60 (17.3)96 (27.7)140 (40.5)
Nonprofit269 (17.4)380 (24.6)515 (33.4)378 (24.5)
Government142 (36.9)105 (27.3)72 (18.7)66 (17.1)
HSA population, in 1,000, median (IQR)33.2 (70.5)88.5 (186)161.8 (374)222.2 (534)
Racial distribution of HSA population, median (IQR)    
White, %82.6 (26.2)82.5 (28.5)74.2 (32.9)66.8 (35.3)
Black, %4.3 (21.7)3.7 (16.3)5.9 (14.8)7.4 (12.1)
Other, %6.4 (7.1)8.8 (10.8)12.9 (19.8)20.0 (33.1)
HSA mean median income in $1,000, mean (SD)44.6 (11.7)52.4 (17.8)58.4 (17.1)57.5 (15.7)
Satisfaction scores (at baseline), mean (SD)86.9 (3.1)81.4 (1.1)77.5 (1.1)72.0 (3.2)
Satisfaction scores (in 2013), mean (SD)85.0 (4.3)82.0 (3.4)79.7 (3.0)77.0 (3.5)
Survey response rate (at baseline), mean (SD)43.2 (19.8)34.5 (9.4)32.6 (8.0)30.3 (7.8)
Survey response rate (20072013), mean (SD)32.8 (7.8)32.6 (7.5)30.8 (6.5)29.3 (6.5)
Percentage with any insurance in HSA, mean (SD)84.0 (5.4)84.8 (6.6)85.5 (6.3)83.9 (6.6)
Teaching hospital, N (%)42 (9.1)155 (28.4)277 (40.5)274 (46.9%)
Acute care hospital beds in HSA (per 1,000), mean (SD)3.2 (1.2)2.6 (0.8)2.5 (0.8)2.4 (0.7)
Number of physicians in HSA (per 100,000), mean (SD)190 (36)197 (43)204 (47)199 (45)
Percentage with poverty in HSA, mean (SD)[7]16.9 (6.6)15.5 (6.5)14.4 (5.7)15.5 (6.0)

Using variance component analysis, we found that 23% of the variation in patient satisfaction scores with physician communication was due to differences between states, 52% was due to differences between hospitals within states, and 24% was due to changes over time within a hospital. When examining time trends of satisfaction during the 7‐year period without adjusting for other predictors, we found a statistically significant increasing trend in patient satisfaction with physician communication (0.33% per year; P < 0.001). We also found a significant negative correlation (0.62, P < 0.001) between the random effects for baseline satisfaction (intercept) and change over time (slope), suggesting that initial patient satisfaction with physicians at a hospital was negatively correlated with subsequent change in satisfaction scores during the observation period.

When examining the effect of satisfaction ranking in 2007, hospitals within the lowest quartile of patient satisfaction in 2007 had significantly larger increase in satisfaction scores during the subsequent period as compared to the hospitals in each of the other 3 quartiles (all P < 0.001, Table 2). The difference in the magnitude of the rate of increase in satisfaction scores was greatest between the lowest quartile and the highest quartile (1.10% per year; P < 0.001). In fact, the highest quartile had a statistically significant absolute decrease in patient satisfaction during the observation period (0.23% per year; P < 0.001, Figure 1).

Results of Multilevel Models for Patient Satisfaction With Physician Scores
VariableModel 1: ; P ValueModel 2: ; P ValueModel 3: ; P Value
  • NOTE: Model 1 = Time only predictor with hospital and state as random effects. Model 2 = Time and baseline satisfaction as predictors with hospital and state as random effects. Model 3 = Time, baseline satisfaction, HSA population, African American percentage in HSA, survey response rate, HSA median income, ownership of hospital, percentage with private insurance in HSA, acute care hospital beds in HSA, teaching hospital status, and percentage of people living in poverty within HSA; hospital and state were included as random effects. As there were far fewer values of satisfaction scores than the number of hospitals, and the number of hospitals were not evenly distributed across all satisfaction score values, the number of hospitals in each quartile is not exactly one‐fourth. Abbreviations: HSA, hospital service area.

Time (in years)0.33; <0.0010.87; <0.0010.89; <0.001
Satisfaction quartiles at baseline   
Highest quartile 12.1; <0.00110.4; <0.001
2nd quartile 7.9; <0.0017.1; <0.001
3rd quartile 4.5; <0.0014.1; <0.001
Lowest quartile (REF) REFREF
Interaction with time   
Highest quartile 1.10; <0.0010.94; <0.001
2nd quartile 0.73; <0.0010.71; <0.001
3rd quartile 0.48; <0.0010.47;<0.001
Survey response rate (%)  0.12; <0.001
Total population, in 10,000  0.002; 0.02
African American (%)  0.004; 0.13
HSA median Income in $10,000  0.02; 0.58
Ownership   
Government (REF)  REF
Nonprofit  0.01; 0.88
For profit  0.21; 0.11
Percentage with insurance in HSA  0.007; 0.27
Acute care beds in HSA (per 1,000)  0.60; <0.001
Physicians in HSA (per 100,000)  0.003; 0.007
Teaching hospital  0.34; 0.001
Percentage in poverty in HSA  0.01; 0.27
Figure 1
Trend in patient satisfaction with physicians during the observation period by quartile membership at baseline. The y‐axis represents the percentage of survey participants who responded that physicians “always” communicated well at a particular hospital. The x‐axis represents the years for which survey data were reported. Hospitals were divided into quartiles based on baseline satisfaction scores.

After adjusting for hospital characteristics and population characteristics of the HSA, the 2007 satisfaction quartiles remained significantly associated with subsequent change in satisfaction scores during the 7‐year observation period (Table 2). In addition, survey response rate, number of physicians, and the number of acute‐care hospital beds within the HSA were positively associated with patient satisfaction, whereas higher HSA population density and being a teaching hospital were negatively associated with patient satisfaction. Using 2008 satisfaction scores as baseline, the results did not change except that the number of physicians in the HSA and being a teaching hospital were no longer associated with satisfaction scores with physicians.

DISCUSSION

Using hierarchical modelling, we have shown that national patient satisfaction scores with physicians have consistently improved since 2007, the year when reporting of satisfaction scores began. We further show that the improvement in satisfaction scores has not been consistent through all hospitals. The largest increase in satisfaction scores was in hospitals that were in the lowest quartile of satisfaction scores in 2007. In contrast, satisfaction scores decreased in hospitals that were in the uppermost quartile of satisfaction scores. The difference between the lowest and uppermost quartile was so large in 2007 that despite the difference in the direction of change in satisfaction scores, hospitals in the uppermost quartile continued to have higher satisfaction scores in 2013 than hospitals in the lowest quartile.

Consistent with our findings for patient satisfaction, other studies have found that public reporting is associated with improvement in healthcare quality measures across nursing homes, physician groups, and hospitals.[12, 13, 14] However, it is unclear how public reporting can change patient satisfaction. The main purpose of public reporting of quality of healthcare measures, such as patient satisfaction with the healthcare they receive, is to generate value by increasing transparency and accountability, thereby increasing the quality of healthcare delivery. Healthcare consumers may also utilize the reported measures to choose providers that deliver high‐quality healthcare. Contrary to expectations, there is very little evidence that consumers choose healthcare facilities based on public reporting, and it is likely that other mechanisms may explain the observed association.[15, 16]

Physicians have historically had low adoption of strategies to improve patient satisfaction and often cite suboptimal data and lack of evidence for data‐driven strategies.[17, 18] Hospitals and healthcare organizations have deployed a broad range of strategies to engage physicians. These include emphasizing relationship between patient satisfaction and patient compliance, complaints and malpractice lawsuits, appealing to physicians' sense of competitiveness by publishing individual provider satisfaction scores, educating physicians on HCAHPS and providing them with regularly updated data, and development of specific techniques for improving patient‐physician interaction.[19, 20, 21, 22, 23, 24] Administrators may also enhance physician engagement by improving physician satisfaction, decreasing their turnover, support development of physicians in administrative leadership roles, and improving financial transparency.[25] Thus, involvement of hospital leadership has been instrumental in encouraging physicians to focus on quality measures including patient satisfaction. Some evidence suggests that public reporting exerts strong influence on hospital leaders for adequate resource allocation, local planning, and improvement efforts.[26, 27, 28]

Perhaps the most intriguing finding in our study is that hospitals in the uppermost quartile of satisfaction scores in 2007 had a statistically significant steady decline in scores during the following period as compared to hospitals in the lowest quartile that had a steady increase. A possible explanation for this finding can be that high‐performing hospitals become complacent and do not invest in developing the effort‐intensive resources required to maintain and improve performance in the physician‐related patient satisfaction domain. These resources may be diverted to competing needs that include addressing improvement efforts for a large number of other publicly reported healthcare quality measures. Thus, an unintended consequence of quality improvement may be that improvement in 1 domain may be at the expense of quality of care in another domain.[29, 30, 31] On the other hand, it is likely that hospitals in the lower quartile see a larger improvement in their scores for the same degree of investment as hospitals in the higher quartiles. It is also likely that hospitals, particularly those in the lowest quartile, develop their individual benchmarks and expend effort that is in line with their perceived need for improvement to achieve their strategic and marketing goals.

Our study has significant implications for the healthcare system, clinical practice, and future research. Whereas public reporting of quality measures is associated with an overall improvement in the reported quality measure, hospitals with high scores may move resources away from that metric or become complacent. Health policy makers need to design policies that encourage all hospitals and providers to perform better or continue to perform well. We further show that differences between hospitals and between local healthcare markets are the biggest factor determining the variation in patient satisfaction with physician communication, and an adjustment in reported score for these factors may be needed. Although local healthcare market factors may not be modifiable, an exchange of knowledge between hospitals with low and high patient satisfaction scores may improve overall satisfaction scores. Similarly, hospitals that are successful in increasing patient satisfaction scores should identify and share useful interventions.

The main strength of our study is that we used data on patient satisfaction with physician communication that were reported annually by most hospitals within the United States. These longitudinal data allowed us to examine not only the effect of public reporting on patient satisfaction with physician communication but also its trend over time. As we had 7 years of data, we were able to eliminate the possibility of regression to mean; an extreme result on first measurement is followed by a second measurement that tends to be closer to the average. Further, we adjusted satisfaction scores based on hospital and local healthcare market characteristics allowing us to compare satisfaction scores across hospitals. However, because units of observation were hospitals and not patients, we could not examine the effect of patient characteristics on satisfaction scores. In addition, HCAHPS surveys have low response rates and may have response and selection bias. Furthermore, we were unable to examine the strategies implemented by hospitals to improve satisfaction scores or the effect of such strategies on satisfaction scores. Data on hospital strategies to increase satisfaction scores are not available for most hospitals and could not have been included in the study.

In summary, we have found that public reporting was followed by an improvement in patient satisfaction scores with physician communication between 2007 and 2013. The rate of improvement was significantly greater in hospitals that had satisfaction scores in the lowest quartiles, whereas hospitals in the highest quartile had a small but statistically significant decline in patient satisfaction scores.

The Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) is the first national, standardized, publicly reported survey of patients' perception of hospital care. HCAHPS mandates a standard method of collecting and reporting perception of health care by patients to enable valid comparisons across all hospitals.[1, 2, 3] Voluntary collection of HCAHPS data for public reporting began in July 2006, mandatory collection of data for hospitals that participate in Inpatient Prospective Payment Program of Medicare began in July 2007, and public reporting of mandated HCAHPS scores began in 2008.[2]

Using data from the first 2‐year period, an earlier study had reported an increase in HCAHPS patient satisfaction scores in all domains except in the domain of satisfaction with physician communication.[4] Since then, data from additional years have become available, allowing assessment of satisfaction of hospitalized patients with physician communication over a longer period. Therefore, our objective was to examine changes in patient satisfaction with physician communication from 2007 to 2013, the last reported date, and to explore hospital and local population characteristics that may be associated with patient satisfaction.

METHODS

Publicly available data from 3 sources were used for this study. Patient satisfaction scores with physician communication and hospital characteristics were obtained from the HCAHPS data files available at the Hospital Compare database maintained by the Centers for Medicare and Medicaid Services (CMS).[5] HCAHPS files contain data for the preceding 12 months and are updated quarterly. We used files that reported data from the first to the fourth quarter of the year for 2007 to 2013. The HCAHPS survey contains 32 questions, of which 3 questions are about physician communication.[6] We used the percentage of survey participants who responded that physicians always communicated well as a measure of patient satisfaction with physician communication (the other 2 questions were not included). Hospitals that reported data on patient satisfaction during 2007 were divided into quartiles based on their satisfaction scores, and this quartile allocation was maintained during each subsequent year. Survey response rate, in percentage, was obtained from HCAHPS data files for each year. Hospital characteristics, such as ownership of the hospital, teaching hospital status, and designation of critical access hospital were obtained from the Hospital Compare website. Hospital ownership was defined as government (owned by federal, state, Veterans Affairs, or tribal authorities), for profit (owned by physicians or another proprietary), or nonprofit (owned by a nonprofit organization such as a church). A hospital was considered a teaching hospital if it obtained graduate medical education funding from CMS.

We obtained local population data from 2010 decennial census files and from the American Community Survey 5‐year data profile from 2009 to 2013; both datasets are maintained by the Unites States Census Bureau.[7] Census is mandated by Article I, Section 2 of the United States Constitution and takes place every 10 years. The American Community Survey is also a mandatory, ongoing statistical survey that samples a small percentage of the population every year giving communities the information they need to plan investments and services. We chose to use 5‐year estimates as these are more precise and are reliable in analyzing small populations. For each zip code, we extracted data on total population, percentage of African Americans in the population, median income, poverty level, and insurance status from the Census Bureau data files.

Local population characteristics at zip code level were mapped to hospitals using hospital service area (HSA) crosswalk files from the Dartmouth Atlas of Health Care.[7, 8] The Dartmouth Atlas defined 3436 HSAs by assigning zip codes to the hospital area where the greatest proportion of its Medicare residents were hospitalized. The number of acute care hospital beds and the number of physicians within the HSA were also obtained from the Dartmouth Atlas. Merging data from these 3 sources generated a dataset that contained information about patient satisfaction scores from a particular hospital, hospital characteristics, and population characteristics of the healthcare market.

Data were summarized as mean and standard deviation (SD). To model the dependence of observations from the same hospital and the correlation between hospitals within the same state due to similar regulations, and to assess the relative contribution of satisfaction scores over time within hospital, hospitals within states, and across states, 3‐level hierarchical regression models were examined.[9, 10] At the within‐hospital level, survey response rate was used as a time‐varying variable in addition to the year of observation. However, only year of observation was used to explore differences in patient satisfaction trajectories between hospitals. At the hospitals‐within‐states level, hospital characteristics and local population characteristics within the HSA were included. At the states level, only random effects were obtained, and no additional variables were included in the models.

Four models were built to assess the relationship between satisfaction scores and predictors. The basic model used only random effects without any predictors to determine the relative contribution of each level (within hospitals, hospitals within states, and across states) to variation in patient satisfaction scores and thus was consistent with the variance component analysis. The first model included the year of observation as a predictor at the within‐hospital level to examine trends in patient satisfaction scores during the observation period. For the second model, we added baseline satisfaction quartiles to the second model, whereas remaining predictors (HSA population, African American percentage in HSA, survey response rate, HSA median income, ownership of hospital, percentage with private any insurance in HSA, acute care hospital beds in HSA, teaching hospital status, and percentage of people living in poverty within HSA) were added in the third model. Quartiles for baseline satisfaction were generated using satisfaction scores from 2007. As a larger number of hospitals reported results for 2008 than for 2007 (2273 vs 3746), we conducted a sensitivity analysis using satisfaction quartiles in 2008 as baseline and examined subsequent trends over time for the 4 models noted above. All multilevel models were specified using the nlme package in R to account for clustering of observations within hospitals and hospitals within states, using hospital and state level random effects.[11]

RESULTS

Of the 4353 hospitals with data for the 7‐year period, the majority were in the Southern region (South = 1669, Midwest = 1239, Northeast = 607, West = 838). Texas had the largest number of hospital (N = 358) followed by California (N = 340). The largest number of hospitals were nonprofit (N = 2637, 60.6%). Mean (SD) patient satisfaction with physician communication was 78.9% (5.7%) in 2007 that increased to 81.7% (5.4%) in 2013. Throughout the observation period, the highest patient satisfaction was in the South (80.6% [6.6%] in 2007 and 83.2% [5.4%] in 2013). Of the 2273 hospitals that reported data in 2007, the mean satisfaction score of the lowest quartile was 72% (3.2%), and the highest quartile was 86.9% (3.2%) (Table 1). As a group, hospitals in the highest quartile in 2007 still had higher satisfaction scores in 2013 than the hospitals in the lowest quartile (85% [4.2%] vs 77% [3.6%], respectively). Only 4 of the 584 hospitals in the lowest quartile in 2007 climbed up to the highest quartile in 2013, whereas 22 hospitals that were in the upper quartile in 2007 dropped to the lowest quartile in 2013.

Characteristics of Hospital by Quartiles of Satisfaction Scores in 2007
CharacteristicQuartiles Based on 2007 Satisfaction Scores
Highest Quartile2nd Quartile3rd QuartileLowest Quartile
  • NOTE: Abbreviations: HSA, hospital service area; IQR, interquartile range; SD, standard deviation.

Total no. of hospitals, N (%)461 (20.3)545 (24.0)683 (30.0)584 (25.7)
Hospital ownership, N (%)    
For profit50 (14.4)60 (17.3)96 (27.7)140 (40.5)
Nonprofit269 (17.4)380 (24.6)515 (33.4)378 (24.5)
Government142 (36.9)105 (27.3)72 (18.7)66 (17.1)
HSA population, in 1,000, median (IQR)33.2 (70.5)88.5 (186)161.8 (374)222.2 (534)
Racial distribution of HSA population, median (IQR)    
White, %82.6 (26.2)82.5 (28.5)74.2 (32.9)66.8 (35.3)
Black, %4.3 (21.7)3.7 (16.3)5.9 (14.8)7.4 (12.1)
Other, %6.4 (7.1)8.8 (10.8)12.9 (19.8)20.0 (33.1)
HSA mean median income in $1,000, mean (SD)44.6 (11.7)52.4 (17.8)58.4 (17.1)57.5 (15.7)
Satisfaction scores (at baseline), mean (SD)86.9 (3.1)81.4 (1.1)77.5 (1.1)72.0 (3.2)
Satisfaction scores (in 2013), mean (SD)85.0 (4.3)82.0 (3.4)79.7 (3.0)77.0 (3.5)
Survey response rate (at baseline), mean (SD)43.2 (19.8)34.5 (9.4)32.6 (8.0)30.3 (7.8)
Survey response rate (20072013), mean (SD)32.8 (7.8)32.6 (7.5)30.8 (6.5)29.3 (6.5)
Percentage with any insurance in HSA, mean (SD)84.0 (5.4)84.8 (6.6)85.5 (6.3)83.9 (6.6)
Teaching hospital, N (%)42 (9.1)155 (28.4)277 (40.5)274 (46.9%)
Acute care hospital beds in HSA (per 1,000), mean (SD)3.2 (1.2)2.6 (0.8)2.5 (0.8)2.4 (0.7)
Number of physicians in HSA (per 100,000), mean (SD)190 (36)197 (43)204 (47)199 (45)
Percentage with poverty in HSA, mean (SD)[7]16.9 (6.6)15.5 (6.5)14.4 (5.7)15.5 (6.0)

Using variance component analysis, we found that 23% of the variation in patient satisfaction scores with physician communication was due to differences between states, 52% was due to differences between hospitals within states, and 24% was due to changes over time within a hospital. When examining time trends of satisfaction during the 7‐year period without adjusting for other predictors, we found a statistically significant increasing trend in patient satisfaction with physician communication (0.33% per year; P < 0.001). We also found a significant negative correlation (0.62, P < 0.001) between the random effects for baseline satisfaction (intercept) and change over time (slope), suggesting that initial patient satisfaction with physicians at a hospital was negatively correlated with subsequent change in satisfaction scores during the observation period.

When examining the effect of satisfaction ranking in 2007, hospitals within the lowest quartile of patient satisfaction in 2007 had significantly larger increase in satisfaction scores during the subsequent period as compared to the hospitals in each of the other 3 quartiles (all P < 0.001, Table 2). The difference in the magnitude of the rate of increase in satisfaction scores was greatest between the lowest quartile and the highest quartile (1.10% per year; P < 0.001). In fact, the highest quartile had a statistically significant absolute decrease in patient satisfaction during the observation period (0.23% per year; P < 0.001, Figure 1).

Results of Multilevel Models for Patient Satisfaction With Physician Scores
VariableModel 1: ; P ValueModel 2: ; P ValueModel 3: ; P Value
  • NOTE: Model 1 = Time only predictor with hospital and state as random effects. Model 2 = Time and baseline satisfaction as predictors with hospital and state as random effects. Model 3 = Time, baseline satisfaction, HSA population, African American percentage in HSA, survey response rate, HSA median income, ownership of hospital, percentage with private insurance in HSA, acute care hospital beds in HSA, teaching hospital status, and percentage of people living in poverty within HSA; hospital and state were included as random effects. As there were far fewer values of satisfaction scores than the number of hospitals, and the number of hospitals were not evenly distributed across all satisfaction score values, the number of hospitals in each quartile is not exactly one‐fourth. Abbreviations: HSA, hospital service area.

Time (in years)0.33; <0.0010.87; <0.0010.89; <0.001
Satisfaction quartiles at baseline   
Highest quartile 12.1; <0.00110.4; <0.001
2nd quartile 7.9; <0.0017.1; <0.001
3rd quartile 4.5; <0.0014.1; <0.001
Lowest quartile (REF) REFREF
Interaction with time   
Highest quartile 1.10; <0.0010.94; <0.001
2nd quartile 0.73; <0.0010.71; <0.001
3rd quartile 0.48; <0.0010.47;<0.001
Survey response rate (%)  0.12; <0.001
Total population, in 10,000  0.002; 0.02
African American (%)  0.004; 0.13
HSA median Income in $10,000  0.02; 0.58
Ownership   
Government (REF)  REF
Nonprofit  0.01; 0.88
For profit  0.21; 0.11
Percentage with insurance in HSA  0.007; 0.27
Acute care beds in HSA (per 1,000)  0.60; <0.001
Physicians in HSA (per 100,000)  0.003; 0.007
Teaching hospital  0.34; 0.001
Percentage in poverty in HSA  0.01; 0.27
Figure 1
Trend in patient satisfaction with physicians during the observation period by quartile membership at baseline. The y‐axis represents the percentage of survey participants who responded that physicians “always” communicated well at a particular hospital. The x‐axis represents the years for which survey data were reported. Hospitals were divided into quartiles based on baseline satisfaction scores.

After adjusting for hospital characteristics and population characteristics of the HSA, the 2007 satisfaction quartiles remained significantly associated with subsequent change in satisfaction scores during the 7‐year observation period (Table 2). In addition, survey response rate, number of physicians, and the number of acute‐care hospital beds within the HSA were positively associated with patient satisfaction, whereas higher HSA population density and being a teaching hospital were negatively associated with patient satisfaction. Using 2008 satisfaction scores as baseline, the results did not change except that the number of physicians in the HSA and being a teaching hospital were no longer associated with satisfaction scores with physicians.

DISCUSSION

Using hierarchical modelling, we have shown that national patient satisfaction scores with physicians have consistently improved since 2007, the year when reporting of satisfaction scores began. We further show that the improvement in satisfaction scores has not been consistent through all hospitals. The largest increase in satisfaction scores was in hospitals that were in the lowest quartile of satisfaction scores in 2007. In contrast, satisfaction scores decreased in hospitals that were in the uppermost quartile of satisfaction scores. The difference between the lowest and uppermost quartile was so large in 2007 that despite the difference in the direction of change in satisfaction scores, hospitals in the uppermost quartile continued to have higher satisfaction scores in 2013 than hospitals in the lowest quartile.

Consistent with our findings for patient satisfaction, other studies have found that public reporting is associated with improvement in healthcare quality measures across nursing homes, physician groups, and hospitals.[12, 13, 14] However, it is unclear how public reporting changes patient satisfaction. The main purpose of publicly reporting healthcare quality measures, such as patient satisfaction with care, is to generate value by increasing transparency and accountability, thereby improving the quality of healthcare delivery. Healthcare consumers may also use the reported measures to choose providers that deliver high-quality care. Contrary to expectations, however, there is very little evidence that consumers choose healthcare facilities based on public reporting, and other mechanisms likely explain the observed association.[15, 16]

Physicians have historically been slow to adopt strategies to improve patient satisfaction, often citing suboptimal data and a lack of evidence for data-driven strategies.[17, 18] Hospitals and healthcare organizations have deployed a broad range of strategies to engage physicians. These include emphasizing the relationship between patient satisfaction and patient compliance, complaints, and malpractice lawsuits; appealing to physicians' sense of competitiveness by publishing individual provider satisfaction scores; educating physicians about HCAHPS and providing them with regularly updated data; and developing specific techniques for improving the patient-physician interaction.[19, 20, 21, 22, 23, 24] Administrators may also enhance physician engagement by improving physician satisfaction, decreasing turnover, supporting the development of physicians in administrative leadership roles, and improving financial transparency.[25] Thus, involvement of hospital leadership has been instrumental in encouraging physicians to focus on quality measures, including patient satisfaction. Some evidence suggests that public reporting exerts a strong influence on hospital leaders with respect to resource allocation, local planning, and improvement efforts.[26, 27, 28]

Perhaps the most intriguing finding in our study is that hospitals in the uppermost quartile of satisfaction scores in 2007 had a statistically significant, steady decline in scores over the following years, whereas hospitals in the lowest quartile had a steady increase. A possible explanation is that high-performing hospitals become complacent and do not invest in the effort-intensive resources required to maintain and improve performance in the physician-related patient satisfaction domain. These resources may be diverted to competing needs, including improvement efforts for the large number of other publicly reported healthcare quality measures. Thus, an unintended consequence of quality improvement may be that gains in 1 domain come at the expense of quality of care in another.[29, 30, 31] On the other hand, hospitals in the lower quartiles likely see a larger improvement in their scores for the same degree of investment than hospitals in the higher quartiles. It is also likely that hospitals, particularly those in the lowest quartile, set their own benchmarks and expend effort in line with their perceived need for improvement and their strategic and marketing goals.

Our study has significant implications for the healthcare system, clinical practice, and future research. Although public reporting of quality measures is associated with an overall improvement in the reported measure, hospitals with high scores may shift resources away from that metric or become complacent. Health policy makers need to design policies that encourage all hospitals and providers to improve or to continue performing well. We further show that differences between hospitals and between local healthcare markets are the largest determinants of variation in patient satisfaction with physician communication, and reported scores may need to be adjusted for these factors. Although local healthcare market factors may not be modifiable, an exchange of knowledge between hospitals with low and high patient satisfaction scores may improve overall satisfaction scores. Similarly, hospitals that succeed in increasing patient satisfaction scores should identify and share useful interventions.

The main strength of our study is that we used data on patient satisfaction with physician communication that were reported annually by most hospitals within the United States. These longitudinal data allowed us to examine not only the effect of public reporting on patient satisfaction with physician communication but also its trend over time. Because we had 7 years of data, we were able to rule out regression to the mean, whereby an extreme result on a first measurement tends to be followed by a second measurement closer to the average. Further, we adjusted satisfaction scores for hospital and local healthcare market characteristics, allowing us to compare satisfaction scores across hospitals. However, because the units of observation were hospitals rather than patients, we could not examine the effect of patient characteristics on satisfaction scores. In addition, HCAHPS surveys have low response rates and may be subject to response and selection bias. Furthermore, we were unable to examine the strategies hospitals implemented to improve satisfaction scores, or the effect of such strategies, because data on these strategies are not available for most hospitals.
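
The regression-to-the-mean issue mentioned above can be illustrated with a small, purely hypothetical simulation (the numbers bear no relation to the HCAHPS data): hospitals whose first measurement is extreme tend, by chance alone, to move toward the average on the next measurement, which is why several years of consistent trends matter.

  # Toy illustration of regression to the mean: each simulated "hospital" has a stable
  # true score plus independent measurement noise in each of two years.
  set.seed(1)
  n     <- 2000
  true  <- rnorm(n, mean = 75, sd = 3)    # stable underlying quality
  year1 <- true + rnorm(n, sd = 4)        # observed score, year 1
  year2 <- true + rnorm(n, sd = 4)        # observed score, year 2
  q     <- cut(year1, quantile(year1), include.lowest = TRUE,
               labels = c("Lowest", "2nd", "3rd", "Highest"))
  # Average change from year 1 to year 2 within each year-1 quartile: the lowest quartile
  # rises and the highest falls even though nothing about the hospitals has changed.
  tapply(year2 - year1, q, mean)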

In summary, we found that public reporting was followed by an improvement in patient satisfaction scores with physician communication between 2007 and 2013. The rate of improvement was significantly greater in hospitals whose satisfaction scores were in the lowest quartile, whereas hospitals in the highest quartile had a small but statistically significant decline in patient satisfaction scores.

References
  1. Centers for Medicare Medicaid Services. Medicare program; hospital outpatient prospective payment system and CY 2007 payment rates; CY 2007 update to the ambulatory surgical center covered procedures list; Medicare administrative contractors; and reporting hospital quality data for FY 2008 inpatient prospective payment system annual payment update program‐‐HCAHPS survey, SCIP, and mortality. Final rule with comment period and final rule. Fed Regist. 2006;71(226):6795968401.
  2. Giordano LA, Elliott MN, Goldstein E, Lehrman WG, Spencer PA. Development, implementation, and public reporting of the HCAHPS survey. Med Care Res Rev. 2010;67(1):2737.
  3. Siddiqui ZK, Wu AW, Kurbanova N, Qayyum R. Comparison of Hospital Consumer Assessment of Healthcare Providers and Systems patient satisfaction scores for specialty hospitals and general medical hospitals: confounding effect of survey response rate. J Hosp Med. 2014;9(9):590593.
  4. Elliott MN, Lehrman WG, Goldstein EH, et al. Hospital survey shows improvements in patient experience. Health Aff (Millwood). 2010;29(11):20612067.
  5. Centers for Medicare 2010:496829.
  6. Gascon‐Barre M, Demers C, Mirshahi A, Neron S, Zalzal S, Nanci A. The normal liver harbors the vitamin D nuclear receptor in nonparenchymal and biliary epithelial cells. Hepatology. 2003;37(5):10341042.
  7. Singer JD, Willett JB. Applied Longitudinal Data Analysis: Modeling Change and Event Occurrence. Oxford, United Kingdom: Oxford University Press; 2003.
  8. Gelman A, Hill J. Data Analysis Using Regression and Multilevel/Hierarchical Models. Cambridge, United Kingdom: Cambridge University Press; 2007.
  9. nlme: Linear and Nonlinear Mixed Effects Models [computer program]. Version R package version 2015;3:1121.
  10. Smith MA, Wright A, Queram C, Lamb GC. Public reporting helped drive quality improvement in outpatient diabetes care among Wisconsin physician groups. Health Aff (Millwood). 2012;31(3):570577.
  11. Wees PJ, Sanden MW, Ginneken E, Ayanian JZ, Schneider EC, Westert GP. Governing healthcare through performance measurement in Massachusetts and the Netherlands. Health Policy. 2014;116(1):1826.
  12. Werner R, Stuart E, Polsky D. Public reporting drove quality gains at nursing homes. Health Aff (Millwood). 2010;29(9):17061713.
  13. Bardach NS, Hibbard JH, Dudley RA. Users of public reports of hospital quality: who, what, why, and how?: An aggregate analysis of 16 online public reporting Web sites and users' and experts' suggestions for improvement. Agency for Healthcare Research and Quality. Available at: http://archive.ahrq.gov/professionals/quality‐patient‐safety/quality‐resources/value/pubreportusers/index.html. Updated December 2011. Accessed April 2, 2015.
  14. Kaiser Family Foundation. 2008 update on consumers' views of patient safety and quality information. Available at: http://kff.org/health‐reform/poll‐finding/2008‐update‐on‐consumers‐views‐of‐patient‐2/. Published September 30, 2008. Accessed April 2, 2015.
  15. Blumenthal D, Kilo CM. A report card on continuous quality improvement. Milbank Q. 1998;76(4):625648, 511.
  16. Shortell SM, Bennett CL, Byck GR. Assessing the impact of continuous quality improvement on clinical practice: what it will take to accelerate progress. Milbank Q. 1998;76(4):593624, 510.
  17. Rivers PA, Glover SH. Health care competition, strategic mission, and patient satisfaction: research model and propositions. J Health Organ Manag. 2008;22(6):627641.
  18. Kim SS, Kaplowitz S, Johnston MV. The effects of physician empathy on patient satisfaction and compliance. Eval Health Prof. 2004;27(3):237251.
  19. Villar LM, Campo JA, Ranchal I, Lampe E, Romero‐Gomez M. Association between vitamin D and hepatitis C virus infection: a meta‐analysis. World J Gastroenterol. 2013;19(35):59175924.
  20. Stelfox HT, Gandhi TK, Orav EJ, Gustafson ML. The relation of patient satisfaction with complaints against physicians and malpractice lawsuits. Am J Med. 2005;118(10):11261133.
  21. Rodriguez HP, Rodday AM, Marshall RE, Nelson KL, Rogers WH, Safran DG. Relation of patients' experiences with individual physicians to malpractice risk. Int J Qual Health Care. 2008;20(1):512.
  22. Cydulka RK, Tamayo‐Sarver J, Gage A, Bagnoli D. Association of patient satisfaction with complaints and risk management among emergency physicians. J Emerg Med. 2011;41(4):405411.
  23. Bogue RJ, Guarneri JG, Reed M, Bradley K, Hughes J. Secrets of physician satisfaction. Study identifies pressure points and reveals life practices of highly satisfied doctors. Physician Exec. 2006;32(6):3039.
  24. Lindenauer PK, Lagu T, Ross JS, et al. Attitudes of hospital leaders toward publicly reported measures of health care quality. JAMA Intern Med. 2014;174(12):19041911.
  25. Totten AM, Wagner J, Tiwari A, O'Haire C, Griffin J, Walker M. Closing the quality gap: revisiting the state of the science (vol. 5: public reporting as a quality improvement strategy). Evid Rep Technol Assess (Full Rep). 2012(208.5):1645.
  26. Fung CH, Lim YW, Mattke S, Damberg C, Shekelle PG. Systematic review: the evidence that publishing patient care performance data improves quality of care. Ann Intern Med. 2008;148(2):111123.
  27. Bardach NS, Cabana MD. The unintended consequences of quality improvement. Curr Opin Pediatr. 2009;21(6):777782.
  28. Powell AA, White KM, Partin MR, et al. Unintended consequences of implementing a national performance measurement system into local practice. J Gen Intern Med. 2012;27(4):405412.
  29. Riskin L, Campagna JA. Quality assessment by external bodies: intended and unintended impact on healthcare delivery. Curr Opin Anaesthesiol. 2009;22(2):237241.
Issue
Journal of Hospital Medicine - 11(2)
Page Number
105-110
Article Type
Display Headline
Effect of HCAHPS reporting on patient satisfaction with physician communication
Sections
Article Source

© 2015 Society of Hospital Medicine

Correspondence Location
Address for correspondence and reprint requests: Rehan Qayyum, MD, 960 East Third Street, Suite 208, Chattanooga, TN 37403; Telephone: 443‐762‐9267; Fax: 423‐778‐2611; E‐mail: [email protected]

Recall issued on U.S. Compounding sterile products

Article Type
Changed
Fri, 01/18/2019 - 15:15
Display Headline
Recall issued on U.S. Compounding sterile products

U.S. Compounding Inc. is issuing a recall on all sterile products distributed between March 14, 2015, and Sept. 9, 2015, according to a safety alert from the Food and Drug Administration.

The product recall applies to all aseptically compounded and packaged USC sterile products distributed to hospitals, patients, providers, and clinics because of FDA concerns over lack of sterility assurance. Because of the risk to any patients using a compromised product, USC is proceeding voluntarily with the recall.

Patients or providers who received sterile compounded products from USC within the recall period should stop using any unexpired products immediately, quarantine them until proper disposal is possible, and contact USC as soon as possible to coordinate a plan to return the products.

Patients should also contact their physicians if they have experienced any issues relating to the recalled product, and physicians should contact patients to inform them of the recall and to advise them to stop using the product.

The USC recall does not apply to any nonsterile compounded medication produced or distributed by USC, according to the FDA alert.

Find the full safety alert on the FDA website.

[email protected]

Publications
Topics
Article Type
Display Headline
Recall issued on U.S. Compounding sterile products
Article Source
