Nutrition source could improve platinum-based nanodrugs
A parenteral nutrition source can reduce the toxicity and increase the bioavailability of platinum-based anticancer nanodrugs, according to preclinical research published in Scientific Reports.
Many of the side effects of platinum-based drugs occur when they settle in healthy tissue.
To deliver these drugs in a more targeted way, researchers have created nanoscale delivery systems engineered to make the drugs accumulate at tumor sites.
However, tests of these nanodrugs show that between 1% and 10% of the drugs are delivered to the tumor site, with most of the remainder being diverted to the liver and spleen.
“The body’s immune system, especially the liver and spleen, has been one of the biggest stumbling blocks in developing nanoscale chemotherapy drug delivery systems,” said Chien Ho, PhD, of Carnegie Mellon University in Pittsburgh, Pennsylvania.
“When the drugs collect in those organs, they become less available to treat the cancer and can also cause toxicity.”
But Dr Ho and his colleagues have found evidence to suggest that Intralipid, a fat emulsion used as a parenteral nutrition source, can help prevent that.
While developing cellular nanotags to help detect organ rejection, Dr Ho noticed that Intralipid reduced the amount of nanoparticles that were being cleared by the liver and spleen by about 50%. As a result, the nanoparticles remained in the bloodstream for longer periods of time.
So he and his colleagues decided to see if Intralipid had the same effect on platinum-based anticancer nanodrugs.
In the newly published study, the researchers administered a single clinical dose of Intralipid to Sprague Dawley rats. One hour later, they administered a platinum-based chemotherapy drug incorporated into a nanoparticle to both the Intralipid-treated rats and untreated controls.
Twenty-four hours after the drug was administered, rats pretreated with Intralipid had experienced reduced accumulation of the platinum-based drug compared to controls.
Drug accumulation decreased by 20.4% in the liver, 42.5% in the spleen, and 31.2% in the kidney. Consequently, in these organs, the toxic side effects of the nanodrug were significantly decreased compared to controls.
Furthermore, Intralipid pretreatment allowed more of the drug to remain available and active in the body for longer periods of time.
After 5 hours, the drug’s bioavailability was 18.7% higher in Intralipid-treated rats than in controls. After 24 hours, bioavailability was 9.4% higher in Intralipid-treated rats than in controls.
The researchers believe this increased bioavailability will allow more of the drug to reach the tumor site and could perhaps allow clinicians to reduce the dosage needed to treat a patient. The team is now investigating the possibility of bringing this research to a clinical trial.
Letter to the Editor
I read Beck et al.'s article, "Redesigning an Inpatient Pediatric Service Using Lean to Improve Throughput Efficiency," with great interest.[1] Redesigning the rounding process using Lean not only created a standard workflow (seeing dischargeable patients first, holding an interdisciplinary huddle, and completing the discharge checklist at the bedside) but also added a second attending physician, thereby decreasing the workload. Stein et al. demonstrated that restructured floor-based patient care, including unit-based teams, interdisciplinary bedside rounds, unit-level performance reporting, and unit-level nurse and physician coleadership, improved workflow with an average of 12.9 patients per physician.[2] Another study showed that increased workload was associated with prolonged length of stay and recommended no more than 15 patients per physician per day.[3]

I want to draw attention to the number of patients per physician in these studies. Today's hospitalists in community hospitals are expected to see more than 18 patients per day, under additional pressure to decrease costs, readmission rates, length of stay, and time to discharge while increasing productivity and patient satisfaction. In Michtalik et al.'s survey, 40% of hospitalists reported exceeding their own safe patient numbers; regardless of assistance, physicians reported that they could safely see 15 patients per shift if their effort was 100% clinical.[4] Therefore, despite the outstanding results of the studies above, I am hesitant as to whether similar interventions would be as successful in community hospitals with higher patient loads. Further studies are needed to determine the optimal number of patients per hospitalist in nonteaching community hospitals. Another concern is how to adapt the successful examples of academic centers to nonteaching community hospitals, which lack interns.
Expecting hospitalists to take over the intern's role raises concerns about job satisfaction, especially given already high burnout rates.
Initiatives to eliminate waste and redesign the rounding process will likely become the norm in the coming years. We need to define center-specific patient-to-hospitalist ratios, with appropriate roles and responsibilities for hospitalists. What works in the presence of residents may not work for nonteaching community hospitals, and caution is warranted when restructuring hospital medicine.
- Beck et al. Redesigning an inpatient pediatric service using Lean to improve throughput efficiency. J Hosp Med. 2015;10(4):220–227.
- Stein et al. Reorganizing a hospital ward as an accountable care unit. J Hosp Med. 2015;10(1):36–40.
- Effect of hospitalist workload on the quality and efficiency of care. JAMA Intern Med. 2014;174(5):786–793.
- Michtalik et al. Impact of attending physician workload on patient care: a survey of hospitalists. JAMA Intern Med. 2013;173(5):375–377.
Tight glycemic control: Somewhat fewer CV events, same mortality
Tight glycemic control modestly reduced the rate of major cardiovascular events but didn’t improve mortality in an extended follow-up of a clinical trial involving 1,791 veterans with type 2 diabetes, which was published online June 3 in the New England Journal of Medicine.
At the conclusion of the treatment phase of the Veterans Affairs Diabetes Trial in 2008, the primary outcome – the rate of a first major CV event – was nonsignificantly lower with intensive glycemic control than with standard glycemic control. Researchers now report the findings after an additional 7.5 years of follow-up of 92% of the participants in that multicenter, unblinded, randomized controlled trial.
During the treatment phase of the study, median glycated hemoglobin level differed by 1.5 percentage points between patients who received intensive therapy (6.9%) and patients who received standard therapy (8.4%). During follow-up, this difference declined to only 0.2-0.3 percentage points. “Even with the support of a dedicated research team, only approximately half the participants [achieved] a glycated hemoglobin level of less than 7%,” said Dr. Rodney A. Hayward of the VA Center for Clinical Management Research, VA Ann Arbor (Mich.) Healthcare System, and his associates.
During extended follow-up, there were 253 major CV events in the group randomly assigned to intensive therapy and 288 in the group assigned to standard therapy. Tight glycemic control using a multidrug regimen was associated with a significant, though modest, 17% relative reduction in the primary composite outcome of heart attack, stroke, new or worsening congestive heart failure, death from CV causes, or amputation due to ischemic gangrene. This represents 8.6 CV events prevented per 1,000 person-years.
However, there was no evidence of any reduction in either cardiovascular or all-cause mortality. In addition, treatment effects were no different between patients at high and those at low cardiovascular risk, the investigators said (N. Engl. J. Med. 2015 June 3 [doi:10.1056/NEJMoa1414266]).
“In the absence of a reduction in total mortality, a small to moderate reduction in the rate of CV events needs to be weighed against potential harm due to overly aggressive care and the burden, long-term safety profile, and side effects of treatment, including weight gain and hypoglycemia,” they added.
Key clinical point: Tight glycemic control cut the rate of major cardiovascular events by 17% but didn’t improve mortality in patients with type 2 diabetes.
Major finding: Compared with standard glycemic control, tight glycemic control prevented 8.6 CV events per 1,000 person-years.
Data source: Extended follow-up of an unblinded, multicenter, randomized, controlled trial involving 1,791 veterans with type 2 diabetes.
Disclosures: This study was supported by the VA Cooperative Studies Program, the National Institute of Diabetes and Digestive and Kidney Diseases, and the National Institutes of Health. Dr. Hayward reported having no relevant financial disclosures; two of his associates reported ties to Amgen, AstraZeneca, Merck, and Novo Nordisk.
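To put the major finding in clinical terms, the reported absolute benefit of 8.6 CV events prevented per 1,000 person-years can be inverted to estimate the treatment exposure needed to avert one event. This back-of-envelope conversion is our illustration, not a figure from the paper:

```python
# Convert the reported absolute rate difference (8.6 CV events
# prevented per 1,000 person-years of intensive therapy) into
# person-years of treatment per event averted.
# Simple arithmetic on the published figure; the ~116 result
# is our derivation, not stated in the trial report.

events_prevented_per_1000_py = 8.6

person_years_per_event = 1000 / events_prevented_per_1000_py
print(round(person_years_per_event))  # ~116 person-years per event averted
```

In other words, on the order of a hundred patient-years of intensive therapy were needed to prevent one major CV event, which frames the authors' caution about weighing this benefit against treatment burden.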
No Advantage to Routine Thrombectomy Prior to Percutaneous Coronary Intervention for STEMI
Clinical question: Does the use of routine thrombectomy for patients presenting with ST-segment elevation myocardial infarction improve outcomes?
Bottom line: For patients with ST-segment elevation myocardial infarction (STEMI) undergoing primary percutaneous coronary intervention (PCI), the routine use of manual thrombectomy improves some electrocardiographic and angiographic outcomes, but ultimately does not result in improved cardiovascular morbidity or mortality. Moreover, thrombectomy may increase the risk of stroke. (LOE = 1b)
Reference: Jolly SS, Cairns JA, Yusuf S, et al, for the TOTAL Investigators. Randomized trial of primary PCI with or without routine manual thrombectomy. N Engl J Med. 2015;372(15):1389–1398.
Study design: Randomized controlled trial (nonblinded)
Funding source: Industry + govt
Allocation: Concealed
Setting: Inpatient (any location) with outpatient follow-up
Synopsis
Manual thrombectomy with aspiration of thrombus prior to PCI is thought to prevent distal embolization and improve microvascular perfusion. Whether this results in clinical benefit is unclear. In this study, the investigators randomized patients presenting with STEMI to undergo either routine thrombus aspiration followed by PCI or PCI alone. Those who had a previous history of coronary-artery bypass grafting or those who had received fibrinolytics were excluded. The 2 groups were balanced at baseline, with almost 80% of patients in each group noted to have a high thrombus burden. A modified intention-to-treat analysis was used that included only those patients who actually underwent PCI for the index STEMI.
Although electrocardiographic and angiographic outcomes improved with thrombectomy (eg, increased ST-segment resolution, decreased distal embolization), no clinical benefit was found. Specifically, for the primary outcome of cardiovascular death, recurrent myocardial infarction, cardiogenic shock, or New York Heart Association class IV heart failure within 180 days of randomization, there were no significant differences detected between the 2 groups. The components of the composite outcome taken individually were also similar in each group. These results persisted across prespecified analyses of the as-treated population, per-protocol population, and the subgroup with high thrombus burden. Additionally, patients in the thrombectomy group were more likely to have a stroke within 30 days and 180 days, although the number of events was relatively small (for 30 days: 0.7% vs 0.3%, P = .02; for 180 days: 1% vs 0.5%, P = .002).
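The reported 30-day stroke rates can also be expressed as a number needed to harm (NNH). This is simple arithmetic on the published percentages; the NNH itself is our derivation, not a figure from the trial:

```python
# Number needed to harm (NNH) from the reported 30-day stroke
# rates: 0.7% with thrombectomy vs 0.3% with PCI alone.
# The absolute risk increase is 0.4 percentage points, so its
# reciprocal gives patients treated per one extra stroke.
# Derived here for illustration; not stated in the paper.

rate_thrombectomy = 0.007  # 0.7% stroke rate at 30 days
rate_pci_alone = 0.003     # 0.3% stroke rate at 30 days

nnh = 1 / (rate_thrombectomy - rate_pci_alone)
print(round(nnh))  # ~250 patients per additional stroke
```

That is, roughly one additional stroke per 250 patients treated with routine thrombectomy at 30 days, a small but, given the absence of clinical benefit, clinically relevant harm signal.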
Dr. Kulkarni is an assistant professor of hospital medicine at Northwestern University in Chicago.
Temporary IVC Filter Added to Anticoagulation Does Not Decrease Pulmonary Embolism Recurrence Risk
Clinical question: Does the insertion of a retrievable inferior vena cava filter in addition to anticoagulation prevent the recurrence of pulmonary embolism in high-risk patients?
Bottom line: For patients with pulmonary embolism (PE) who are at high risk of recurrence or who have poor cardiopulmonary reserve, the addition of a retrievable inferior vena cava (IVC) filter plus anticoagulation does not decrease the risk of recurrent PE as compared with anticoagulation alone. Although this study was underpowered to detect a difference if one truly exists, the authors postulate that such a difference would likely be small and thus clinically irrelevant. (LOE = 1b-)
Reference: Mismetti P, Laporte S, Pellerin O, et al, for the PREPIC2 Study Group. Effect of a retrievable inferior vena cava filter plus anticoagulation vs anticoagulation alone on risk of recurrent pulmonary embolism. JAMA. 2015;313(16):1627–1635.
Study design: Randomized controlled trial (nonblinded)
Funding source: Government
Allocation: Concealed
Setting: Inpatient (any location) with outpatient follow-up
Synopsis
The utility of retrievable IVC filters added to anticoagulation for the prevention of recurrent PE is unknown. This study included adults who were hospitalized for acute PE associated with lower extremity venous thrombosis and had one additional criterion for severity (older than 75 years, active cancer, chronic cardiopulmonary conditions, recent stroke with leg paralysis, iliocaval or bilateral venous thromboses, or evidence of right ventricular dysfunction or myocardial injury).
The patients were randomized, using concealed allocation, to receive a filter plus anticoagulation or anticoagulation alone. Both groups were anticoagulated for at least 6 months and filters were retrieved at 3 months. More patients in the filter group had chronic respiratory failure at baseline but the groups were otherwise well matched. Analysis was by intention to treat.
At 3 months, the rate of recurrent PE did not differ between the 2 groups (3% in filter group vs 1.5% in control group; P = .50; RR with filter 2.00; 95% CI 0.51-7.89). Additionally, there were no differences detected in venous thromboembolism recurrence, major bleeding, or death at either 3 or 6 months. Complications in the filter group included access site hematomas, filter thromboses, and filter retrieval failures. The authors based their analysis on an expected PE recurrence rate of 8% in the control group but the actual rate was much lower. Although this results in an underpowered study, the authors note that the point estimate of the relative risk still favors the control group and if filters did confer a small advantage it would likely not be clinically meaningful.
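The reported relative risk follows directly from the recurrence rates in the two arms. As a quick arithmetic check (a minimal sketch using only the rates as reported above; the underlying patient counts are not restated here):

```python
# Recurrent PE rates at 3 months, as reported in PREPIC2
rate_filter = 0.03    # 3% in the filter-plus-anticoagulation group
rate_control = 0.015  # 1.5% in the anticoagulation-only group

# Relative risk with the filter is the ratio of the two rates
relative_risk = rate_filter / rate_control
print(round(relative_risk, 2))  # 2.0, matching the reported RR of 2.00
```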
Dr. Kulkarni is an assistant professor of hospital medicine at Northwestern University in Chicago.
Single agent can treat resistant MM
© ASCO/Todd Buchanan
CHICAGO—The anti-CD38 monoclonal antibody daratumumab can be effective as a stand-alone therapy for some heavily pretreated patients with multiple myeloma (MM), results of an ongoing phase 2 trial suggest.
The study, known as SIRIUS or MMY2002, included more than 100 patients who had received 3 or more prior lines of therapy.
Roughly 30% of these subjects responded to daratumumab, with a median response duration of about 7 months.
The median progression-free survival was close to 4 months, and the estimated 1-year overall survival rate was 65%.
Serious adverse events (AEs) occurred in 30% of patients.
“These findings speak to the potential of daratumumab as an effective and tolerable option for people with multiple myeloma who have exhausted other available treatment options,” said study investigator Sagar Lonial, MD, of Emory University School of Medicine in Atlanta, Georgia.
Dr Lonial presented these findings at the 2015 ASCO Annual Meeting (abstract LBA8512). The research was funded by Janssen Research & Development, the company developing daratumumab.
In part 1 of this study, 34 patients were randomized to receive either 8 mg/kg of daratumumab once every 4 weeks or 16 mg/kg once a week for 8 weeks, then once every 2 weeks for 16 weeks and once every 4 weeks after that, until disease progression or unacceptable toxicity.
In part 2, an additional 90 patients were enrolled to receive 16 mg/kg of daratumumab on the same dosing schedule as in part 1.
Dr Lonial reported results for all patients in parts 1 and 2 who received 16 mg/kg of daratumumab. These 106 patients had received a median of 5 prior lines of therapy, including a proteasome inhibitor and an immunomodulatory drug.
According to an independent review committee, 29.2% of patients responded to daratumumab. Eighteen patients had a partial response, 10 had a very good partial response, and 3 had a stringent complete response. The median duration of response was 7.4 months.
“It is particularly noteworthy to see this level of response with a single-agent in this heavily pretreated population,” Dr Lonial said. “Ninety-seven percent of patients in this study were refractory to their last line of therapy, and 95% were double-refractory to both a [proteasome inhibitor] and an [immunomodulatory drug].”
The median overall survival has not been reached, and the estimated 1-year overall survival rate is 65%. The median progression-free survival was 3.7 months.
After a median follow-up of 9.4 months, 45.2% of responders remain on therapy.
The most common AEs were fatigue (39.6%), anemia (33%), nausea (29.2%), thrombocytopenia (25.5%), neutropenia (22.6%), back pain (22.6%), and cough (20.8%).
Thirty percent of patients experienced serious AEs, and 4.7% of patients discontinued treatment due to AEs, none of which were considered drug-related.
Infusion-related reactions (IRRs) were reported in 42.5% of patients and were predominantly grade 1 or 2 (4.7% grade 3; no grade 4). The reactions occurred mainly during the first infusion.
The most common IRRs included nasal congestion (12%), throat irritation (7%), cough (6%), dyspnea (6%), chills (6%), and vomiting (6%)—all of which were treated with standard of care and slower infusion rates.
Cancer survivors mirror spouses’ moods
chemotherapy
Photo by Rhoda Baer
Cancer survivors’ moods are impacted—both positively and negatively—by their spouses’ moods, according to research published in Cancer Epidemiology, Biomarkers & Prevention.
In the study, cancer survivors whose spouses reported depressed moods were more likely to be depressed after about a year of follow-up, and survivors whose spouses reported better mental and physical health-related quality of life (HRQOL) were less likely to be depressed.
However, survivors’ moods did not have the same impact on their spouses.
“We were surprised that the effects of the spouses on the survivors were so much larger in this study than the effect of the survivors on their spouses,” said study author Kristin Litzelman, PhD, of the National Cancer Institute in Bethesda, Maryland. “We expected to see a more reciprocal relationship.”
Dr Litzelman and her colleagues conducted this research in an attempt to understand how cancer survivors and their families influence one another. The team hoped to identify ways to improve the healthcare both parties receive and thereby improve their health and well-being.
The researchers analyzed data from 910 cancer patients and their spouses, comparing them to 910 couples without any kind of cancer-related health problem.
The team used statistical models to assess how each spouse’s quality of life or depression at one time point was associated with his or her partner’s risk of depression around 11 months later. The researchers took into account a person’s previously reported mood, demographic characteristics, and other factors.
The results showed that, when spouses reported feeling depressed, cancer survivors were about 4 times more likely to report being depressed 11 months later (odds ratio [OR]=4.27). This association was stronger among female cancer survivors (OR=9.49) than male survivors (OR=3.98).
Cancer survivors whose spouses reported better HRQOL had a 30% decrease in depressed mood per 10-point improvement in HRQOL score. The ORs were 0.72 for mental health and 0.68 for physical health. The associations between spousal HRQOL and survivor depressed mood were similar for male and female survivors.
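The "30% decrease per 10-point improvement" is how an odds ratio below 1 is typically read: an OR of 0.72 corresponds to roughly a 28% reduction in the odds of depressed mood, and 0.68 to roughly 32%. A minimal sketch of that conversion, using only the ORs as reported in the study:

```python
# Odds ratios per 10-point improvement in spousal HRQOL, as reported
or_mental = 0.72
or_physical = 0.68

# An OR below 1 maps to a percent reduction in odds: (1 - OR) * 100
for label, odds_ratio in [("mental", or_mental), ("physical", or_physical)]:
    reduction = (1 - odds_ratio) * 100
    print(f"{label}: ~{reduction:.0f}% decrease in odds of depressed mood")
# mental ~28%, physical ~32%, consistent with the reported ~30% decrease
```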
The researchers noted that cancer survivors’ moods did not have a significant impact on their spouses’ risk of depressed mood 11 months later.
The team also saw no mood associations in couples without any cancer-related health problems.
“This finding certainly needs to be backed up by other studies, but it highlights the importance of family well-being in cancer survivor outcomes,” Dr Litzelman said. “Our research highlights that spouses need to take care of themselves, not just for their own sake, but also for the sake of the cancer survivor.”
“Our findings also suggest that, when caring for cancer survivors, clinicians may want to assess the well-being of spousal caregivers. Future research could test whether including caregivers in the survivorship care plan might help to improve outcomes for both caregivers and for cancer survivors.”
CDS systems often can’t tell if imaging is appropriate
Photo courtesy of NIH
Tools that help physicians decide whether to use diagnostic imaging can help reduce the use of unnecessary tests.
But new research suggests these tools may not be able to determine which tests are necessary most of the time.
The tools in question are computerized clinical decision support (CDS) systems, which match a patient’s characteristics against appropriateness criteria to produce algorithmic treatment recommendations.
In a study published in JAMA, CDS systems did increase orders of imaging tests rated as “appropriate.”
However, the systems were unable to assign appropriateness ratings for the majority of orders, either because no appropriateness criteria existed for a given test or because the systems could not match the order to existing criteria.
“The increase in orders rated as appropriate is promising, but the number of tests that were not rated indicates there is room for further improvement of these tools,” said study author Peter S. Hussey, PhD, of the RAND Corporation in Boston, Massachusetts.
Study details
Dr Hussey and his colleagues used data from the Medicare Imaging Demonstration to evaluate the relationship of CDS system use with the proportion of imaging orders matched to appropriateness criteria, the appropriateness of ordered images, and the proportion of orders that changed after feedback.
The team compared 2 time periods during which clinicians used computerized radiology order entry systems and CDS systems for MRI, CT, and nuclear medicine procedures.
During a 6-month baseline period, the CDS systems tracked whether orders were linked with appropriateness criteria but did not provide clinicians with feedback on the appropriateness of orders.
During the 18-month intervention period, the CDS systems provided feedback indicating whether the order was linked to appropriateness criteria and, if so, the appropriateness rating, any recommendations for alternative orders, and a link to documentation supporting each rating.
National medical specialty societies developed the appropriateness criteria using expert panels that reviewed evidence and completed a structured rating process. The same appropriateness criteria were loaded into the CDS systems tools for all participating clinicians.
In all, 3340 clinicians placed 117,348 orders for advanced diagnostic imaging procedures.
Results
The CDS systems could not match most orders to appropriateness criteria. The systems did not identify relevant criteria for 63.3% of orders made during the baseline period and 66.5% of orders made during the intervention period.
Among the orders the CDS systems could rate, 73.7% of those placed during the baseline period and 81% of those placed during the intervention period were rated appropriate, while 11.1% and 6.4%, respectively, were rated inappropriate.
Of the orders that were initially rated as inappropriate, 4.8% were changed, and 1.9% were canceled.
When the CDS systems suggested an alternative for inappropriate orders, 9.9% of the orders were changed, and 0.4% were canceled. When the systems did not provide an alternative, 1.4% of inappropriate orders were changed, and 2.8% were canceled.
“In response to these findings, we recommend that clinical decision support efforts should focus on tools that help clinicians perform their work more efficiently and effectively,” said study author Katherine Kahn, MD, of the University of California, Los Angeles.
“We need a more comprehensive set of evidence-based guidelines that cover a greater proportion of advanced imaging orders for Medicare patients, and provide better methods for communicating feedback to clinicians.”
Photo courtesy of NIH
Tools that help physicians decide whether to use diagnostic imaging can help reduce the use of unnecessary tests.
But new research suggests these tools may not be able to determine which tests are necessary most of the time.
The tools in question are computerized clinical decision support (CDS) systems, which match a patient’s characteristics against appropriateness criteria to produce algorithmic treatment recommendations.
In a study published in JAMA, CDS systems did increase orders of imaging tests rated as “appropriate.”
However, the systems were not able to assign appropriateness ratings for a majority of tests because no appropriateness criteria were available for a particular test, or because the systems themselves were not able to find matching criteria.
“The increase in orders rated as appropriate is promising, but the number of tests that were not rated indicates there is room for further improvement of these tools,” said study author Peter S. Hussey, PhD, of the RAND Corporation in Boston, Massachusetts.
Study details
Dr Hussey and his colleagues used data from the Medicare Imaging Demonstration to evaluate the relationship of CDS system use with the proportion of imaging orders matched to appropriateness criteria, the appropriateness of ordered images, and the proportion of orders that changed after feedback.
The team compared 2 time periods during which clinicians used computerized radiology order entry systems and CDS systems for MRI, CT, and nuclear medicine procedures.
During a 6-month baseline period, the CDS systems tracked whether orders were linked with appropriateness criteria but did not provide clinicians with feedback on the appropriateness of orders.
During the 18-month intervention period, the CDS systems provided feedback indicating whether the order was linked to appropriateness criteria and, if so, the appropriateness rating, any recommendations for alternative orders, and a link to documentation supporting each rating.
National medical specialty societies developed the appropriateness criteria using expert panels that reviewed evidence and completed a structured rating process. The same appropriateness criteria were loaded into the CDS tools for all participating clinicians.
In all, 3340 clinicians placed 117,348 orders for advanced diagnostic imaging procedures.
Results
The CDS systems could not match most orders to appropriateness criteria. The systems did not identify relevant criteria for 63.3% of orders made during the baseline period and 66.5% of orders made during the intervention period.
Of the orders CDS systems could rate, 73.7% ordered during the baseline period and 81% ordered during the intervention period were rated as appropriate, and 11.1% and 6.4%, respectively, were rated inappropriate.
Of the orders that were initially rated as inappropriate, 4.8% were changed, and 1.9% were canceled.
When the CDS systems suggested an alternative for inappropriate orders, 9.9% of the orders were changed, and 0.4% were canceled. When the systems did not provide an alternative, 1.4% of inappropriate orders were changed, and 2.8% were canceled.
“In response to these findings, we recommend that clinical decision support efforts should focus on tools that help clinicians perform their work more efficiently and effectively,” said study author Katherine Kahn, MD, of the University of California, Los Angeles.
“We need a more comprehensive set of evidence-based guidelines that cover a greater proportion of advanced imaging orders for Medicare patients, and provide better methods for communicating feedback to clinicians.”
Scientists uncover structure of TOR complex 2
Photo courtesy of University of Geneva
A group of researchers has developed a new tool to study the structure and function of target of rapamycin complex 2 (TORC2), which helps explain why rapamycin cannot access the TOR protein in this complex.
TOR is essential for the growth of normal cells but is hyperactive in tumor cells. Rapamycin is an immunosuppressant and anticancer agent that inactivates TOR in TORC1 but not in TORC2.
“In order to more easily study TORC2, we wanted to learn how to make this complex sensitive to rapamycin,” said Robbie Loewith, PhD, of the University of Geneva in Switzerland.
So Dr Loewith and a team of scientists from Switzerland, France, and the UK set out to elucidate how TORC2 works. The team reported their findings in Molecular Cell.
Using crosslinking-mass spectrometry and electron microscopy, they discovered that TORC2 has 3 features in common with TORC1: a rhomboid shape, 2-fold symmetry, and a central cavity delimited by the interface of its protein chains.
The 2 complexes differ markedly, however, in overall size, surface area of the interface, and the volume and shape of the central cavity.
By determining the structure of TORC2, the team could observe which subunit within TORC2 was obstructing the rapamycin binding site on TOR.
“By deleting part of this subunit, we generated a variant of TORC2 sensitive to rapamycin,” said Manoel Prouteau, PhD, also of the University of Geneva.
This allowed the researchers to study how TORC2 acts to stimulate cell growth.
Now, they hope to identify a specific inhibitor of endogenous TORC2 that could also be an effective anticancer agent.
“Our discovery that TORC2 inhibition alone is sufficient to block the cell cycle suggests that mTORC2-specific inhibitors may provide new and potentially better therapeutic alternatives,” the team concluded.
Reduction in Iatrogenic Pneumothorax
Iatrogenic pneumothorax (IAP) is a complication of invasive procedures that is associated with substantial morbidity and some mortality.[1] IAP is often avoidable, and in many cases can be prevented through adherence to evidence-based guidelines and procedural techniques known to reduce the incidence of IAP.[2] IAP may occur with a subclavian (SC) or internal jugular (IJ) central venous catheter (CVC) insertion, but is more frequently associated with the SC approach.[3] Ultrasound guidance during IJ CVC insertion is associated with a lower risk as compared to guidance by anatomical landmarks.[4, 5] Thoracentesis is another bedside procedure known to cause IAP; its risk can likewise be reduced with the use of ultrasound guidance.[6]
Including simulation in training for CVC insertion has been demonstrated in meta-analyses to improve both learner outcomes, including simulator performance and perceived confidence, and patient outcomes, including fewer failed CVC attempts and reduced incidence of IAP.[7] Even brief simulation workshops lasting less than 2 hours can improve patient safety during CVC insertion.[8]
The implementation of ultrasound-based simulation and improved adherence to the actual use of ultrasound at the bedside can be motivated by tying competency-based educational objectives (eg, CVC insertion) to clinical outcomes (ie, rates of IAP) and tracking both as part of a continuous quality-improvement cycle.[9] Adherence to best practices for CVC insertion can also be improved through standardizing hospital-wide policies and hands-on training.[10] Involving many stakeholders, including nurses, physicians, nurse practitioners, and physician assistants, in a multidisciplinary team has been shown to help alter entrenched behaviors and reduce the incidence of central line-associated bloodstream infections through long-term adherence to evidence-based interventions.[11]
LOCAL PROBLEM
The Agency for Healthcare Research and Quality (AHRQ) has designed Patient Safety Indicators (PSIs), which use administrative data to flag potentially preventable complications such as IAP.
Our hospital is a member of the University HealthSystem Consortium (UHC), which benchmarks quality measures, including PSIs, across member academic medical centers.
Although administrative data have known limitations, the PSI can highlight areas where quality-improvement efforts might be best directed. In 2005 and 2006, our hospital was ranked within the lowest UHC performance quartile for all-cause IAP PSI.
During FY 2006 (September 2005–August 2006), root-cause analysis of IAP cases at our hospital found that CVC insertion (40%) was the most common procedure associated with IAP, with SC insertion causing 69% of CVC-associated IAP. Other common procedures associated with IAP were operative/pacemaker (30%), thoracentesis (25%), and ventilator-associated (5%). Ultrasound was not used in 2/5 cases of IJ CVC placement and 3/5 thoracentesis cases. Only 44% of CVC insertions had a procedure note.
Intended Improvement/Study Question
Our team set out to plan and implement a set of multifaceted interventions within 90 days. The short‐term goal was a 50% reduction in the CVC IAP and all‐cause IAP rate within 18 months, and the long‐term goal was sustained reduction of CVC IAP and all‐cause IAP rate.
METHODS
The format of this article is based on the Standards for Quality Improvement Reporting Excellence (SQUIRE) guidelines for the reporting of studies on the effectiveness of quality-improvement interventions.[14]
Setting
Stanford University Medical Center is an academic medical center with 465 beds and over 25,000 inpatient admissions per year, providing both general acute care services and tertiary medical care. Residents perform CVC bedside procedures when central venous access is needed, in the intensive care unit (ICU), operating room (OR), and inpatient units. Prior to this project, ultrasound equipment was only available in the emergency department (ED) and ICUs. There was no formal CVC procedure supervision policy, CVC training curriculum, or procedure note template for documentation of CVC insertion.
Planning the Interventions
A multidisciplinary quality‐improvement team met weekly during the 90‐day design period from January 2007 to March 2007. Our team included representatives from the departments of medicine, anesthesia and critical care, surgery, nursing, and emergency medicine. We also partnered with our institution's clinical and administrative leaders, experts in simulation, and the hospital quality department.
We hypothesized that a standardized set of education and training interventions promoting ultrasound‐guided IJ CVC insertion as the method of choice at our hospital would significantly reduce our rate of CVC‐associated IAP. Our multifaceted intervention included: (1) clinical and documentation standards based on evidence, (2) cognitive aids, (3) simulation training, (4) purchase and deployment of ultrasound equipment, and (5) feedback to clinical services.
Our team followed the define, measure, analyze, improve, control (DMAIC) framework.[15] We set interval goals with target completion dates throughout the 90‐day period, identified owners of each goal, and tracked progress with a shared spreadsheet.
In the 90‐day intervention, we accomplished the following: (1) conducted root‐cause analysis of IAP cases for fiscal year 2006, (2) created clinical and documentation standards around CVC placement, (3) created cognitive aids and procedure note templates, (4) developed simulation training courses, and (5) requested purchase of additional ultrasound equipment.
Data Collection
To evaluate our progress in reducing the rates of IAP, we tracked the incidence of IAP using UHC and AHRQ PSI methodology. In collaboration with our hospital's quality department, we manually reviewed every PSI-identified case of IAP. This review focused on identifying whether or not pneumothorax actually occurred, and whether it was associated with CVC insertion. For those associated with CVC, data were collected for patient location and service, the procedure site, whether ultrasound was used, whether a chest tube was required, and the final disposition of the patient.
Demographic data (age, gender, case mix index [CMI]) shown in Table 1 were obtained through MIDAS+ Solutions (Tucson, Arizona), a proprietary database that contains healthcare management coded data. Total hospital CVC insertion rates were calculated using International Classification of Diseases, Ninth Revision (ICD-9) codes 38.93 and 38.97. ICU central line-associated bloodstream infection (CLABSI) data were obtained from internal collection by our infection control team. The number and location of CVCs placed in the ICU were obtained from nursing flow sheets in our electronic medical record (EMR). Cost information was provided by our finance department using internal accounting.
Table 1. Demographics of patients with CVC insertion and all inpatients

| Patients with CVC insertion | 2006 | 2007 | 2008 | 2009 | 2010 | 2011 | 2012 | 2013 | 2014 |
|---|---|---|---|---|---|---|---|---|---|
| Age, y (mean) | 55.0 | 55.5 | 55.0 | 57.0 | 56.5 | 58.5 | 57.5 | 59.0 | 58.5 |
| % female | 47.0 | 49.5 | 47.0 | 48.8 | 46.2 | 46.1 | 45.7 | 46.2 | 45.7 |
| Case-mix index | 3.08 | 3.35 | 3.21 | 3.40 | 3.71 | 3.91 | 3.92 | 3.92 | 4.08 |
| Total no. of CVCs/year* | 1,593 | 1,141 | 1,589 | 2,250 | 2,441 | 2,774 | 2,754 | 2,722 | 2,845 |
| No. of CVCs/year in ICU | NA | NA | NA | 1,502 | 1,357 | 1,345 | 1,316 | 1,421 | 1,590 |
| No. of subclavians/year in ICU | NA | NA | NA | 167 | 75 | 70 | 83 | 75 | 97 |
| No. of IJs/year in ICU | NA | NA | NA | 898 | 773 | 681 | 677 | 713 | 876 |
| No. of femorals/year in ICU | NA | NA | NA | 212 | 152 | 203 | 171 | 198 | 206 |
| No. of PICCs/year in ICU | NA | NA | NA | 225 | 357 | 391 | 385 | 435 | 411 |

| Patients with CVC insertion | Preintervention (2006) | Postintervention (2008–2014) | P value |
|---|---|---|---|
| Age, y (mean) | 55.2 | 58.7 | <0.0001 |
| % female | 47.0% | 46.4% | 0.642 |
| Case-mix index | 3.08 | 3.73 | <0.0001 |
| CVC insertion rate | 8.1% | 11.4% | <0.0001 |

| All inpatients | 2006 | 2007 | 2008 | 2009 | 2010 | 2011 | 2012 | 2013 | 2014 |
|---|---|---|---|---|---|---|---|---|---|
| Age, y (mean) | 57.1 | 57.2 | 56.8 | 57.2 | 57.5 | 58.0 | 58.0 | 57.9 | 58.3 |
| % female | 51.6 | 51.2 | 52.4 | 51.7 | 51.1 | 51.5 | 50.3 | 49.9 | 50.1 |
| Case-mix index | 1.86 | 1.98 | 1.96 | 1.99 | 1.96 | 2.02 | 2.03 | 2.07 | 2.23 |

| All inpatients | Preintervention (2006) | Postintervention (2008–2014) | P value |
|---|---|---|---|
| Age, y (mean) | 57.1 | 57.6 | <0.01 |
| % female | 51.6% | 50.9% | 0.07 |
| Case-mix index | 1.86 | 2.03 | 0.13 |

| Central line-associated bloodstream infections per 1,000 central line days | Preintervention | Postintervention | P value |
|---|---|---|---|
| Short term (2006 vs 2008) | 1.8 | 0.60 | 0.004 |
| Long term (2006 vs 2008–2014) | 1.8 | 0.68 | <0.0001 |
The project was granted a Notice of Determination of Approval from the Stanford Administrative Panels for the Protection of Human Subjects (institutional review board).
Methods of Evaluation/Analysis
For the purpose of this analysis, the preintervention period was defined as January 1, 2006 through December 31, 2006, our first year of IAP case review. We defined the intervention period as January 1, 2007 through December 31, 2007, during which we planned and implemented hospital‐wide standardization of CVC insertion practices and incorporated CVC insertion training simulation into resident orientation in July 2007. The postintervention period was defined as January 1, 2008 through December 31, 2014.
All statistical analyses were performed using Stata version 12.1 (StataCorp, College Station, TX). Chi-square tests were used to determine statistical differences in pre- versus postintervention patient demographic data (age, gender, CMI), CVC insertion rates, and CLABSI rates. Because IAP is a rare event, a statistical process control g-chart was created using QI Macros (KnowWare International, Inc., Denver, CO) to show the number of CVC procedures between IAP events. Chi-square and Fisher exact tests were used to determine statistical differences in CVC anatomic location and use of ultrasound pre- and postintervention. A 2-sided z test for a difference in proportions was used to determine statistical differences in CVC-related IAP rate and all-cause IAP rate pre- and postintervention.
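The 2-sided test for a difference in proportions can be sketched in a few lines. Note the 2008 case count below (about 7 IAPs among 1,589 insertions) is back-calculated from the reported 0.44% rate for illustration; it is not taken directly from the paper.

```python
from math import sqrt, erfc

def two_proportion_z_test(x1: int, n1: int, x2: int, n2: int) -> tuple[float, float]:
    """Two-sided z test for a difference in proportions, using the pooled variance."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)                      # pooled proportion under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = erfc(abs(z) / sqrt(2))                    # two-sided normal tail probability
    return z, p_value

# 2006 vs 2008 CVC-associated IAP: 15/1,593 (0.94%) vs ~7/1,589 (0.44%, inferred count)
z, p = two_proportion_z_test(15, 1593, 7, 1589)
print(f"z = {z:.2f}, two-sided P = {p:.3f}")  # P ≈ 0.088, matching the reported short-term value
```

Reassuringly, this reproduces the paper's short-term P value of 0.088 from the published rates alone.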
Measuring Adherence to Intervention
Location of CVC Placement and Ultrasound Guidance Pre‐ Versus Postintervention
We utilized the Stanford Clinical Informatics Center (SCCI) services for obtaining counts of patients. Custom queries were performed on SCCI's Stanford Translational Research Integrated Database Environment (STRIDE) platform[16] to search Stanford Hospital electronic health records for patients. This search primarily involved getting counts for the number of patients with clinical notes that contained the keywords of interest. To identify documentation for placement of CVC from 2006 to 2014, procedure or operative notes containing the words “central line” or “CVC” were counted. Further subcounts were obtained by searching for additional keywords such as “PICC” (peripherally inserted central catheter), “femoral,” “jugular,” “subclavian,” and “ultrasound.”
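The counting logic can be illustrated with a short sketch. The note texts and the matching rule (a simple case-insensitive substring search) are assumptions for illustration, not the actual STRIDE queries.

```python
KEYWORDS = ["picc", "femoral", "jugular", "subclavian", "ultrasound"]

def count_cvc_notes(notes: list[str]) -> dict[str, int]:
    """Count notes documenting a CVC, plus per-keyword subcounts among those notes."""
    cvc_notes = [n.lower() for n in notes
                 if "central line" in n.lower() or "cvc" in n.lower()]
    counts = {"cvc_total": len(cvc_notes)}
    for kw in KEYWORDS:
        counts[kw] = sum(kw in n for n in cvc_notes)
    return counts

# Hypothetical note texts for illustration
notes = [
    "Procedure note: right internal jugular central line placed under ultrasound guidance.",
    "Operative note: subclavian CVC inserted without complication.",
    "Progress note: patient ambulating, no acute events.",
]
print(count_cvc_notes(notes))  # counts 2 CVC notes; 1 each for jugular, subclavian, ultrasound
```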
Adherence to Intervention in the ICU in 2014
A total of 100 charts were reviewed from patients in our medical and surgical ICUs with a CVC in 2014 to evaluate the current trend of central line placement and the sustainability of our intervention. Fifty charts were initially randomly selected from the ICU cohort. For patients who had multiple lines placed, only the first line was reviewed. Because the initial audit did not provide enough SC lines and we wanted to review more IJ lines, we randomly selected an additional 25 patients with SC lines and 25 patients with IJ lines to review. The following was collected during chart review: primary team, location of line placement, use of ultrasound, use of the standard procedure template, supervision, level of training of the supervisor, and level of training of the staff who performed the procedure.
Outcomes
The rate of CVC-associated IAP was calculated as the total number of IAPs attributed to CVCs divided by the total number of CVCs inserted, as determined by ICD-9 codes 38.93 and 38.97. The total IAP rate was calculated as the total number of IAPs per 1,000 discharges.
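The two outcome definitions translate directly into code; a minimal sketch (the 2006 figures are from the Results section below, while the discharge count in the second call is a hypothetical figure for illustration):

```python
def cvc_iap_rate(iap_cases: int, cvc_insertions: int) -> float:
    """CVC-associated IAP rate, as a percentage of CVC insertions."""
    return 100 * iap_cases / cvc_insertions

def all_cause_iap_rate(iap_cases: int, discharges: int) -> float:
    """All-cause IAP rate per 1,000 discharges."""
    return 1000 * iap_cases / discharges

# 2006: 15 CVC-associated IAPs among 1,593 insertions
print(round(cvc_iap_rate(15, 1593), 2))        # 0.94, matching the reported rate

# 2006: 26 all-cause IAPs; 20,000 discharges is a hypothetical denominator
print(round(all_cause_iap_rate(26, 20000), 2))
```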
RESULTS
Interventions
Our team began the intervention in early 2007 with promotion of ultrasound‐guided IJ catheterization. Clinical exceptions included: (1) trauma or code situations where access to the neck is limited, (2) suspected or confirmed neck injuries, (3) presence of a tracheostomy, and (4) bilateral internal jugular sites unsuitable for catheterization.
Our hospital adopted new formal CVC insertion policies consistent with the above training and education efforts. All physicians were required to document CVC insertions using the template available in the EMR. To be certified to perform CVC insertion independently, trainee physicians were required to complete the simulation training and successfully place a minimum of 5 CVCs directly supervised by an already-certified physician, consistent with the Accreditation Council for Graduate Medical Education's suggested minimum requirement in 2007. In our critical care units, all CVC insertions had to be supervised by an ICU fellow or attending.
To reinforce the on-the-ground work by our physician leaders, we created 2 education tools to embed best practices into our CVC insertion workflow. A checklist of best practices for CVC insertion was distributed throughout the hospital via central line kits and educational flyers, and a CVC insertion procedure note template consistent with California Department of Public Health documentation requirements was made available in our EMR.
In June 2007, we integrated CVC insertion simulation training into procedure workshops required for all medicine, surgery, anesthesia, and emergency medicine trainees during their intern year. These workshops promoted ultrasound-guided IJ catheterization and presented the supporting evidence for the new IJ site preference. Training sessions lasted 2 to 3 hours and included a demonstration of best-practice CVC insertion, as well as training with simulation models supervised by an instructor using a standardized CVC checklist. These trainings used the Blue Phantom human torso model, among other simulation tools.
Hospital administration provided funds to purchase ultrasound machines for patient units such as medicine, cardiology, the ED, and the ICUs. A total of 4 Site-Rite (Bard Access Systems, Inc., Salt Lake City, UT) ultrasound machines were purchased in 2007. The hospital has continued to purchase ultrasound units yearly and had 53 units in 2014.
Cases of IAP were continuously reviewed throughout the intervention period. Based on their higher CVC-associated IAP rates, the ORs and catheterization lab were identified as having opportunities for improvement. In 2008, hospital quality-improvement leadership met with physician leaders in these areas to review their CVC-related IAP data and to discuss strategies to reduce their IAP rates. These strategies included lessons learned from other services that had successfully decreased their IAP rates.
To sustain our gains, we continue to review all IAP cases through our coding quality, clinical documentation, and quality reporting departments, as well as peer review. We have implemented other strategies to decrease IAP, such as the use of ultrasound guidance for bedside thoracentesis, which became possible once more ultrasound devices were available. Training for ultrasound-guided thoracentesis was provided by our procedure-team attendings while supervising residents.
Outcomes
Preintervention (January 1, 2006 to December 31, 2006)
There were a total of 26 cases of IAP in 2006. Of these, 15 (58%) were associated with CVC insertion (Figure 1). The single procedure associated with the largest proportion of IAP was SC CVC insertion (11 cases, 42% of all IAP cases). Eleven CVC‐associated IAPs were significant enough to require chest tube placement. Our hospital recorded a total of 1593 CVC insertions (ICD‐9 codes 38.93 and 38.97) in 2006.

Postintervention (January 1, 2008 to December 31, 2014)
There were a total of 80 cases of IAP over 7 years, of which 24 (30%) were associated with CVC insertion. Of these, 16 required chest tube placement. In the last 4 years of the postintervention period (2011–2014), there were only 5 cases of CVC-associated IAP requiring chest tube placement (Figure 1). There were a total of 12,000 CVC insertions recorded over the same period.
We successfully met both our short- and long-term goals. Our preintervention CVC-associated IAP rate was 0.94%, and our postintervention rate during 2008 was 0.44%, a short-term reduction of 53% (P=0.088). Our average postintervention CVC-associated IAP rate for the years 2008 through 2014 was 0.13%, a significant long-term reduction of 86% (P<0.0001) (Table 2). The decrease in CVC-associated IAP rates occurred despite an older patient population (P<0.001) and a higher CMI (P<0.001) in postintervention patients who received a CVC (Table 1). Special cause variation corresponding to a change in our process is demonstrated in Figure 2. The preintervention average number of procedures between IAP was 114.8 and increased to 460.7 in the postintervention period.
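The g-chart construction mentioned in Methods can be sketched as follows. The event gaps below are synthetic, and the control-limit formula (center line plus 3 sigma for a geometric count) is the standard g-chart construction, not necessarily QI Macros' exact implementation.

```python
from math import sqrt

def g_chart_limits(gaps: list[int]) -> tuple[float, float]:
    """Center line and upper control limit for a g-chart of counts between rare events."""
    g_bar = sum(gaps) / len(gaps)                  # center line: mean gap between events
    ucl = g_bar + 3 * sqrt(g_bar * (g_bar + 1))    # 3-sigma upper limit for a geometric count
    return g_bar, ucl

# Synthetic example: number of CVC insertions between successive IAP events
gaps = [90, 130, 105, 134]
cl, ucl = g_chart_limits(gaps)
print(f"CL = {cl:.1f}, UCL = {ucl:.1f}")
```

On such a chart, a gap that exceeds the UCL signals special-cause variation in the favorable direction, which is how a sustained drop in IAP shows up as points above the limit.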
Table 2. Adherence to intervention (2014 ICU audit) and iatrogenic pneumothorax rates

| | Total CVC (n=95) | Subclavian (n=29) | Internal jugular (n=58) | Femoral (n=8) |
|---|---|---|---|---|
| Compliance with intervention | | | | |
| US guided | 68.1% | 20.7% | 86.2% | 100.0% |
| Procedure note completion | 90.4% | 93.1% | 86.2% | 100.0% |
| Supervision | 70.2% | 77.8% | 73.1% | 87.5% |
| Level of training | | | | |
| Resident | 61.1% | 58.6% | 60.3% | 75.0% |
| Fellow | 25.3% | 27.6% | 24.1% | 25.0% |
| Attending | 4.2% | 6.9% | 3.4% | 0.0% |
| Advance practitioner | 3.2% | 3.4% | 3.4% | 0.0% |
| Unknown | 6.3% | 3.4% | 8.6% | 0.0% |
| Supervisor type | | | | |
| Resident | 3.0% | 4.8% | 2.6% | 0.0% |
| Fellow | 54.5% | 33.3% | 57.9% | 100.0% |
| Attending | 42.4% | 61.9% | 39.5% | 0.0% |

| Location of CVC placement | Internal jugular (n=25) | Subclavian (n=25) |
|---|---|---|
| MICU | 32.0% | 32.0% |
| SICU* | 40.0% | 52.0% |
| Operating room | 28.0% | 16.0% |
| Average no. of attempts/procedure | 1.4 | 1.5 |

| Indications for subclavian insertion (n=25) | |
|---|---|
| Trauma/surgical site | 60.0% |
| Need for additional access | 16.0% |
| Unsuccessful IJ placement | 4.0% |
| Unclear | 20.0% |

| Iatrogenic pneumothorax rate (2006–2014) | 2006 | 2007 | 2008 | 2009 | 2010 | 2011 | 2012 | 2013 | 2014 |
|---|---|---|---|---|---|---|---|---|---|
| % of CVC insertions associated with IAP | 0.94 | 1.49 | 0.44 | 0.13 | 0.20 | 0.07 | 0.04 | 0.11 | 0.07 |
| All-cause IAP per 1,000 discharges | 1.32 | 1.29 | 0.98 | 0.71 | 0.83 | 0.49 | 0.13 | 0.35 | 0.23 |

| | Preintervention | Postintervention | P value |
|---|---|---|---|
| CVC-associated IAP, short term (2006 vs 2008) | 0.94% | 0.44% | 0.088 |
| CVC-associated IAP, long term (2006 vs 2008–2014) | 0.94% | 0.13% | <0.0001 |
| All-cause IAP per 1,000 discharges, short term (2006 vs 2008) | 1.32 | 0.98 | <0.0001 |
| All-cause IAP per 1,000 discharges, long term (2006 vs 2008–2014) | 1.32 | 0.52 | <0.0001 |

Along with the decrease in CVC‐associated IAP, our all‐cause IAP rate fell from 1.32 per 1000 discharges in 2006 to 0.98 in 2008, a 26% short‐term reduction (P<0.0001). The average all‐cause IAP rate for 2008 through 2014 was 0.52 per 1000 discharges, a 61% long‐term reduction (P<0.0001). This postintervention decrease in all‐cause IAP occurred despite an older patient population (P<0.01) among all discharges. Our hospital ranked in the highest‐performance UHC quartile for all‐cause IAP from 2012 to 2014.
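As a simple arithmetic check (added here for illustration, not part of the original analysis), each reported percentage follows from reduction = (pre − post) / pre applied to the rates above:

```python
# Sketch: verify the reported percent reductions from the pre/post rates.
def percent_reduction(pre: float, post: float) -> int:
    """Relative reduction, rounded to the nearest whole percent."""
    return round((pre - post) / pre * 100)

# CVC-associated IAP (% of insertions): 2006 vs 2008, and 2006 vs 2008-2014
short_cvc = percent_reduction(0.94, 0.44)  # 53%
long_cvc = percent_reduction(0.94, 0.13)   # 86%

# All-cause IAP per 1,000 discharges
short_all = percent_reduction(1.32, 0.98)  # 26%
long_all = percent_reduction(1.32, 0.52)   # 61%
```

All four values round to the reductions reported in the text (53%, 86%, 26%, and 61%).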
After our multifaceted intervention in 2007, documentation of CVC procedure notes became substantially more complete and consistent, rising from fewer than 500 notes in 2006 to more than 2000 in 2009. Comparing the distribution of CVC procedure notes pre‐ (2006) versus postintervention (2008–2014), the proportion of femoral lines decreased from 15% to 11%, IJ lines increased from 31% to 49%, and SC lines decreased from 54% to 40% (P=0.001). Among IJ CVC procedure notes, the proportion documenting ultrasound use increased from 13% to 93% (P<0.001) (Figure 3).

In our 2014 ICU audit, the majority of CVC lines were placed by residents under supervision (>70%), and most procedures were documented with the standard CVC note template (90%). Of all CVC approaches, 66% were IJ and 4% were SC. Ultrasound was used in 86% of IJ placements. The majority of SC insertions occurred in the surgical ICU and had clear indications for placement (80%). Of those, 75% were due to trauma (limited access to the neck) or surgery (interference with the surgical site) (Table 2).
DISCUSSION
Summary
This quality‐improvement intervention demonstrates that a multidisciplinary team can successfully implement a multifaceted intervention that sustainably reduces the rate of IAP complications from CVC placement and improves patient safety over 7 years. We found high compliance with our intervention, including an increase in CVC notes and documentation of ultrasound guidance. There was also an increase in the IJ approach in our postintervention period. We showed statistically significant long‐term reductions in both CVC‐associated and all‐cause IAP rates. From 2011 to 2014, there were only 5 cases of CVC‐associated IAP requiring chest tube placement. Post hoc analysis showed a statistically significant decrease in CLABSI rates (P<0.0001), from a preintervention rate of 1.6 infections per 1000 central line days to a postintervention average rate of 0.68 infections per 1000 central line days. This decrease may be related to the incorporation of wide sterile barrier techniques in our CVC training workshops, checklists, and template procedure notes.
A strength of this study is the sustained significant long‐term reduction in IAP. There are few data that exist to describe sustained interventions in this area. Sustainability was achieved by integrating our interventions into ongoing programs that already existed in the hospital; we incorporated our simulation training into the existing new resident orientation, increased the availability of existing ultrasound equipment, and continued our IAP chart review through coding quality with feedback to involved services. The procedure note template continues to be easily available in our EMR, and the SC approach to CVC placement is limited to select cases.
Based on a post hoc cost‐benefit analysis, the financial benefits of decreasing the rate of IAP outweigh the costs associated with implementation of this initiative. The purchase cost for a Site‐Rite (Bard Access Systems) ultrasound machine was $18,000. The cost of materials for 1 workshop is $5000 annually. Cases from the Nationwide Inpatient Sample that were flagged by this PSI had 7.0% excess mortality, 4.4 days of excess hospitalization, and approximately $18,000 in excess hospital charges.[17, 18] Based on these data, if we had continued at our preintervention rate of CVC‐associated IAP requiring chest tube placement, we would estimate 9 additional CVC‐associated IAPs requiring chest tube insertion per year. This would result in over $180,000 of additional costs annually. Based on an initial cost of $100,000 for 4 workshops and the necessary equipment, we would have realized our cost savings in less than 1 year postintervention. These are all approximate costs, and further detailed analysis is needed.
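The break‐even logic above can be sketched in a few lines using the approximate costs quoted in the text; counting excess hospital charges alone gives a conservative lower bound (the excess length of stay would add further savings):

```python
# Rough break-even sketch from the approximate costs cited above.
EXCESS_CHARGE_PER_IAP = 18_000   # excess hospital charges per IAP case
AVOIDED_IAP_PER_YEAR = 9         # estimated CVC-associated IAPs avoided annually
INITIAL_COST = 100_000           # 4 workshops plus ultrasound equipment
ANNUAL_WORKSHOP_COST = 5_000     # recurring workshop material costs

# Charges alone give a conservative lower bound on annual savings.
annual_savings = AVOIDED_IAP_PER_YEAR * EXCESS_CHARGE_PER_IAP  # $162,000
first_year_cost = INITIAL_COST + ANNUAL_WORKSHOP_COST          # $105,000

# Savings exceed costs within the first year, consistent with the text.
break_even_within_first_year = annual_savings > first_year_cost
```

These are approximate figures, as the text notes, but even the charges‐only lower bound exceeds the first‐year implementation cost.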
One challenge with this intervention is the culture change away from the SC approach, and trainees' concern about how they would learn to perform SC CVC insertion when needed. We would suggest dedicated ultrasound‐guided SC CVC training for those services that may need this approach (eg, neuroanesthesia and trauma).
Interpretation/Relation to Other Evidence
The field of implementation science can help explain why some projects succeed and others fail. The success of this project can be examined using an implementation science model similar to that described by French et al.,[19] who outline 4 steps for designing behavior‐change interventions: (1) identify who needs to do what differently, (2) identify which barriers and enablers need to be addressed, (3) choose intervention components that could overcome the barriers and enhance the enablers, and (4) determine how behavior change can be measured and understood. In our project, the targeted behaviors were the choice of CVC placement site and the insertion technique. Barriers included suboptimal resident skills, low awareness of evidence‐based guidelines, entrenched practices inconsistent with best evidence, and a belief that IJ lines were more likely to become infected. Barriers to change were assessed by asking members of the project team to explore with members of their service what led them to place CVC lines without ultrasound guidance. Enablers were enhanced through information provision, simulation practice, and persuasive communication. Behavior change was measured by tracking the number of IAPs, the site of CVC placement, and documentation of technique. Continuation of these interventions based on this theoretical framework drove maintenance of gains.
We completed our main intervention planning in 90 days and met our short‐term goal on schedule. The Institute for Healthcare Improvement (IHI) advocates such short timelines as efficient mechanisms for developing and acting on projects, and other institutions have reported similar rapid‐cycle planning and short‐term goal setting.[20]
Limitations
Our study captures the experience of a quality‐improvement team at a single academic center, and our results may not be generalizable to other institutions. Our chart review process only occurred once a case had been identified through AHRQ PSI methodology. It is possible that the PSI does not capture all cases of IAP, although we believe our coding department has a very rigorous process to look for all IAP evidence in the patient's record. We used administrative data to determine the number of hospital‐wide CVC procedures.
Our compliance data with interventions from STRIDE are based on looking for key words in procedure note documentation (so undocumented notes are not captured). To validate this, we performed a manual audit of our adherence to our intervention in 2014, and those data are consistent with the results from our STRIDE data.
Our study's observational design also cannot control for exogenous effects on physician practice relating to CVC insertion or the overall risk of IAP. Some of our decrease in complications may be from the increase in PICC line use. Nevertheless, our CVC‐associated IAP rate has decreased despite >6000 non‐PICC CVCs placed in our ICU over the past 5 years, a rising CMI (18% increase in the postintervention period), and an older population of patients undergoing CVC insertion (P<0.0001).
CONCLUSIONS
We are, to our knowledge, the first to report a measurable reduction in IAP that has been sustained for over 7 years. Our hospital ranked in the highest‐performance UHC quartile for all‐cause IAP from 2012 to 2014. A multidisciplinary quality‐improvement team, focused on evidence, patient safety, and standardization, can use a multifaceted intervention to sustainably improve patient outcomes. Promoting ultrasound‐guided IJ catheterization as the CVC insertion method of choice significantly reduced our hospital's rate of CVC‐associated IAP.
Acknowledgements
The authors acknowledge many who have contributed to this quality‐improvement project:
Irina Tokareva, Jay Lee, Kourt Bowes, and Gomathi Krishnan for data analysis; Laura Meinke for significant website curriculum; Fred Mihm, Sarah Williams, and John Kugler for leadership in ultrasound training; Kevin Tabb and Norm Rizk for hospital financial support of simulation workshops and ultrasound machines; Pooja Loftus and Helene Grossman for statistical analysis; Eric Hadhazy for data support; Joan Hendershott for cost information; Nancy Szaflarski for project management and manuscript review; and Isabella Chu for manuscript review.
Disclosures: STRIDE (Stanford Translational Research Integrated Database Environment) is a research and development project at Stanford University to create a standards‐based informatics platform supporting clinical and translational research. This STRIDE project was supported by the National Center for Research Resources and the National Center for Advancing Translational Sciences, National Institutes of Health, through grant UL1 RR025744. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH. The authors report no conflicts of interest.
References

- Significance of iatrogenic pneumothoraces. Chest. 1994;105(4):1147–1150.
- How to avoid and manage a pneumothorax. J Vasc Access. 2006;7(1):7–14.
- Iatrogenic pneumothorax: etiology, incidence and risk factors. Thorac Cardiovasc Surg. 2009;57(5):286–290.
- Real‐time ultrasound‐guided catheterisation of the internal jugular vein: a prospective comparison with the landmark technique in critical care patients. Crit Care. 2006;10(6):R162.
- Safe placement of central venous catheters: a measured approach. J Intens Care Med. 2011;26(6):392–396.
- Reducing iatrogenic risk in thoracentesis: establishing best practice via experiential training in a zero‐risk environment. Chest. 2009;135(5):1315–1320.
- Use of simulation‐based education to improve outcomes of central venous catheterization: a systematic review and meta‐analysis. Acad Med. 2011;86(9):1137–1147.
- A prerotational, simulation‐based workshop improves the safety of central venous catheter insertion: results of a successful internal medicine house staff training program. Chest. 2011;140(3):652–658.
- Linking residency training effectiveness to clinical outcomes: a quality improvement approach. Jt Comm J Qual Patient Saf. 2010;36(5):203–208.
- Education of physicians‐in‐training can decrease the risk for vascular catheter infection. Ann Intern Med. 2000;132(8):641–648.
- A multidisciplinary approach to reduce central line‐associated bloodstream infections. Jt Comm J Qual Patient Saf. 2013;39(2):61–69.
- Validity of selected Patient Safety Indicators: opportunities and concerns. J Am Coll Surg. 2011;212(6):924–934.
- Cases of iatrogenic pneumothorax can be identified from ICD‐9‐CM coded data. Am J Med Qual. 2010;25(3):218–224.
- SQUIRE development group. Publication guidelines for quality improvement studies in health care: evolution of the SQUIRE project. BMJ. 2009;338:a3152.
- The Quality Toolbox. 2nd ed. Milwaukee, WI: ASQ Quality Press; 2005.
- STRIDE—an integrated standards‐based translational research informatics platform. AMIA Annu Symp Proc. 2009;2009:391–395.
- Accidental iatrogenic pneumothorax in hospitalized patients. Med Care. 2006;44(2):182–186.
- Excess length of stay, charges, and mortality attributable to medical injuries during hospitalization. JAMA. 2003;290(14):1868–1874.
- Developing theory‐informed behaviour change interventions to implement evidence into practice: a systematic approach using the Theoretical Domains Framework. Implement Sci. 2012;7:38.
- Using rapid‐cycle quality improvement methodology to reduce feeding tubes in patients with advanced dementia: before and after study. BMJ. 2004;329(7464):491–494.
Iatrogenic pneumothorax (IAP) is a complication of invasive procedures that is associated with substantial morbidity and some mortality.[1] IAP is often avoidable; adherence to evidence‐based guidelines and procedural techniques can reduce its incidence.[2] IAP may occur with subclavian (SC) or internal jugular (IJ) central venous catheter (CVC) insertion, but is more frequently associated with the SC approach.[3] Ultrasound guidance during IJ CVC insertion is associated with a lower risk than guidance by anatomical landmarks.[4, 5] Thoracentesis is another bedside procedure known to cause IAP, and its risk can likewise be reduced with ultrasound guidance.[6]
Including simulation in training for CVC insertion has been demonstrated in meta‐analyses to improve both learner outcomes, including simulator performance and perceived confidence, and patient outcomes, including fewer failed CVC attempts and reduced incidence of IAP.[7] Even brief simulation workshops lasting less than two hours can improve patient safety during CVC insertion.[8]
The implementation of ultrasound‐based simulation and improved adherence to the actual use of ultrasound at the bedside can be motivated by tying competency‐based educational objectives (eg, CVC insertion) to clinical outcomes (ie, rates of IAP) and tracking both as part of a continuous quality‐improvement cycle.[9] Adherence to best practices for CVC insertion can also be improved through standardized hospital‐wide policies and hands‐on training.[10] Involving many stakeholders, including nurses, physicians, nurse practitioners, and physician assistants, in a multidisciplinary team has been shown to help alter entrenched behaviors and reduce the incidence of central line‐associated bloodstream infections through long‐term adherence to evidence‐based interventions.[11]
LOCAL PROBLEM
The Agency for Healthcare Research and Quality (AHRQ) has designed Patient Safety Indicators (PSIs), which screen hospital administrative data to flag potential patient safety events, including iatrogenic pneumothorax.
Our hospital is a member of the University HealthSystem Consortium (UHC), an alliance of academic medical centers that benchmarks quality and safety performance across its members.
Although PSIs based on administrative data have known limitations, they can highlight areas where quality‐improvement efforts might be best directed. In 2005 and 2006, our hospital was ranked within the lowest UHC performance quartile for the all‐cause IAP PSI.
During FY 2006 (September 2005–August 2006), root‐cause analysis of IAP cases at our hospital found that CVC insertion (40%) was the procedure most commonly associated with IAP, with SC insertion causing 69% of CVC‐associated IAP. Other common procedures associated with IAP were operative/pacemaker (30%), thoracentesis (25%), and ventilator associated (5%). Ultrasound was not used in 2 of 5 cases of IJ CVC placement and 3 of 5 thoracentesis cases. Only 44% of CVC insertions had a procedure note.
Intended Improvement/Study Question
Our team set out to plan and implement a set of multifaceted interventions within 90 days. The short‐term goal was a 50% reduction in the CVC IAP and all‐cause IAP rate within 18 months, and the long‐term goal was sustained reduction of CVC IAP and all‐cause IAP rate.
METHODS
The format of this article is based on the Standards for Quality Improvement Reporting Excellence (SQUIRE) guidelines for reporting studies of the effectiveness of quality‐improvement interventions.[14]
Setting
Stanford University Medical Center is an academic medical center with 465 beds and over 25,000 inpatient admissions per year, providing both general acute care and tertiary medical care. Residents perform bedside CVC procedures when central venous access is needed in the intensive care unit (ICU), operating room (OR), and inpatient units. Prior to this project, ultrasound equipment was available only in the emergency department (ED) and ICUs. There was no formal CVC procedure supervision policy, no CVC training curriculum, and no procedure note template for documentation of CVC insertion.
Planning the Interventions
A multidisciplinary quality‐improvement team met weekly during the 90‐day design period from January 2007 to March 2007. Our team included representatives from the departments of medicine, anesthesia and critical care, surgery, nursing, and emergency medicine. We also partnered with our institution's clinical and administrative leaders, experts in simulation, and the hospital quality department.
We hypothesized that a standardized set of education and training interventions promoting ultrasound‐guided IJ CVC insertion as the method of choice at our hospital would significantly reduce our rate of CVC‐associated IAP. Our multifaceted intervention included: (1) clinical and documentation standards based on evidence, (2) cognitive aids, (3) simulation training, (4) purchase and deployment of ultrasound equipment, and (5) feedback to clinical services.
Our team followed the define, measure, analyze, improve, control (DMAIC) framework.[15] We set interval goals with target completion dates throughout the 90‐day period, identified owners of each goal, and tracked progress with a shared spreadsheet.
In the 90‐day intervention, we accomplished the following: (1) conducted root‐cause analysis of IAP cases for fiscal year 2006, (2) created clinical and documentation standards around CVC placement, (3) created cognitive aids and procedure note templates, (4) developed simulation training courses, and (5) requested purchase of additional ultrasound equipment.
Data Collection
To evaluate our progress in reducing the rates of IAP, we tracked the incidence of IAP using UHC and AHRQ PSI methodology. In collaboration with our hospital's quality department, we manually reviewed every PSI‐identified case of IAP. This review has focused on identifying whether or not pneumothorax actually occurred, and whether it was associated with CVC insertion. For those associated with CVC, data were collected for patient location and service, the procedure site, whether ultrasound was used, whether a chest tube was required, and the final disposition of the patient.
Demographic data (age, gender, case mix index [CMI]) shown in Table 1 were obtained through MIDAS+ Solutions (Tucson, Arizona), a proprietary database that contains healthcare management coded data. Total hospital CVC insertion rates were calculated using International Classification of Diseases, Ninth Revision (ICD‐9) codes 38.93 and 38.97. ICU central line‐associated bloodstream infection (CLABSI) data were obtained from internal collection by our infection control team. Data on the number and location of CVCs placed in the ICU were obtained from nursing flow sheets in our electronic medical record (EMR). Cost information was provided by our finance department using internal accounting.
Table 1. Characteristics of patients with CVC insertion and of all inpatients, and CLABSI rates

| Patients With CVC Insertion | 2006 | 2007 | 2008 | 2009 | 2010 | 2011 | 2012 | 2013 | 2014 |
|---|---|---|---|---|---|---|---|---|---|
| Age, y (mean) | 55.0 | 55.5 | 55.0 | 57.0 | 56.5 | 58.5 | 57.5 | 59.0 | 58.5 |
| % female | 47.0 | 49.5 | 47.0 | 48.8 | 46.2 | 46.1 | 45.7 | 46.2 | 45.7 |
| Case‐mix index | 3.08 | 3.35 | 3.21 | 3.40 | 3.71 | 3.91 | 3.92 | 3.92 | 4.08 |
| Total no. of CVCs/year* | 1,593 | 1,141 | 1,589 | 2,250 | 2,441 | 2,774 | 2,754 | 2,722 | 2,845 |
| No. of CVCs/year in ICU | NA | NA | NA | 1,502 | 1,357 | 1,345 | 1,316 | 1,421 | 1,590 |
| No. of subclavians/year in ICU | NA | NA | NA | 167 | 75 | 70 | 83 | 75 | 97 |
| No. of IJs/year in ICU | NA | NA | NA | 898 | 773 | 681 | 677 | 713 | 876 |
| No. of femorals/year in ICU | NA | NA | NA | 212 | 152 | 203 | 171 | 198 | 206 |
| No. of PICCs/year in ICU | NA | NA | NA | 225 | 357 | 391 | 385 | 435 | 411 |

| Patients With CVC Insertion | Preintervention (2006) | Postintervention (2008–2014) | P Value |
|---|---|---|---|
| Age, y (mean) | 55.2 | 58.7 | <0.0001 |
| % female | 47.0% | 46.4% | 0.642 |
| Case‐mix index | 3.08 | 3.73 | <0.0001 |
| CVC insertion rate | 8.1% | 11.4% | <0.0001 |

| All Inpatients | 2006 | 2007 | 2008 | 2009 | 2010 | 2011 | 2012 | 2013 | 2014 |
|---|---|---|---|---|---|---|---|---|---|
| Age, y (mean) | 57.1 | 57.2 | 56.8 | 57.2 | 57.5 | 58.0 | 58.0 | 57.9 | 58.3 |
| % female | 51.6 | 51.2 | 52.4 | 51.7 | 51.1 | 51.5 | 50.3 | 49.9 | 50.1 |
| Case‐mix index | 1.86 | 1.98 | 1.96 | 1.99 | 1.96 | 2.02 | 2.03 | 2.07 | 2.23 |

| All Inpatients | Preintervention (2006) | Postintervention (2008–2014) | P Value |
|---|---|---|---|
| Age, y (mean) | 57.1 | 57.6 | <0.01 |
| % female | 51.6% | 50.9% | 0.07 |
| Case‐mix index | 1.86 | 2.03 | 0.13 |

| Central Line‐Associated Bloodstream Infections per 1,000 Central Line Days | Preintervention | Postintervention | P Value |
|---|---|---|---|
| Short term (2006 vs 2008) | 1.8 | 0.60 | 0.004 |
| Long term (2006 vs 2008–2014) | 1.8 | 0.68 | <0.0001 |
The project was granted a Notice of Determination of Approval from the Stanford Administrative Panels for the Protection of Human Subjects (institutional review board).
Methods of Evaluation/Analysis
For the purpose of this analysis, the preintervention period was defined as January 1, 2006 through December 31, 2006, our first year of IAP case review. We defined the intervention period as January 1, 2007 through December 31, 2007, during which we planned and implemented hospital‐wide standardization of CVC insertion practices and incorporated CVC insertion training simulation into resident orientation in July 2007. The postintervention period was defined as January 1, 2008 through December 31, 2014.
All statistical analyses were performed using Stata version 12.1 (StataCorp, College Station, TX). Chi‐square (χ²) tests were used to determine statistical differences in pre‐ versus postintervention patient demographic data (age, gender, CMI), CVC insertion rates, and CLABSI rates. Because IAP is a rare event, a statistical process control g‐chart was created using QI Macros (KnowWare International, Inc., Denver, CO) to show the number of CVC procedures between IAP events. Chi‐square and Fisher exact tests were used to determine statistical differences in CVC anatomic location and use of ultrasound pre‐ and postintervention. A 2‐sided Z test for a difference in proportions was used to determine statistical differences in the CVC‐related IAP rate and all‐cause IAP rate pre‐ and postintervention.
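The 2‐sided test of proportions can be written out directly. Below is a minimal sketch of the standard pooled two‐proportion z‐test (the textbook formula, not the authors' Stata code), illustrated with the CVC‐associated IAP counts from the Results (15 IAPs in 1,593 insertions preintervention; 24 IAPs over roughly 12,000 insertions postintervention):

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided pooled z-test for a difference in proportions."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # = 2 * (1 - Phi(|z|))
    return z, p_value

# Preintervention 2006: 15 CVC-associated IAPs in 1,593 insertions.
# Postintervention 2008-2014: 24 IAPs in ~12,000 insertions (per the Results).
z, p = two_proportion_z(15, 1593, 24, 12000)
# p falls far below 0.0001, in line with the reported P<0.0001.
```

The pooled standard error is appropriate here because the null hypothesis assumes a common underlying proportion.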
Measuring Adherence to Intervention
Location of CVC Placement and Ultrasound Guidance Pre‐ Versus Postintervention
We utilized the Stanford Clinical Informatics Center (SCCI) services for obtaining counts of patients. Custom queries were performed on SCCI's Stanford Translational Research Integrated Database Environment (STRIDE) platform[16] to search Stanford Hospital electronic health records for patients. This search primarily involved obtaining counts of patients with clinical notes that contained the keywords of interest. To identify documentation of CVC placement from 2006 to 2014, procedure or operative notes containing the words "central line" or "CVC" were counted. Further subcounts were obtained by searching for additional keywords such as "PICC" (peripherally inserted central catheter), "femoral," "jugular," "subclavian," and "ultrasound."
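Conceptually, these STRIDE queries reduce to counting notes whose text contains given terms. A toy sketch of that idea (the notes below are invented for illustration, not taken from the actual STRIDE data):

```python
# Toy sketch of STRIDE-style keyword counting; these notes are invented.
notes = [
    "Procedure note: right IJ central line placed under ultrasound guidance.",
    "CVC inserted via left subclavian approach. No complications.",
    "Bedside PICC placed in the right basilic vein.",
]

def count_notes_with(corpus, *keywords):
    """Count notes containing any keyword (case-insensitive substring match)."""
    kws = [k.lower() for k in keywords]
    return sum(any(k in note.lower() for k in kws) for note in corpus)

cvc_count = count_notes_with(notes, "central line", "CVC")  # 2
ultrasound_count = count_notes_with(notes, "ultrasound")    # 1
```

As the Limitations section notes, this kind of keyword matching only captures what was actually documented; undocumented procedures are invisible to it.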
Adherence to Intervention in the ICU in 2014
A total of 100 charts from patients who had a CVC placed in our medical and surgical ICUs in 2014 were reviewed to evaluate current central line placement practice and the sustainability of our intervention. Fifty charts were initially randomly selected from the ICU cohort. For patients who had multiple lines placed, only the first line was reviewed. Because the initial audit did not capture enough SC lines and we wanted to review more IJ lines, we randomly selected an additional 25 patients with SC lines and 25 patients with IJ lines. The following data were collected during chart review: primary team, location of line placement, use of ultrasound, use of the standard procedure template, supervision, level of training of the supervisor, and level of training of the staff member who performed the procedure.
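The two‐stage chart selection can be sketched with standard‐library sampling; the patient records below are synthetic stand‐ins, since the real audit drew from actual 2014 ICU patients:

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

# Synthetic ICU cohort of (patient_id, site of first CVC); invented records.
cohort = [(i, random.choice(["IJ", "SC", "femoral"])) for i in range(1000)]

# Stage 1: 50 charts selected at random from the whole cohort.
stage1 = random.sample(cohort, 50)

# Stage 2: targeted oversample of 25 SC and 25 IJ lines (possible overlap
# with stage 1 is ignored in this toy sketch).
sc_pool = [p for p in cohort if p[1] == "SC"]
ij_pool = [p for p in cohort if p[1] == "IJ"]
stage2 = random.sample(sc_pool, 25) + random.sample(ij_pool, 25)

audit = stage1 + stage2  # 100 charts reviewed in total
```

The deliberate oversampling in stage 2 is why the audit's site mix (Table 2) does not reflect the underlying distribution of CVC sites.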
Outcomes
The rate of CVC‐associated IAP was calculated as the total number of IAPs attributed to CVCs divided by the total number of CVCs inserted, as determined by ICD‐9 codes 38.93 and 38.97. The total IAP rate was calculated as the total number of IAPs per 1000 discharges.
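These definitions reduce to simple ratios; a minimal sketch, checked against the 2006 counts reported in the Results (15 CVC‐associated IAPs among 1,593 insertions):

```python
def cvc_iap_rate(iap_cases, cvc_insertions):
    """CVC-associated IAP rate as a percentage of CVC insertions."""
    return iap_cases / cvc_insertions * 100

def all_cause_iap_rate(iap_cases, discharges):
    """All-cause IAP rate per 1,000 discharges."""
    return iap_cases / discharges * 1000

# 2006 (preintervention): 15 CVC-associated IAPs in 1,593 insertions.
rate_2006 = round(cvc_iap_rate(15, 1593), 2)  # 0.94, as reported
```

Note that both denominators come from coded administrative data, so the rates inherit whatever coding error those sources contain.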
RESULTS
Interventions
Our team began the intervention in early 2007 with promotion of ultrasound‐guided IJ catheterization. Clinical exceptions included: (1) trauma or code situations where access to the neck is limited, (2) suspected or confirmed neck injuries, (3) presence of a tracheostomy, and (4) bilateral internal jugular sites unsuitable for catheterization.
Our hospital adopted new formal CVC insertion policies consistent with the above training and education efforts. All physicians were required to document CVC insertions using the template available in the EMR. To be certified to perform CVC insertion independently, trainee physicians were required to complete the simulation training and successfully place a minimum of 5 CVCs directly supervised by an already‐certified physician. This was consistent with the Accreditation Council for Graduate Medical Education suggested minimum requirement in 2007. In our critical care units, all CVC insertions must be supervised by an ICU fellow or attending.
To reinforce the on‐the‐ground work by our physician leaders, we created 2 education tools to embed best practices into our CVC insertion workflow. A checklist of best practices for CVC insertion was distributed throughout the hospital via central line kits and educational flyers, and a CVC insertion procedure note template consistent with California Department of Public Health documentation requirements was made available in our EMR.
In June 2007, we integrated CVC insertion simulation training into procedure workshops required for all medicine, surgery, anesthesia, and emergency medicine trainees during their intern year. These workshops promoted ultrasound‐guided IJ catheterization and presented the supporting evidence for the new IJ site preference. Training sessions were 2 to 3 hours and included a demonstration of best‐practice CVC insertion, as well as instructor‐supervised practice on simulation models, including the Blue Phantom human torso model, using a standardized CVC checklist.
Hospital administration provided funds to purchase ultrasound machines for patient units such as medicine, cardiology, the ED, and the ICUs. A total of 4 Site‐Rite (Bard Access Systems, Inc., Salt Lake City, UT) ultrasound machines were purchased in 2007. The hospital has continued to purchase ultrasound units yearly and had 53 units in 2014.
Cases of IAP were reviewed continuously throughout the intervention period. Based on their higher CVC‐associated IAP rates, the ORs and the catheterization lab were identified as areas with opportunities for improvement. In 2008, hospital quality‐improvement leadership met with physician leaders in these areas to review their CVC‐related IAP data and to discuss strategies to reduce their IAP rates, including lessons learned from other services that had successfully decreased their rates.
To sustain our gains, we continue to review all IAP cases through our coding quality, clinical documentation, and quality reporting departments, and through peer review. We have implemented other strategies to decrease IAP, such as ultrasound guidance for bedside thoracentesis, which became possible once more ultrasound devices were available. Training for ultrasound‐guided thoracentesis was provided by our procedure‐team attending while supervising residents.
Outcomes
Preintervention (January 1, 2006 to December 31, 2006)
There were a total of 26 cases of IAP in 2006. Of these, 15 (58%) were associated with CVC insertion (Figure 1). The single procedure associated with the largest proportion of IAP was SC CVC insertion (11 cases, 42% of all IAP cases). Eleven CVC‐associated IAPs were significant enough to require chest tube placement. Our hospital recorded a total of 1593 CVC insertions (ICD‐9 codes 38.93 and 38.97) in 2006.

Postintervention (January 1, 2008 to December 31, 2014)
There were a total of 80 cases of IAP over 7 years, of which 24 (30%) were associated with CVC insertion. Of these, 16 required chest tube placement. In the last 4 years of the postintervention period (2011–2014), there were only 5 cases of CVC‐associated IAP requiring chest tube placement (Figure 1). There were a total of 12,000 CVC insertions recorded over the same period.
We successfully met both our short‐ and long‐term goals. Our preintervention CVC‐associated IAP rate was 0.94%, and our post‐intervention rate during 2008 was 0.44%, a short‐term reduction of 53% (P=0.088). Our average postintervention CVC‐associated IAP rate for the years 2008 through 2014 was 0.13%, a significant long‐term reduction of 86% (P<0.0001) (Table 2). The decrease in CVC‐associated IAP rates occurred despite an older patient population (P<0.001) and a higher CMI (P<0.001) in postintervention patients who received a CVC (Table 1). Special cause variation corresponding to a change in our process is demonstrated in Figure 2. The preintervention average number of procedures between IAP was 114.8 and increased to 460.7 in the postintervention period.
| | Total CVC (n=95) | Subclavian (n=29) | Internal Jugular (n=58) | Femoral (n=8) |
|---|---|---|---|---|
| Compliance to intervention | | | | |
| US guided | 68.1% | 20.7% | 86.2% | 100.0% |
| Procedure note completion | 90.4% | 93.1% | 86.2% | 100.0% |
| Supervision | 70.2% | 77.8% | 73.1% | 87.5% |
| Level of training | | | | |
| Resident | 61.1% | 58.6% | 60.3% | 75.0% |
| Fellow | 25.3% | 27.6% | 24.1% | 25.0% |
| Attending | 4.2% | 6.9% | 3.4% | 0.0% |
| Advanced practitioner | 3.2% | 3.4% | 3.4% | 0.0% |
| Unknown | 6.3% | 3.4% | 8.6% | 0.0% |
| Supervisor type | | | | |
| Resident | 3.0% | 4.8% | 2.6% | 0.0% |
| Fellow | 54.5% | 33.3% | 57.9% | 100.0% |
| Attending | 42.4% | 61.9% | 39.5% | 0.0% |

| Location of CVC Placement | Internal Jugular (n=25) | Subclavian (n=25) |
|---|---|---|
| MICU | 32.0% | 32.0% |
| SICU* | 40.0% | 52.0% |
| Operating room | 28.0% | 16.0% |
| Average no. of attempts/procedure | 1.4 | 1.5 |

| Indications for subclavian insertion (n=25) | |
|---|---|
| Trauma/surgical site | 60.0% |
| Need for additional access | 16.0% |
| Unsuccessful IJ placement | 4.0% |
| Unclear | 20.0% |

| Iatrogenic Pneumothorax Rate (2006–2014) | 2006 | 2007 | 2008 | 2009 | 2010 | 2011 | 2012 | 2013 | 2014 |
|---|---|---|---|---|---|---|---|---|---|
| % of CVC insertions associated with IAP | 0.94 | 1.49 | 0.44 | 0.13 | 0.20 | 0.07 | 0.04 | 0.11 | 0.07 |
| All‐cause IAP per 1,000 discharges | 1.32 | 1.29 | 0.98 | 0.71 | 0.83 | 0.49 | 0.13 | 0.35 | 0.23 |

| | Preintervention | Postintervention | P Value |
|---|---|---|---|
| CVC‐associated IAP short term (2006 vs 2008) | 0.94% | 0.44% | 0.088 |
| CVC‐associated IAP long term (2006 vs 2008–2014) | 0.94% | 0.13% | <0.0001 |
| All‐cause IAP per 1,000 discharges short term (2006 vs 2008) | 1.32 | 0.98 | <0.0001 |
| All‐cause IAP per 1,000 discharges long term (2006 vs 2008–2014) | 1.32 | 0.52 | <0.0001 |
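The reductions and significance tests reported in Table 2 can be reproduced with a short calculation. The sketch below is illustrative only, not the original Stata analysis: the event counts (15 IAPs among 1,593 CVCs in 2006; 24 among roughly 17,375 CVCs in 2008–2014, summed from the Table 1 yearly totals) are taken from the Results, and the 2‐sided z test for a difference in proportions mirrors the method described in the analysis section.

```python
import math

def two_prop_ztest(x1, n1, x2, n2):
    """Two-sided z-test for a difference in proportions using the pooled
    standard error and a normal approximation for the p-value."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value via the error function (normal CDF).
    pval = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, pval

# 15 CVC-associated IAPs among 1,593 CVCs in 2006 (Results);
# 24 among ~17,375 CVCs in 2008-2014 (sum of Table 1 yearly totals).
z, p = two_prop_ztest(15, 1593, 24, 17375)

# Relative reductions quoted in the text, from the reported rates.
short_term = (0.94 - 0.44) / 0.94 * 100      # CVC-associated IAP, 2006 vs 2008
long_term = (0.94 - 0.13) / 0.94 * 100       # 2006 vs 2008-2014
allcause_short = (1.32 - 0.98) / 1.32 * 100  # all-cause IAP per 1,000 discharges
allcause_long = (1.32 - 0.52) / 1.32 * 100
```

Rounded, these give the 53%, 86%, 26%, and 61% reductions quoted in the Results, and the long‐term comparison is significant at well below P<0.0001.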

With the decrease in CVC‐associated IAP, we also saw our all‐cause IAP rate fall from 1.32 per 1,000 discharges in 2006 to 0.98 in 2008, a 26% short‐term reduction (P<0.0001). The all‐cause IAP rate then averaged 0.52 per 1,000 discharges over 2008 through 2014, a 61% long‐term reduction (P<0.0001). This postintervention decrease in all‐cause IAP occurred despite an older patient population (P<0.01) across all discharges. Our hospital ranked in the highest‐performance UHC quartile for all‐cause IAP from 2012 to 2014.
After our multifaceted intervention in 2007, documentation of CVC procedure notes became substantially more complete and consistent, increasing from fewer than 500 notes in 2006 to more than 2,000 in 2009. Comparing the distribution of CVC procedure notes pre‐ (2006) versus postintervention (2008–2014), the proportion of femoral lines decreased from 15% to 11%, IJ lines increased from 31% to 49%, and SC lines decreased from 54% to 40% (P=0.001). Among IJ CVC procedure notes, the proportion of procedures with ultrasound documentation increased from 13% preintervention to 93% postintervention (P<0.001) (Figure 3).

In our 2014 ICU audit, the majority of CVC lines were placed by residents under supervision (>70%), and most procedures were documented with the standard CVC note template (90%). Of all CVC approaches, 66% were IJ and 4% were SC. Ultrasound was used during 86% of IJ placements. The majority of SC insertions were placed in the surgical ICU and had clear indications for placement (80%). Of those, 75% were due to trauma (limited access to the neck) or surgery (interference with the surgical site) (Table 2).
DISCUSSION
Summary
This quality‐improvement intervention demonstrates that a multidisciplinary team can successfully implement a multifaceted intervention that sustainably reduces the rate of IAP complications from CVC placement and improves patient safety over 7 years. We found high compliance with our intervention, including an increase in CVC notes and in documentation of ultrasound guidance. There was also an increase in the IJ approach in our postintervention period. We showed statistically significant long‐term reductions in both CVC‐associated and all‐cause IAP rates. From 2011 to 2014, there were only 5 cases of CVC‐associated IAP requiring chest tube placement. Post hoc analysis showed a statistically significant decrease in CLABSI rates (P<0.0001), from a preintervention rate of 1.6 infections per 1,000 central line days to a postintervention average rate of 0.68 infections per 1,000 central line days. This decrease may be related to the incorporation of wide sterile barrier techniques into our CVC training workshops, checklists, and template procedure notes.
A strength of this study is the sustained, significant long‐term reduction in IAP. Few data exist describing sustained interventions in this area. Sustainability was achieved by integrating our interventions into programs that already existed in the hospital: we incorporated our simulation training into the existing new‐resident orientation, increased the availability of existing ultrasound equipment, and continued our IAP chart review through coding quality with feedback to the involved services. The procedure note template remains easily available in our EMR, and the SC approach to CVC placement is limited to select cases.
Based on a post hoc cost‐benefit analysis, the financial benefits of decreasing the rate of IAP outweigh the costs of implementing this initiative. The purchase cost for a Site‐Rite (Bard Access Systems) ultrasound machine was $18,000, and the cost of materials for 1 workshop is $5,000 annually. Cases from the Nationwide Inpatient Sample flagged by this PSI had 7.0% excess mortality, 4.4 days of excess hospitalization, and approximately $18,000 in excess hospital charges.[17, 18] Based on these data, had we continued at our preintervention rate of CVC‐associated IAP requiring chest tube placement, we estimate there would have been 9 additional CVC‐associated IAPs requiring chest tube insertion per year, resulting in over $180,000 of additional costs annually. Against an initial cost of $100,000 for 4 workshops and the necessary equipment, we would have realized our cost savings in less than 1 year postintervention. These are all approximate costs, and further detailed analysis is needed.
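The break‐even claim can be checked with back‐of‐the‐envelope arithmetic. The sketch below uses only the approximate figures quoted above (9 averted chest‐tube IAPs per year, roughly $18,000 excess charges each, a $100,000 initial outlay), so it understates savings that would also accrue from excess hospital days and mortality.

```python
initial_cost = 100_000   # 4 workshops plus ultrasound equipment ($)
averted_per_year = 9     # estimated CVC-associated IAPs needing chest tube
excess_charges = 18_000  # approximate excess hospital charges per case ($)

annual_savings = averted_per_year * excess_charges
payback_years = initial_cost / annual_savings
```

Even counting excess charges alone, the initial investment is recovered in under a year, consistent with the estimate in the text.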
One challenge with this intervention is the culture change away from the SC approach, and trainees' concern about how they would learn to perform SC CVC insertion if needed. We suggest dedicated SC CVC ultrasound training for services that may need this approach (eg, neuroanesthesia and trauma).
Interpretation/Relation to Other Evidence
The field of implementation science can help explain why some projects succeed and others fail. We can further dissect the success of this project using an implementation science model similar to that described by French et al.,[19] who outline 4 steps for designing behavior‐change interventions: (1) determine who needs to do what differently, (2) identify which barriers and enablers need to be addressed, (3) select intervention components that could overcome the barriers and enhance the enablers, and (4) decide how behavior change can be measured and understood. Targeted behaviors needing to be done differently were the choice of CVC placement site and insertion technique. Barriers included suboptimal resident skills, low awareness of evidence‐based guidelines, and entrenched practices inconsistent with best evidence; there was also a belief that IJ lines were more likely to become infected. Barriers to change were assessed by asking members of the project team to explore with members of their services what led them to place CVC lines without ultrasound guidance. Enhancements focused on information provision, simulation practice, and persuasive communication. Behavior change was measured by tracking the number of IAPs, the site of CVC placement, and documentation of technique. Continuation of these interventions, grounded in this theoretical framework, drove maintenance of gains.
We completed our main intervention planning in 90 days and met our short‐term goal on schedule. The Institute for Healthcare Improvement (IHI) advocates such short timelines as efficient mechanisms for developing and acting on projects, and other institutions have reported similar rapid‐cycle planning and short‐term goal setting.[20]
Limitations
Our study captures the experience of a quality‐improvement team at a single academic center, and our results may not be generalizable to other institutions. Our chart review process only occurred once a case had been identified through AHRQ PSI methodology. It is possible that the PSI does not capture all cases of IAP, although we believe our coding department has a very rigorous process to look for all IAP evidence in the patient's record. We used administrative data to determine the number of hospital‐wide CVC procedures.
Our compliance data with interventions from STRIDE are based on looking for key words in procedure note documentation (so undocumented notes are not captured). To validate this, we performed a manual audit of our adherence to our intervention in 2014, and those data are consistent with the results from our STRIDE data.
Our study's observational design also cannot control for exogenous effects on physician practice relating to CVC insertion or on the overall risk of IAP. Some of our decrease in complications may stem from the increase in PICC line use. Nevertheless, our CVC‐associated IAP rate has decreased despite >6,000 non‐PICC CVCs in our ICU over the past 5 years, a rising CMI (an 18% increase in the postintervention period), and an older population of patients with CVC insertion (P<0.0001).
CONCLUSIONS
We are the first, to our knowledge, to report a measurable improvement in IAP patient outcomes that has been sustained for over 7 years. Our hospital ranked in the highest‐performance UHC quartile for all‐cause IAP from 2012 to 2014. A multidisciplinary quality‐improvement team focused on evidence, patient safety, and standardization can use a multifaceted intervention to sustainably improve patient outcomes. Promoting ultrasound‐guided IJ catheterization as the CVC insertion method of choice significantly reduced our hospital's rate of CVC‐associated IAP.
Acknowledgements
The authors acknowledge many who have contributed to this quality‐improvement project:
Irina Tokareva, Jay Lee, Kourt Bowes, and Gomathi Krishnan for data analysis; Laura Meinke for significant contributions to the website curriculum; Fred Mihm, Sarah Williams, and John Kugler for leadership in ultrasound training; Kevin Tabb and Norm Rizk for hospital financial support of simulation workshops and ultrasound machines; Pooja Loftus and Helene Grossman for statistical analysis; Eric Hadhazy for data support; Joan Hendershott for cost information; Nancy Szaflarski for project management and manuscript review; and Isabella Chu for manuscript review.
Disclosures: STRIDE (Stanford Translational Research Integrated Database Environment) is a research and development project at Stanford University to create a standards‐based informatics platform supporting clinical and translational research. This STRIDE project was supported by the National Center for Research Resources and the National Center for Advancing Translational Sciences, National Institutes of Health, through grant UL1 RR025744. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH. The authors report no conflicts of interest.
Iatrogenic pneumothorax (IAP) is a complication of invasive procedures that is associated with substantial morbidity and some mortality.[1] IAP is often avoidable, and in many cases can be prevented through adherence to evidence‐based guidelines and procedural techniques known to reduce the incidence of IAP.[2] IAP may occur with a subclavian (SC) or internal jugular (IJ) central venous catheter (CVC) insertion, but is more frequently associated with the SC approach.[3] Ultrasound guidance during IJ CVC insertion is associated with a lower risk as compared to guidance by anatomical landmarks.[4, 5] Other bedside procedures that are known to cause IAP include thoracentesis. This risk can also be reduced with the use of ultrasound guidance.[6]
Including simulation in training for CVC insertion has been demonstrated in meta‐analyses to improve both learner outcomes, including simulator performance and perceived confidence, and patient outcomes, including fewer failed CVC attempts and a reduced incidence of IAP.[7] Even brief simulation workshops lasting less than 2 hours can improve patient safety during CVC insertion.[8]
The implementation of ultrasound‐based simulation and improved adherence to the actual use of ultrasound at the bedside can be motivated by tying competency‐based educational objectives (eg, CVC insertion) to clinical outcomes (ie, rates of IAP) and tracking both as part of a continuous quality‐improvement cycle.[9] Adherence to best practices for CVC insertion can also be improved through standardized hospital‐wide policies and hands‐on training.[10] Involving many stakeholders, including nurses, physicians, nurse practitioners, and physician assistants, in a multidisciplinary team has been shown to help alter entrenched behaviors and reduce the incidence of central line–associated bloodstream infections through long‐term adherence to evidence‐based interventions.[11]
LOCAL PROBLEM
The Agency for Healthcare Research and Quality (AHRQ) has designed Patient Safety Indicators (PSIs) to screen administrative data for potentially preventable complications.
Our hospital is a member of the University HealthSystem Consortium (UHC), which benchmarks these indicators across member institutions.
Despite the limitations of administrative data, the PSI can highlight areas where quality‐improvement efforts might be best directed. In 2005 and 2006, our hospital ranked within the lowest UHC performance quartile for the all‐cause IAP PSI.
During FY 2006 (September 2005–August 2006), root‐cause analysis of IAP cases at our hospital found that CVC insertion (40%) was the most common procedure associated with IAP, with SC insertion causing 69% of CVC‐associated IAP. Other common procedures associated with IAP were operative/pacemaker (30%), thoracentesis (25%), and ventilator‐associated (5%). Ultrasound was not used in 2 of 5 cases of IJ CVC placement and 3 of 5 thoracentesis cases. Only 44% of CVC insertions had a procedure note.
Intended Improvement/Study Question
Our team set out to plan and implement a set of multifaceted interventions within 90 days. The short‐term goal was a 50% reduction in the CVC IAP and all‐cause IAP rate within 18 months, and the long‐term goal was sustained reduction of CVC IAP and all‐cause IAP rate.
METHODS
The format of this article is based on the standards for quality‐improvement reporting excellence guidelines for the reporting of studies on the effectiveness of quality‐improvement interventions.[14]
Setting
Stanford University Medical Center is an academic medical center with 465 beds and over 25,000 inpatient admissions per year, providing both general acute care and tertiary medical care. Residents perform bedside CVC procedures when central venous access is needed in the intensive care unit (ICU), operating room (OR), and inpatient units. Prior to this project, ultrasound equipment was available only in the emergency department (ED) and ICUs. There was no formal CVC procedure supervision policy, no CVC training curriculum, and no procedure note template for documentation of CVC insertion.
Planning the Interventions
A multidisciplinary quality‐improvement team met weekly during the 90‐day design period from January 2007 to March 2007. Our team included representatives from the departments of medicine, anesthesia and critical care, surgery, nursing, and emergency medicine. We also partnered with our institution's clinical and administrative leaders, experts in simulation, and the hospital quality department.
We hypothesized that a standardized set of education and training interventions promoting ultrasound‐guided IJ CVC insertion as the method of choice at our hospital would significantly reduce our rate of CVC‐associated IAP. Our multifaceted intervention included: (1) clinical and documentation standards based on evidence, (2) cognitive aids, (3) simulation training, (4) purchase and deployment of ultrasound equipment, and (5) feedback to clinical services.
Our team followed the define, measure, analyze, improve, control (DMAIC) framework.[15] We set interval goals with target completion dates throughout the 90‐day period, identified owners of each goal, and tracked progress with a shared spreadsheet.
In the 90‐day intervention, we accomplished the following: (1) conducted root‐cause analysis of IAP cases for fiscal year 2006, (2) created clinical and documentation standards around CVC placement, (3) created cognitive aids and procedure note templates, (4) developed simulation training courses, and (5) requested purchase of additional ultrasound equipment.
Data Collection
To evaluate our progress in reducing the rates of IAP, we tracked the incidence of IAP using UHC and AHRQ PSI methodology. In collaboration with our hospital's quality department, we manually reviewed every PSI‐identified case of IAP. This review focused on identifying whether pneumothorax actually occurred and whether it was associated with CVC insertion. For cases associated with CVC, data were collected on patient location and service, procedure site, whether ultrasound was used, whether a chest tube was required, and the final disposition of the patient.
Demographic data (age, gender, case mix index [CMI]) shown in Table 1 were obtained through MIDAS+ Solutions (Tucson, Arizona), a proprietary database that contains healthcare management coded data. Total hospital CVC insertion rates were calculated using International Classification of Diseases, Ninth Revision (ICD‐9) codes 38.93 and 38.97. ICU central line–associated bloodstream infection (CLABSI) data were obtained from internal collection by our infection control team. Data on the number and location of CVCs placed in the ICU were obtained from nursing flow sheets in our electronic medical record (EMR). Cost information was provided by our finance department using internal accounting.
| Patients With CVC Insertion | 2006 | 2007 | 2008 | 2009 | 2010 | 2011 | 2012 | 2013 | 2014 |
|---|---|---|---|---|---|---|---|---|---|
| Age, y (mean) | 55.0 | 55.5 | 55.0 | 57.0 | 56.5 | 58.5 | 57.5 | 59.0 | 58.5 |
| % female | 47.0 | 49.5 | 47.0 | 48.8 | 46.2 | 46.1 | 45.7 | 46.2 | 45.7 |
| Case‐mix index | 3.08 | 3.35 | 3.21 | 3.40 | 3.71 | 3.91 | 3.92 | 3.92 | 4.08 |
| Total no. of CVCs/year* | 1,593 | 1,141 | 1,589 | 2,250 | 2,441 | 2,774 | 2,754 | 2,722 | 2,845 |
| No. of CVCs/year in ICU | NA | NA | NA | 1,502 | 1,357 | 1,345 | 1,316 | 1,421 | 1,590 |
| No. of subclavians/year in ICU | NA | NA | NA | 167 | 75 | 70 | 83 | 75 | 97 |
| No. of IJs/year in ICU | NA | NA | NA | 898 | 773 | 681 | 677 | 713 | 876 |
| No. of femorals/year in ICU | NA | NA | NA | 212 | 152 | 203 | 171 | 198 | 206 |
| No. of PICCs/year in ICU | NA | NA | NA | 225 | 357 | 391 | 385 | 435 | 411 |

| Patients With CVC Insertion | Preintervention (2006) | Postintervention (2008–2014) | P Value |
|---|---|---|---|
| Age, y (mean) | 55.2 | 58.7 | <0.0001 |
| % female | 47.0% | 46.4% | 0.642 |
| Case‐mix index | 3.08 | 3.73 | <0.0001 |
| CVC insertion rate | 8.1% | 11.4% | <0.0001 |

| All Inpatients | 2006 | 2007 | 2008 | 2009 | 2010 | 2011 | 2012 | 2013 | 2014 |
|---|---|---|---|---|---|---|---|---|---|
| Age, y (mean) | 57.1 | 57.2 | 56.8 | 57.2 | 57.5 | 58.0 | 58.0 | 57.9 | 58.3 |
| % female | 51.6 | 51.2 | 52.4 | 51.7 | 51.1 | 51.5 | 50.3 | 49.9 | 50.1 |
| Case‐mix index | 1.86 | 1.98 | 1.96 | 1.99 | 1.96 | 2.02 | 2.03 | 2.07 | 2.23 |

| All Inpatients | Preintervention (2006) | Postintervention (2008–2014) | P Value |
|---|---|---|---|
| Age, y (mean) | 57.1 | 57.6 | <0.01 |
| % female | 51.6% | 50.9% | 0.07 |
| Case‐mix index | 1.86 | 2.03 | 0.13 |

| Central Line–Associated Bloodstream Infections per 1,000 Central Line Days | Preintervention | Postintervention | P Value |
|---|---|---|---|
| Short term (2006 vs 2008) | 1.8 | 0.60 | 0.004 |
| Long term (2006 vs 2008–2014) | 1.8 | 0.68 | <0.0001 |
The project was granted a Notice of Determination of Approval by the Stanford Administrative Panels for the Protection of Human Subjects (institutional review board).
Methods of Evaluation/Analysis
For the purpose of this analysis, the preintervention period was defined as January 1, 2006 through December 31, 2006, our first year of IAP case review. We defined the intervention period as January 1, 2007 through December 31, 2007, during which we planned and implemented hospital‐wide standardization of CVC insertion practices and incorporated CVC insertion training simulation into resident orientation in July 2007. The postintervention period was defined as January 1, 2008 through December 31, 2014.
All statistical analyses were performed using Stata version 12.1 (StataCorp, College Station, TX). Chi‐square (χ²) tests were used to determine statistical differences in pre‐ versus postintervention patient demographic data (age, gender, CMI), CVC insertion rates, and CLABSI rates. Because IAP is a rare event, a statistical process control g‐chart was created using QI Macros (KnowWare International, Inc., Denver, CO) to show the number of CVC procedures between IAPs. χ² and Fisher exact tests were used to determine statistical differences in CVC anatomic location and use of ultrasound pre‐ and postintervention. A 2‐sided z test for a difference in proportions was used to determine statistical differences in the CVC‐related IAP rate and all‐cause IAP rate pre‐ and postintervention.
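The g‐chart mentioned above plots the number of CVC procedures between successive IAPs. A minimal sketch of the limit calculation, assuming the common geometric‐distribution (Benneyan) parameterization rather than the exact QI Macros implementation:

```python
import math

def g_chart_limits(g_bar):
    """Center line and 3-sigma control limits for a g-chart, where g_bar is
    the average number of procedures between events and the geometric
    standard deviation is sqrt(g_bar * (g_bar + 1))."""
    sigma = math.sqrt(g_bar * (g_bar + 1))
    return max(0.0, g_bar - 3 * sigma), g_bar, g_bar + 3 * sigma

# Preintervention average of 114.8 procedures between IAPs (Results).
lcl, cl, ucl = g_chart_limits(114.8)
```

Sustained runs of between‐event counts far above the preintervention center line, like the postintervention average of 460.7 procedures between IAPs, are what signal the special‐cause variation shown in Figure 2.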
Measuring Adherence to Intervention
Location of CVC Placement and Ultrasound Guidance Pre‐ Versus Postintervention
We utilized the Stanford Clinical Informatics Center (SCCI) services for obtaining patient counts. Custom queries were performed on SCCI's Stanford Translational Research Integrated Database Environment (STRIDE) platform[16] to search Stanford Hospital electronic health records. This search primarily involved obtaining counts of patients with clinical notes containing the keywords of interest. To identify documentation of CVC placement from 2006 to 2014, procedure or operative notes containing the words "central line" or "CVC" were counted. Further subcounts were obtained by searching for additional keywords such as "PICC" (peripherally inserted central catheter), "femoral," "jugular," "subclavian," and "ultrasound."
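The keyword‐count logic can be illustrated with a toy example; the note texts and the `count_notes` helper below are hypothetical stand‐ins, not the actual STRIDE query interface.

```python
# Hypothetical snippets standing in for EMR procedure/operative notes.
notes = [
    "Procedure note: right internal jugular central line placed under ultrasound guidance.",
    "Operative note: left subclavian CVC inserted without complication.",
    "Procedure note: femoral central line placed emergently.",
]

def count_notes(notes, *keywords):
    """Count notes containing at least one of the keywords (case-insensitive)."""
    keys = [k.lower() for k in keywords]
    return sum(any(k in note.lower() for k in keys) for note in notes)

total_cvc = count_notes(notes, "central line", "cvc")  # denominator
by_site = {site: count_notes(notes, site)
           for site in ("jugular", "subclavian", "femoral")}
with_us = count_notes(notes, "ultrasound")
```

Note that simple substring matching also counts negated mentions (eg, "ultrasound not used"), one reason a manual audit was also performed to validate the automated counts.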
Adherence to Intervention in the ICU in 2014
A total of 100 charts from patients with a CVC in our medical and surgical ICUs in 2014 were reviewed to evaluate current central line placement practice and the sustainability of our intervention. Fifty charts were initially randomly selected from the ICU cohort; for patients who had multiple lines placed, only the first line was reviewed. Because the initial audit did not yield enough SC lines and we wanted to review more IJ lines, we randomly selected an additional 25 patients with SC lines and 25 patients with IJ lines. The following data were collected during chart review: primary team, location of line placement, use of ultrasound, use of the standard procedure template, supervision, level of training of the supervisor, and level of training of the staff member who performed the procedure.
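The two‐stage sample can be sketched as follows; the cohort, sites, and seed are hypothetical, and only the sampling structure (50 random charts topped up with 25 SC and 25 IJ charts) mirrors the audit.

```python
import random

random.seed(42)  # reproducible illustration only

# Hypothetical ICU cohort: (chart_id, first_line_site) pairs.
cohort = [(i, random.choice(["IJ", "SC", "femoral"])) for i in range(1000)]

# Stage 1: 50 charts selected at random from the whole cohort.
stage1 = random.sample(cohort, 50)

# Stage 2: add 25 SC and 25 IJ charts not already in the sample.
remaining = [c for c in cohort if c not in stage1]
stage2 = (random.sample([c for c in remaining if c[1] == "SC"], 25)
          + random.sample([c for c in remaining if c[1] == "IJ"], 25))

audit = stage1 + stage2  # 100 charts reviewed in total
```

The stage 2 top‐up deliberately oversamples SC and IJ lines, so the audit describes compliance by site rather than the site mix of the underlying cohort.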
Outcomes
The rate of CVC‐associated IAP was calculated as the total number of IAPs attributed to CVCs divided by the total number of CVCs inserted, as determined by ICD‐9 codes 38.93 and 38.97. The total IAP rate was calculated as the total number of IAPs per 1,000 discharges.
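Using the 2006 counts reported in the Results, the outcome definition above reproduces the quoted preintervention rate; the figures are taken directly from the text.

```python
cvc_associated_iap = 15  # IAPs attributed to CVC insertion in 2006
total_cvcs = 1593        # CVC insertions in 2006 (ICD-9 38.93, 38.97)

# Rate of CVC-associated IAP, expressed as a percentage of insertions.
cvc_iap_rate = cvc_associated_iap / total_cvcs * 100
```

This gives 0.94%, the preintervention rate reported in Table 2.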
RESULTS
Interventions
Our team began the intervention in early 2007 with promotion of ultrasound‐guided IJ catheterization. Clinical exceptions included: (1) trauma or code situations where access to the neck is limited, (2) suspected or confirmed neck injuries, (3) presence of a tracheostomy, and (4) bilateral internal jugular sites unsuitable for catheterization.
Our hospital adopted new formal CVC insertion policies consistent with the above training and education efforts. All physicians were required to document CVC insertions using the template available in the EMR. To be certified to perform CVC insertion independently, trainee physicians were required to complete the simulation training and successfully place a minimum of 5 CVCs directly supervised by an already‐certified physician. This was consistent with the Accreditation Council for Graduate Medical Education suggested minimum requirement in 2007. In our critical care units, all CVC insertions must be supervised by an ICU fellow or attending.
To reinforce the on‐the‐ground work by our physician leaders, we created 2 education tools to embed best practices into our CVC insertion workflow: a checklist of best practices for CVC insertion, distributed throughout the hospital via central line kits and educational flyers, and a CVC insertion procedure note template consistent with California Department of Public Health documentation requirements, made available in our EMR.
In June 2007, we integrated CVC insertion simulation training into procedure workshops required for all medicine, surgery, anesthesia, and emergency medicine trainees during their intern year. These workshops promoted ultrasound‐guided IJ catheterization and presented the supporting evidence for the new IJ site preference. Training sessions lasted 2 to 3 hours and included a demonstration of best‐practice CVC insertion, as well as hands‐on practice with simulation models supervised by an instructor using a standardized CVC checklist. These trainings used simulators including the Blue Phantom human torso model.
Hospital administration provided funds to purchase ultrasound machines for patient units such as medicine, cardiology, the ED, and the ICUs. A total of 4 Site‐Rite (Bard Access Systems, Inc., Salt Lake City, UT) ultrasound machines were purchased in 2007. The hospital has continued to purchase ultrasound units yearly and had 53 ultrasound units in 2014.
Cases of IAP were continuously reviewed throughout the intervention period. Based on their higher CVC‐associated IAP rates, the ORs and the catheterization lab were identified as having opportunities for improvement. In 2008, hospital quality‐improvement leadership met with physician leaders in these areas to review their CVC‐related IAP data and to discuss strategies to reduce their IAP rates, including lessons learned from other services that had successfully decreased their rates.
To sustain our gains, we continue to review all IAPs through our coding quality, clinical documentation, and quality reporting departments, and through peer review. We have implemented other strategies to decrease IAP, such as ultrasound guidance for bedside thoracentesis procedures, which became possible once more ultrasound devices were available. Training for ultrasound‐guided thoracentesis was done by our procedure‐team attending during supervision of residents.
Outcomes
Preintervention (January 1, 2006 to December 31, 2006)
There were a total of 26 cases of IAP in 2006. Of these, 15 (58%) were associated with CVC insertion (Figure 1). The single procedure associated with the largest proportion of IAP was SC CVC insertion (11 cases, 42% of all IAP cases). Eleven CVC‐associated IAPs were significant enough to require chest tube placement. Our hospital recorded a total of 1593 CVC insertions (ICD‐9 codes 38.93 and 38.97) in 2006.

Postintervention (January 1, 2008 to December 31, 2014)
There were a total of 80 cases of IAP over 7 years, of which 24 (30%) were associated with CVC insertion. Of these, 16 required chest tube placement. In the last 4 years of the postintervention period (20112014), there were only 5 cases of CVC‐associated IAP requiring chest tube placement (Figure 1). There were a total of 12,000 CVC insertions recorded over the same period.
We successfully met both our short‐ and long‐term goals. Our preintervention CVC‐associated IAP rate was 0.94%, and our post‐intervention rate during 2008 was 0.44%, a short‐term reduction of 53% (P=0.088). Our average postintervention CVC‐associated IAP rate for the years 2008 through 2014 was 0.13%, a significant long‐term reduction of 86% (P<0.0001) (Table 2). The decrease in CVC‐associated IAP rates occurred despite an older patient population (P<0.001) and a higher CMI (P<0.001) in postintervention patients who received a CVC (Table 1). Special cause variation corresponding to a change in our process is demonstrated in Figure 2. The preintervention average number of procedures between IAP was 114.8 and increased to 460.7 in the postintervention period.
Total CVC (n=95) | Subclavian (n=29) | Internal Jugular (n=58) | Femoral (n=8) | ||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| |||||||||||||
Compliance to intervention | |||||||||||||
US guided | 68.1% | 20.7% | 86.2% | 100.0% | |||||||||
Procedure note completion | 90.4% | 93.1% | 86.2% | 100.0% | |||||||||
Supervision | 70.2% | 77.8% | 73.1% | 87.5% | |||||||||
Level of training | |||||||||||||
Resident | 61.1% | 58.6% | 60.3% | 75.0% | |||||||||
Fellow | 25.3% | 27.6% | 24.1% | 25.0% | |||||||||
Attending | 4.2% | 6.9% | 3.4% | 0.0% | |||||||||
Advance practitioner | 3.2% | 3.4% | 3.4% | 0.0% | |||||||||
Unknown | 6.3% | 3.4% | 8.6% | 0.0% | |||||||||
Supervisor type | |||||||||||||
Resident | 3.0% | 4.8% | 2.6% | 0.0% | |||||||||
Fellow | 54.5% | 33.3% | 57.9% | 100.0% | |||||||||
Attending | 42.4% | 61.9% | 39.5% | 0.0% | |||||||||
Location of CVC Placement | Internal Jugular (n=25) | Subclavian (n=25) | |||||||||||
MICU | 32.0% | 32.0% | |||||||||||
SICU* | 40.0% | 52.0% | |||||||||||
Operating room | 28.0% | 16.0% | |||||||||||
Average no. of attempts/procedure | 1.4 | 1.5 | |||||||||||
Indications for subclavian insertion (n=25) | |||||||||||||
Trauma/surgical site | 60.0% | ||||||||||||
Need for additional access | 16.0% | ||||||||||||
Unsuccessful IJ placement | 4.0% | ||||||||||||
Unclear | 20.0% | ||||||||||||
Iatrogenic Pneumothorax Rate (20062014) | Year | ||||||||||||
2006 | 2007 | 2008 | 2009 | 2010 | 2011 | 2012 | 2013 | 2014 | |||||
% of CVC insertions associated with IAP | 0.94 | 1.49 | 0.44 | 0.13 | 0.20 | 0.07 | 0.04 | 0.11 | 0.07 | ||||
All‐cause IAP per 1,000 discharges | 1.32 | 1.29 | 0.98 | 0.71 | 0.83 | 0.49 | 0.13 | 0.35 | 0.23 | ||||
Preintervention | Postintervention | P Value | |||||||||||
CVC‐ associated IAP short term (2006 vs 2008) | 0.94% | 0.44% | 0.088 | ||||||||||
CVC‐associated IAP long term (2006 vs 20082014) | 0.94% | 0.13% | <0.0001 | ||||||||||
All‐cause IAP per 1,000 discharges short term (2006 vs 2008) | 1.32 | 0.98 | <0.0001 | ||||||||||
All‐cause IAP per 1,000 discharges long term (2006 vs 2008‐14) | 1.32 | 0.52 | <0.0001 |

With the decrease in CVC‐associated IAP, we also saw a decrease in our all‐cause IAP rate per 1000 discharges from 1.32 in 2006 to 0.98 in 2008. This represents a 26% short‐term reduction (P<0.0001). We also saw a decrease in our all‐cause IAP rate per 1000 discharges to 0.52 from 2008 to 2014, representing a 61% long‐term reduction (P<0.0001). This decrease in all‐cause IAP postintervention occurred despite an older patient population (P<0.01) for all discharges. Our hospital is now in the highest performance UHC quartile for all‐cause IAP in 2012 to 2014.
After our multifaceted intervention in 2007, there was substantially more and consistent documentation of CVC procedure notes from less than 500 in 2006 to greater than 2000 in 2009. The distribution of CVC procedure notes in the pre‐ (2006) versus postintervention (20082014) period showed a decrease in the proportion of femoral lines from 15% to 11%, increase in IJ lines from 31% to 49%, and a decrease in SC from 54% to 40% (P=0.001). The distribution of IJ CVC procedure notes in the pre‐ (2006) versus postintervention (20082014) period showed an increase in the proportion of procedures with ultrasound documentation from 13% to 93% (P<0.001) (Figure 3).

In our ICU 2014 audit, the majority of CVC lines were placed by residents under supervision (>70%), and most used the standard CVC note template to document the procedure (90%). Of the total CVC approach, 66% were IJ and 4% were SC. Eighty‐six percent used ultrasound during IJ placement. The majority of SC insertions were placed in the surgical ICU and had clear indications (80%) for placement. Of those, 75% were due to trauma (limited access to neck) or surgery (interfering with surgical site) (Table 2).
DISCUSSION
Summary
This quality‐improvement intervention demonstrates that a multidisciplinary team can successfully implement a multifaceted intervention that sustainably reduces the rate of IAP complications from CVC placement and improves patient safety over 7 years. We found high compliance with our intervention, which included an increase in CVC notes and documentation of ultrasound guidance. There was also an increase in the IJ approach in our postintervention period. We showed statistically significant long‐term reductions in both CVC‐associated and all‐cause IAP rates. From 2011 to 2014, there were only 5 cases of CVC‐associated IAP requiring chest tube placement. Post hoc analysis showed a statistically significant decrease in CLABSI rates (P<0.0001) from a preintervention rate of 1.6 infections per 1000 central line days to postintervention average rate of 0.68 infections per 1000 central line days. This decrease may be related to the incorporation of wide sterile barrier techniques in our CVC training workshops, checklists, and template procedure notes.
A strength of this study is the sustained, significant long‐term reduction in IAP; few data exist describing sustained interventions in this area. Sustainability was achieved by integrating our interventions into programs that already existed in the hospital: we incorporated our simulation training into the existing new‐resident orientation, increased the availability of existing ultrasound equipment, and continued our IAP chart review through coding quality with feedback to the involved services. The procedure note template remains easily available in our EMR, and the SC approach to CVC placement is limited to select cases.
Based on a post hoc cost‐benefit analysis, the financial benefits of decreasing the rate of IAP outweigh the costs associated with implementation of this initiative. The purchase cost for a Site‐Rite (Bard Access Systems) ultrasound machine was $18,000. The cost of materials for 1 workshop is $5000 annually. Cases from the Nationwide Inpatient Sample that were flagged by this PSI had 7.0% excess mortality, 4.4 days of excess hospitalization, and approximately $18,000 in excess hospital charges.[17, 18] Based on these data, if we had continued at our preintervention rate of CVC‐associated IAP requiring chest tube placement, we would estimate 9 additional CVC‐associated IAPs requiring chest tube insertion per year. This would result in over $180,000 of additional costs annually. Based on an initial cost of $100,000 for 4 workshops and the necessary equipment, we would have realized our cost savings in less than 1 year postintervention. These are all approximate costs, and further detailed analysis is needed.
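The break‐even arithmetic in the paragraph above can be sketched as a back‐of‐the‐envelope check. All figures come from the text; note that 9 avoided cases times the approximate $18,000 in excess charges per case is $162,000 in charges alone, somewhat under the paper's "over $180,000" estimate, which presumably includes other excess costs (eg, excess hospital days):

```python
# Figures quoted in the cost-benefit paragraph above.
EXCESS_CHARGE_PER_IAP = 18_000   # approximate excess hospital charges per flagged case
AVOIDED_IAP_PER_YEAR = 9         # estimated CVC-associated IAPs avoided annually
INITIAL_COST = 100_000           # 4 workshops plus ultrasound equipment
ANNUAL_WORKSHOP_COST = 5_000     # materials for 1 workshop per year

# Savings in excess charges alone; the paper's estimate is somewhat higher.
annual_savings = AVOIDED_IAP_PER_YEAR * EXCESS_CHARGE_PER_IAP
payback_years = INITIAL_COST / (annual_savings - ANNUAL_WORKSHOP_COST)
print(f"annual savings >= ${annual_savings:,}; payback in {payback_years:.2f} years")
```

Even using the more conservative charges-only figure, the payback period comes out under 1 year, consistent with the estimate in the text.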
One challenge with this intervention is the culture change away from the SC approach, and trainees' concern about how they would learn to perform SC CVC placement if needed. We would suggest dedicated ultrasound training in SC CVC placement for those services that may need this approach (eg, neuroanesthesia and trauma).
Interpretation/Relation to Other Evidence
The field of implementation science can help explain why some projects are successful and others fail. We can further dissect the success of this project using an implementation science model similar to that described by French et al.,[19] who outline 4 steps for designing behavior‐change interventions: (1) identify who needs to do what differently, (2) identify which barriers and enablers need to be addressed, (3) select intervention components that could overcome the barriers and enhance the enablers, and (4) determine how behavior change can be measured and understood. The targeted behaviors needing to be done differently were the choice of CVC placement site and insertion technique. Barriers included suboptimal resident skills, low awareness of evidence‐based guidelines, entrenched practices inconsistent with best evidence, and a belief that IJ lines were more likely to become infected. Barriers to change were assessed by asking members of the project team to explore with members of their service what led them to place CVC lines without ultrasound guidance. Enhancements focused on information provision, simulation practice, and persuasive communication. Behavior change was measured by tracking the number of IAPs, the site of CVC placement, and documentation of technique. Continuation of these interventions, based on this theoretical framework, drove maintenance of gains.
We completed our main intervention planning in 90 days and met our short‐term goal on schedule. The Institute for Healthcare Improvement (IHI) advocates such short timelines as efficient mechanisms for developing and acting on projects. Other institutions have reported on similar rapid‐cycle planning and short‐term goal setting.[20]
Limitations
Our study captures the experience of a quality‐improvement team at a single academic center, and our results may not be generalizable to other institutions. Our chart review process only occurred once a case had been identified through AHRQ PSI methodology. It is possible that the PSI does not capture all cases of IAP, although we believe our coding department has a very rigorous process to look for all IAP evidence in the patient's record. We used administrative data to determine the number of hospital‐wide CVC procedures.
Our compliance data with interventions from STRIDE are based on looking for key words in procedure note documentation (so undocumented notes are not captured). To validate this, we performed a manual audit of our adherence to our intervention in 2014, and those data are consistent with the results from our STRIDE data.
Our study's observational design also cannot control for exogenous effects on physician practice relating to CVC insertion or on the overall risk of IAP. Some of our decrease in complications may stem from the increase in PICC line use. Nevertheless, our CVC‐associated IAP rate has decreased despite >6000 non‐PICC CVCs placed in our ICU over the past 5 years, a rising case mix index (CMI; 18% increase in the postintervention period), and an older population of patients undergoing CVC insertion (P<0.0001).
CONCLUSIONS
To our knowledge, we are the first to report a measurable reduction in IAP that has been sustained for over 7 years. Our hospital was in the highest‐performance UHC quartile for all‐cause IAP from 2012 to 2014. A multidisciplinary quality‐improvement team focused on evidence, patient safety, and standardization can use a multifaceted intervention to sustainably improve patient outcomes. Promoting ultrasound‐guided IJ catheterization as the CVC insertion method of choice significantly reduced our hospital's rate of CVC‐associated IAP.
Acknowledgements
The authors acknowledge the many people who contributed to this quality‐improvement project:
Irina Tokareva, Jay Lee, Kourt Bowes, and Gomathi Krishnan for data analysis; Laura Meinke for significant website curriculum; Fred Mihm, Sarah Williams, and John Kugler for leadership in ultrasound training; Kevin Tabb and Norm Rizk for hospital financial support of simulation workshops and ultrasound machines; Pooja Loftus and Helene Grossman for statistical analysis; Eric Hadhazy for data support; Joan Hendershott for cost information; Nancy Szaflarski for project management and manuscript review; and Isabella Chu for manuscript review.
Disclosures: STRIDE (Stanford Translational Research Integrated Database Environment) is a research and development project at Stanford University to create a standards‐based informatics platform supporting clinical and translational research. This STRIDE project was supported by the National Center for Research Resources and the National Center for Advancing Translational Sciences, National Institutes of Health, through grant UL1 RR025744. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH. The authors report no conflicts of interest.
REFERENCES
1. Significance of iatrogenic pneumothoraces. Chest. 1994;105(4):1147–1150.
2. How to avoid and manage a pneumothorax. J Vasc Access. 2006;7(1):7–14.
3. Iatrogenic pneumothorax: etiology, incidence and risk factors. Thorac Cardiovasc Surg. 2009;57(5):286–290.
4. Real‐time ultrasound‐guided catheterisation of the internal jugular vein: a prospective comparison with the landmark technique in critical care patients. Crit Care. 2006;10(6):R162.
5. Safe placement of central venous catheters: a measured approach. J Intens Care Med. 2011;26(6):392–396.
6. Reducing iatrogenic risk in thoracentesis: establishing best practice via experiential training in a zero‐risk environment. Chest. 2009;135(5):1315–1320.
7. Use of simulation‐based education to improve outcomes of central venous catheterization: a systematic review and meta‐analysis. Acad Med. 2011;86(9):1137–1147.
8. A prerotational, simulation‐based workshop improves the safety of central venous catheter insertion: results of a successful internal medicine house staff training program. Chest. 2011;140(3):652–658.
9. Linking residency training effectiveness to clinical outcomes: a quality improvement approach. Jt Comm J Qual Patient Saf. 2010;36(5):203–208.
10. Education of physicians‐in‐training can decrease the risk for vascular catheter infection. Ann Intern Med. 2000;132(8):641–648.
11. A multidisciplinary approach to reduce central line‐associated bloodstream infections. Jt Comm J Qual Patient Saf. 2013;39(2):61–69.
12. Validity of selected Patient Safety Indicators: opportunities and concerns. J Am Coll Surg. 2011;212(6):924–934.
13. Cases of iatrogenic pneumothorax can be identified from ICD‐9‐CM coded data. Am J Med Qual. 2010;25(3):218–224.
14. SQUIRE development group. Publication guidelines for quality improvement studies in health care: evolution of the SQUIRE project. BMJ. 2009;338:a3152.
15. The Quality Toolbox. 2nd ed. Milwaukee, WI: ASQ Quality Press; 2005.
16. STRIDE—an integrated standards‐based translational research informatics platform. AMIA Annu Symp Proc. 2009;2009:391–395.
17. Accidental iatrogenic pneumothorax in hospitalized patients. Med Care. 2006;44(2):182–186.
18. Excess length of stay, charges, and mortality attributable to medical injuries during hospitalization. JAMA. 2003;290(14):1868–1874.
19. Developing theory‐informed behaviour change interventions to implement evidence into practice: a systematic approach using the Theoretical Domains Framework. Implement Sci. 2012;7:38.
20. Using rapid‐cycle quality improvement methodology to reduce feeding tubes in patients with advanced dementia: before and after study. BMJ. 2004;329(7464):491–494.
© 2015 Society of Hospital Medicine