The Official Newspaper of the American Association for Thoracic Surgery
AATS Annual Meeting Registration Packages Still Available
Health Care Professional Package: includes registration for the Saturday Courses, Sunday Symposia and the 96th Annual Meeting (Monday-Wednesday). Registration is $500 (a savings of $400).
Resident/Fellow and Medical Student Package: includes registration for the Saturday Courses, Sunday Symposia and the 96th Annual Meeting (Monday-Wednesday). Registration is $300.
Saturday Courses and Sunday Symposia Registration: Register for a Saturday course and/or a Sunday symposium and have access to all other courses/symposia taking place that same day. Note: Registration for the Saturday courses and/or Sunday symposia is separate from the Annual Meeting fee.
See you at AATS Week 2016!
AATS Week 2016 includes Two Terrific Events
AATS Week 2016 Registration & Housing Open!
Aortic Symposium
May 12–13, 2016
New York, NY
(More information below)
96th Annual Meeting
May 14-18, 2016
Baltimore, MD
(More information below)
Register for AATS Week 2016 today & receive a $100 discount off the AATS Aortic Symposium registration fee.
AATS Aortic Symposium
May 12–13, 2016
New York, NY
Course Directors
Joseph S. Coselli
Steven L. Lansman
The 2016 AATS Aortic Symposium is a two-day meeting focused on the pathophysiology, diagnosis and treatment of aortic aneurysms and dissections. The conference is designed for cardiovascular and thoracic surgeons, residents, perfusionists, ICU and OR nurses and others involved in aortic disease patient care. Faculty members include world leaders in the field who will share their experiences treating difficult aortic disease cases.
Be sure to register for a Friday Morning Breakfast Breakout session.
AATS 96th Annual Meeting
May 14-18, 2016
Baltimore, MD
President & Annual Meeting Chair
Joseph S. Coselli
Annual Meeting Co-Chairs
Charles D. Fraser
David R. Jones
View Preliminary Program, Speakers, Presentations and Full Abstracts
Don’t miss this year’s exciting program including:
Saturday Skills Courses featuring Combined Luncheon Speaker: Denton A. Cooley, followed by Hands-On Sessions
Sunday Postgraduate Symposia with Legends Luncheons featuring Leonard L. Bailey, Joel D. Cooper and John L. Ochsner
New: Survival Guide for the Cardiothoracic Surgical Team course followed by a Hands-On Session (Available to Residents, Fellows and Health Care Professionals Only)
Presidential Address: Competition: Perspiration to Inspiration “Aut viam inveniam aut faciam,” Joseph S. Coselli, Baylor College of Medicine
Honored Guest Lecture: Brian Kelly, Notre Dame Head Football Coach and a veteran of 23 seasons as a collegiate head coach. He brings a championship tradition to his fifth year as the 29th head football coach at the University of Notre Dame.
Emerging Technologies & Techniques For: Adult Cardiac and General Thoracic
VAD/ECMO Session
Masters of Surgery Video Sessions
AATS Learning Center: Featuring cutting-edge case videos of novel procedures and surgical techniques.
Check Out the AATS Week Video
Learn more about the exciting program planned for the AATS Aortic Symposium and 2016 Annual Meeting.
Congratulations to 2016 “Honoring Our Mentors” Fellows
Winners of the F. Griffith Pearson Fellowships and the Marc R. de Leval Fellowship announced.
F. Griffith Pearson Fellowship
Nestor Villamizar Ortiz, MD
Institution: University of Miami
Host Sponsor: Mark Onaitis, MD
Host Institution: Duke University Medical Center
Fellowship Focus: Robotic Surgery for Malignant and Benign Esophageal Pathology
Xiao Li, MD
Institution: Peking University People’s Hospital, Beijing, China
Host Sponsor: Mark K. Ferguson, MD
Host Institution: Department of Thoracic Surgery, University of Chicago
Fellowship Focus: Advanced Minimally Invasive Thoracic Surgery and Robotic Thoracic Surgery
Marc R. de Leval Fellowship
Jeremy Herrmann, MD
Institution: The Children’s Hospital of Philadelphia
Host Sponsor: David Barron, MD
Host Institution: Birmingham Children’s Hospital, UK
Fellowship Focus: Management of ccTGA
Risk score predicts rehospitalization after heart surgery
PHOENIX – A simple, five-element formula can help identify the patients undergoing heart surgery who face the greatest risk for a hospital readmission within 30 days following discharge from their index hospitalization.
The surgeons who developed this formula hope to use it in an investigational program that will target intensified management resources to the postsurgical patients who face the highest readmission risk, in order to cut rehospitalizations and improve their clinical status and quality of life.
The analysis that produced this formula also documented that the worst offender for triggering rehospitalizations following heart surgery is fluid overload, the proximate cause of 23% of readmissions, Dr. Arman Kilic said at the annual meeting of the Society of Thoracic Surgeons. The next most common cause was infection, which led to 20% of readmissions, followed by arrhythmias, responsible for 8% of readmissions, said Dr. Kilic, a thoracic surgeon at the University of Pennsylvania in Philadelphia.
Because fluid overload, often in the form of pleural effusion, is such an important driver of rehospitalizations, a more targeted management program would include better titration of diuretic treatment after heart surgery, thoracentesis, and closer monitoring of clinical features that flag fluid overload, such as weight.
“The volume overload issue is where the money is. If we can reduce that, it could really impact readmissions,” Dr. Kilic said in an interview.
An investigational program to target rehospitalization risk in heart surgery patients is planned at Johns Hopkins Hospital in Baltimore, where Dr. Kilic worked when he performed this analysis. Surgeons at Johns Hopkins are now in the process of getting funding for this pilot program, said Dr. John V. Conte Jr., professor of surgery and director of mechanical circulatory support at Johns Hopkins and a collaborator with Dr. Kilic on developing the risk formula.
“We’ll tailor postoperative follow-up. We’ll get high-risk patients back to the clinic sooner, and we’ll send nurse practitioners to see them to make sure they’re taking their medications and are getting weighed daily,” Dr. Conte said in an interview. “When a patient has heart surgery, they typically retain about 5-10 pounds of fluid. Patients with good renal function give up that fluid easily, but others are difficult to diurese. Many patients go home before they have been fully diuresed, and we need to follow these patients and transition them better to out-of-hospital care.”
He noted that other situations also come up that unnecessarily drive patients back to the hospital when an alternative and less expensive intervention might be equally effective. For example, some patients return to the hospital out of concern for how their chest wound is healing. Instead of being rehospitalized, such patients could be reassured by sending a nurse a photo of the wound or by coming to an outpatient clinic.
“We need to engage more often with recently discharged patients,” Dr. Conte said in an interview. “Discharging them doesn’t mean separating them from the health care system; it should mean interacting with patients in a different way” that produces better outcomes and patient satisfaction for less money. Developing improved ways to manage recent heart surgery patients following discharge becomes even more critical later this year when, in July, the Centers for Medicare & Medicaid Services adds 30-day readmissions following coronary artery bypass grafting (CABG) to its list of procedures that can generate a penalty to hospitals if they exceed U.S. norms for readmission rates.
The risk model developed by Dr. Kilic, Dr. Conte, and their associates used data collected from 5,360 heart surgery patients treated at Johns Hopkins during 2008-2013. Nearly half the patients underwent isolated CABG, and 20% had isolated valve surgery. Overall, 585 patients (11%) had a hospital readmission within 30 days of their index discharge. One limitation of the analysis was that it captured only readmissions back to Johns Hopkins Hospital.
The researchers used data from three-quarters of the database to derive the risk formula and the remaining 25% to validate it. A multivariate analysis of demographic and clinical characteristics that significantly linked with an elevated risk for readmission identified five factors that each made an independent, significant contribution. The researchers assigned each factor points according to its relative contribution to readmission risk in the adjusted model:
Severe chronic lung disease: 6 points.
Type of surgery: placement of a ventricular assist device, 5 points; other heart surgery that was neither CABG nor valve surgery, 4 points; isolated CABG, isolated valve, or combined CABG and valve surgery, 0 points.
Acute renal failure developing postoperatively but before index discharge: 4 points.
Index length of stay beyond 7 days: 4 points.
African American race: 3 points.
The maximum possible score was 22 points.
Patients with a score of 0 had a 6% rate of a 30-day readmission; those with a score of 22 had a 63% readmission rate. For simplicity, Dr. Kilic suggested dividing patients into three categories based on their readmission risk score: Low-risk patients with a score of 0 had a readmission risk of 6%, medium-risk patients with a score of 1-10 had a readmission risk of 12%, and high-risk patients with a score of 11 or more had a readmission risk of 31%. The researchers found a 96% correlation when comparing these predicted readmission risk rates based on the derivation-subgroup analysis with the actual readmission rates seen in the validation subgroup of their database. The targeted risk-management program planned by Dr. Conte would primarily focus on high-risk patients.
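To make the arithmetic of the score concrete, here is a minimal Python sketch that sums the reported point values and maps the total to the three risk tiers described above. It is an illustration only; the function and variable names are hypothetical, and this is not the authors' actual implementation or a validated clinical tool.

```python
def readmission_risk_score(severe_chronic_lung_disease: bool,
                           vad_placement: bool,
                           other_non_cabg_non_valve_surgery: bool,
                           postop_acute_renal_failure: bool,
                           index_stay_over_7_days: bool,
                           african_american: bool) -> int:
    """Sum the five-factor point score as reported (maximum 22 points).

    Surgery type contributes at most one term: 5 points for VAD placement,
    4 points for other non-CABG/non-valve surgery, 0 points otherwise.
    """
    score = 0
    if severe_chronic_lung_disease:
        score += 6
    if vad_placement:
        score += 5
    elif other_non_cabg_non_valve_surgery:
        score += 4
    if postop_acute_renal_failure:
        score += 4
    if index_stay_over_7_days:
        score += 4
    if african_american:
        score += 3
    return score


def risk_category(score: int) -> str:
    """Map a score to the three tiers suggested by Dr. Kilic."""
    if score == 0:
        return "low (about 6% observed 30-day readmission)"
    if score <= 10:
        return "medium (about 12%)"
    return "high (about 31%)"
```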
Dr. Kilic and Dr. Conte said they had no relevant financial disclosures.
On Twitter @mitchelzoler
Dr. Kilic's data illustrates common factors resulting in rehospitalization after cardiac surgery. Fastidious fluid management in these patients and others is critical to reduce hospital readmissions. A further point to consider is that many pleural effusions, especially those on the left side, are due to retained hemothorax rather than fluid overload. In those instances, early surgical intervention with video-assisted thoracoscopic surgery, rather than prolonged diuresis, would be optimal.
Dr. Francis J. Podbielski, FCCP, serves on the editorial advisory board for CHEST Physician.
Key clinical point: A risk score predicted which heart surgery patients faced the greatest risk for hospital readmission within 30 days of their index discharge.
Major finding: Patients with a 0 score had a 6% 30-day readmission rate; a high score of 22 was linked with a 63% rate.
Data source: A review of 5,360 heart surgery patients treated at one U.S. center.
Disclosures: Dr. Kilic and Dr. Conte said they had no relevant financial disclosures.
Heart attack patients getting younger, fatter, and less healthy
Despite advances in the prevention and early detection of cardiovascular disease, heart attack patients are getting younger, fatter, and less health conscious.
A look at 20 years’ worth of patient data reveals these and other “alarming trends,” according to Dr. Samir R. Kapadia of the Cleveland Clinic.
“What we found was so very contradictory to what we expected,” he said at a press briefing held in advance of the annual meeting of the American College of Cardiology. “Amazingly, we saw that patients presenting with myocardial infarction were getting younger, and their body mass index was going up. There was more smoking, more hypertension, and more diabetes. And all of this despite our better understanding of cardiovascular risk factors.”
The findings seem to point to a serious gap between gathering scientific knowledge and putting that knowledge into practice.
“We have to extend our efforts and put a lot more into educating patients,” Dr. Kapadia said. “Maybe it’s not enough to just tell people to eat right and exercise – maybe we should also be providing them with a structured program. But this is not just the job of the cardiologist. Primary care physicians have to also have this insight, communicate it to the patients, and get them the resources they need to help prevent heart attacks.”
His retrospective study comprised 3,912 consecutive patients who were treated for ST-segment elevation MI (STEMI) from 1995 to 2014. Data were collected on age, gender, diabetes, hypertension, smoking, lipid levels, chronic renal impairment, and obesity. The group was divided into four epochs: 1995-1999, 2000-2004, 2005-2009, and 2010-2014. The researchers examined these factors both in the entire cohort and in a subset of 1,325 who had a diagnosis of coronary artery disease at the time of their MI.
Patients became significantly younger over the entire study period. In epoch 1, the mean age of the entire cohort was 63.6 years. By epoch 3, this had declined to 60.3 years – a significant drop. The change was also evident in the CAD subset; among these patients, mean age declined from 64.1 years in epoch 1 to 61.8 years in epoch 4.
Tobacco use increased significantly in both groups as well. In the overall cohort, the rate was 27.7% in epoch 1 and 45.4% in epoch 4. In the CAD subset, it rose from 24.6% to 42.7%.
Hypertension in the entire cohort increased from 56.7% to 77.3%. In the CAD subset, it increased from 60.9% to 89%.
Obesity increased in both cohorts in overlapping trends, from about 30% in epoch 1 to 40% in epoch 4.
Diabetes increased as well. In the entire cohort, it rose from 24.6% to 30.6%. In the CAD subset, it rose from 25.4% to 41.5%.
Dr. Kapadia noted that the proportion of patients with at least three major risk factors rose from 65% to 85%, and that the incidence of chronic obstructive pulmonary disease increased from 5% to 12%, although he didn’t break this trend down by group.
He had no financial disclosures.
FROM ACC 16
Key clinical point: Despite advances in understanding heart disease prevention, patients with heart attack are younger and less healthy than they were 20 years ago.
Major finding: Patients are an average of 3 years younger than in the mid-1990s, and more are obese and use tobacco.
Data source: A retrospective study of 3,912 patients with acute ST-segment elevation MI.
Disclosures: Dr. Samir Kapadia had no financial disclosures.
Lies, damn lies, and research: Improving reproducibility in biomedical science
The issue of scientific reproducibility has come to the fore in the past several years, driven by noteworthy failures to replicate critical findings in several much-publicized reports, coupled with a series of scandals calling into question the role of journals and granting agencies in maintaining quality and oversight.
In a special Nature online collection, the journal assembled articles and perspectives from 2011 to the present dealing with this issue of research reproducibility in science and medicine. These articles were supplemented with current editorial comment.
Seeing these broad-spectrum concerns pulled together in one place makes it difficult not to be pessimistic about the current state of research investigations across the board. The saving grace, however, is that these same reports show that many people realize there is a problem, people who are trying to make changes and who are in a position to be effective.
According to the reports presented in the collection, the problems in research accountability and reproducibility have grown to an alarming extent. By one estimate, irreproducibility costs biomedical research some $28 billion in wasted spending per year (Nature. 2015 Jun 9. doi: 10.1038/nature.2015.17711).
A litany of concerns
In 2012, scientists at Amgen (Thousand Oaks, Calif.) reported that, even when cooperating closely with the original investigators, they were able to reproduce only 6 of 53 studies considered to be benchmarks of cancer research (Nature. 2016 Feb 4. doi: 10.1038/nature.2016.19269).
Scientists at Bayer HealthCare reported in Nature Reviews Drug Discovery that they could successfully reproduce results in only a quarter of 67 so-called seminal studies (2011 Sep. doi: 10.1038/nrd3439-c1).
According to a 2013 report in The Economist, Dr. John Ioannidis, an expert in the field of scientific reproducibility, argued that in his field, “epidemiology, you might expect one in ten hypotheses to be true. In exploratory disciplines like genomics, which rely on combing through vast troves of data about genes and proteins for interesting relationships, you might expect just one in a thousand to prove correct.”
This increasing litany of irreproducibility has raised alarm in the scientific community and has led to a search for answers, as so many preclinical studies form the precursor data for eventual human trials.
Despite the concerns raised, human clinical trials seem to be less at risk for irreproducibility, according to an editorial by Dr. Francis S. Collins, director, and Dr. Lawrence A. Tabak, principal deputy director of the U.S. National Institutes of Health, “because they are already governed by various regulations that stipulate rigorous design and independent oversight – including randomization, blinding, power estimates, pre-registration of outcome measures in standardized, public databases such as ClinicalTrials.gov and oversight by institutional review boards and data safety monitoring boards. Furthermore, the clinical trials community has taken important steps toward adopting standard reporting elements,” (Nature. 2014 Jan. doi: 10.1038/505612a).
The paucity of P
Today, a P value of .05 or less is all too often considered the sine qua non of scientific proof. “Most statisticians consider this appalling, as the P value was never intended to be used as a strong indicator of certainty as it too often is today. Most scientists would look at [a] P value of .01 and say that there was just a 1% chance of [the] result being a false alarm. But they would be wrong.” The 2014 report goes on to state that, according to one widely used statistical calculation, a P value of .01 corresponds to a false-alarm probability of at least 11%, depending on the underlying probability that there is a true effect; a P value of .05 raises that chance of a false alarm to at least 29% (Nature. 2014 Feb. doi: 10.1038/506150a).
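The report does not spell out which calculation produces those figures; one common choice that reproduces them is the Sellke–Berger–Bayarri lower bound on the Bayes factor in favor of the null, −e·p·ln(p), combined with an assumed 50:50 prior chance that a real effect exists. The short Python sketch below works through that arithmetic under those assumptions; it is illustrative only, not the report's own method.

```python
import math

def min_prob_false_alarm(p: float, prior_true: float = 0.5) -> float:
    """Lower bound on the probability that a 'significant' result is a false alarm.

    Uses the Sellke-Berger-Bayarri bound BF01 >= -e * p * ln(p) (valid for p < 1/e),
    where BF01 is the Bayes factor in favor of the null hypothesis, combined with
    prior_true, the assumed prior probability that a real effect exists.
    """
    bf_null = -math.e * p * math.log(p)          # lower bound on evidence favoring the null
    prior_odds_null = (1 - prior_true) / prior_true
    post_odds_null = bf_null * prior_odds_null   # posterior odds that the null is true
    return post_odds_null / (1 + post_odds_null)

print(round(min_prob_false_alarm(0.01), 2))  # 0.11 -> at least an 11% chance of a false alarm
print(round(min_prob_false_alarm(0.05), 2))  # 0.29 -> at least a 29% chance
```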
Beyond this assessment problem, P values may allow for considerable researcher bias, conscious and unconscious, even to the extent of encouraging “P-hacking”: one of the few statistical terms to ever make it into the Urban Dictionary. “P-hacking is trying multiple things until you get the desired result” – even unconsciously, according to one researcher quoted.
In addition, “unless statistical power is very high (and much higher than in most experiments), the P value should be interpreted tentatively at best” (Nat Methods. 2015 Feb 26. doi: 10.1038/nmeth.3288).
So bad is the problem that “misuse of the P value – a common test for judging the strength of scientific evidence – is contributing to the number of research findings that cannot be reproduced,” the American Statistical Association warns in a statement released in March, adding that the P value cannot be used to determine whether a hypothesis is true or even whether results are important (Nature. 2016 Mar 7. doi: 10.1038/nature.2016.19503).
And none of this even remotely addresses those instances where researchers report findings that “trend towards significance” when they can’t even meet the magical P threshold.
A muddling of mice (and more)
Fundamental to biological research is the vast array of preliminary animal studies that must be performed before clinical testing can begin.
Animal-based research has been under intense scrutiny due to a variety of perceived flaws and omissions that have been found to be all too common. For example, in a report in PLoS Biology, Dr. Ulrich Dirnagl of the Charité Medical University in Berlin reviewed 100 reports published between 2000 and 2013, which included 522 experiments using rodents to test cancer and stroke treatments. Around two-thirds of the experiments did not report whether any animals had been dropped from the final analysis, and of the 30% that did report rodents dropped from analysis, only 14 explained why (2016 Jan 4. doi: 10.1371/journal.pbio.1002331). Similarly, Dr. John Ioannidis and his colleagues assessed a random sample of 268 biomedical papers listed in PubMed published between 2000 and 2014 and found that only one contained sufficient details to replicate the work (Nature. 2016 Jan 5. doi: 10.1038/nature.2015.19101).
A multitude of genetic and environmental factors have also been found influential in animal research. For example, the gut microbiome (which has been found to influence many aspects of mouse health and metabolism) varies widely in the same species of mice fed on different diets or obtained from different vendors. And there can be differences in physiology and behavior based on circadian rhythms, and even variations in cage design (Nature. 2016 Feb 16. doi: 10.1038/530254a).
But things are looking brighter. By the beginning of 2016, more than 600 journals had signed up for the voluntary ARRIVE (Animals in Research: Reporting of In Vivo Experiments) guidelines designed to improve the reporting of animal experiments. The guidelines include a checklist of elements to be included in any reporting of animal research, including animal strain, sex, and adverse events (Nature. 2016 Feb 1. doi: 10.1038/nature.2016.19274).
Problems have also been reported in the use of cell lines and antibodies in biomedical research. For example, a report in Nature indicated that too many biomedical researchers are lax in checking for impostor cell lines when they perform their research (Nature. 2015 Oct 12. doi: 10.1038/nature.2015.18544). And recent studies have shown that improper or misused antibodies are a significant source of false findings and irreproducibility in the modern literature (Nature. 2015 May 19. doi: 10.1038/521274a).
Reviewer, view thyself
The 2013 Economist report also discussed some of the failures of the peer-reviewed scientific literature, usually considered the final gateway of quality control, to provide appropriate review and correction of research errors. It cited a damning test of lower-tier research publications by Dr. John Bohannon, a biologist at Harvard, who submitted a pseudonymous paper on the effects of a chemical derived from lichen cells to 304 journals describing themselves as using peer review. The paper was concocted wholesale, with manifold and obvious errors in study design, analysis, and interpretation of results, according to Dr. Bohannon. This fictitious paper from a fictitious researcher based at a fictitious university was accepted for publication by an alarming 147 of the journals.
The problem is not new. In 1998, Dr. Fiona Godlee, editor of the British Medical Journal, sent an article with eight deliberate mistakes in study design, analysis, and interpretation to more than 200 of the journal’s regular reviewers. None of the reviewers found all the mistakes, and on average they spotted fewer than two. Another BMJ study showed that experience did not improve the quality of reviewers; quite the opposite. Over the 14-year period assessed, 1,500 referees, as rated by editors at leading journals, showed a slow but steady decline in their scores.
Such studies prompted a profound reassessment by the journals, in part pushed by some major granting agencies, including the National Institutes of Health.
Not taking grants for granted
The National Institutes of Health is advancing efforts to strengthen scientific rigor and reproducibility in the projects it funds.
“As part of an increasing drive to boost the reliability of research, the NIH will require applicants to explain the scientific premise behind their proposals and defend the quality of their experimental designs. They must also account for biological variables (for example, by including both male and female mice in planned studies) and describe how they will authenticate experimental materials such as cell lines and antibodies.”
Whether current efforts by scientists, societies, granting organizations, and journals can lead to authentic reform and a vast and relatively quick improvement in reproducibility of scientific results is still an open question. In discussing a 2015 report on the subject by the biomedical research community in the United Kingdom, neurophysiologist Dr. Dorothy Bishop had this to say: “I feel quite upbeat about it. ... Now that we’re aware of it, we have all sorts of ideas about how to deal with it. These are doable things. I feel that the mood is one of making science a much better thing. It might lead to slightly slower science. That could be better” (Nature. 2015 Oct 29. doi: 10.1038/nature.2015.18684).
The recent Nature editorial, “Repetitive flaws,” comments on the new NIH guidelines that require grant proposals to account for biological variables and to describe how experimental materials will be authenticated (2016 Jan 21. doi: 10.1038/529256a). The editorial suggests that these requirements will help improve the quality and reproducibility of research, an area of growing concern over the past few years. As the editorial states, the NIH guidelines “can help to make researchers aspire to the values that produced them” and can “inspire researchers to uphold their identity and integrity.”
To investigators who strive to report only their best results after exhaustive and sincere confirmation, these guidelines will not seem threatening. Providing the experimental details of one’s work is helpful in many ways: you can reproduce the work yourself with new lab personnel or after a lapse of time, you have excellent experimental records, and you have excellent documentation when it comes time to write another grant. I have personally been frustrated when my laboratory cannot duplicate the published work of others. Questions remain, however: who will pay for reproducing the work of others, and how will the sacrifice of additional animals or subjects be justified? Many laboratories are already financially strapped by current funding challenges, and time is extremely valuable. In addition, junior researchers are on tenure and promotion timelines that create pressure to publish in order to establish independence and credibility, and established investigators must document continued productivity to obtain continued funding.
The quality of peer review of research publications has also been challenged recently, adding to the concern over the veracity of published research. Many journals now require statistical review prior to acceptance, which also delays publication. In addition, the generous reviewers who perform peer review often do so at the cost of their valuable, uncompensated time.
Despite these hurdles and questions, those who perform valuable and needed research to improve the lives and care of our patients must continue to strive to produce the highest level of evidence.
Dr. Jennifer S. Lawton is a professor of surgery at the division of cardiothoracic surgery, Washington University, St. Louis. She is also an associate medical editor for Thoracic Surgery News.
The issue of scientific reproducibility has come to the fore in the past several years, driven by noteworthy failures to replicate critical findings in several much-publicized reports coupled to a series of scandals calling into question the role of journals and granting agencies in maintaining quality and oversight.
In a special Nature online collection, the journal assembled articles and perspectives from 2011 to the present dealing with this issue of research reproducibility in science and medicine. These articles were supplemented with current editorial comment.
Seeing these broad-spectrum concerns pulled together in one place makes it difficult not to be pessimistic about the current state of research investigations across the board. The saving grace, however, is that these same reports show that many people realize there is a problem – people who are trying to make changes and who are in a position to be effective.
According to the reports presented in the collection, the problems in research accountability and reproducibility have grown to an alarming extent. By one estimate, irreproducibility costs biomedical research some $28 billion in wasted spending per year (Nature. 2015 Jun 9. doi: 10.1038/nature.2015.17711).
A litany of concerns
In 2012, scientists at Amgen (Thousand Oaks, Calif.) reported that, even when cooperating closely with the original investigators, they were able to reproduce only 6 of 53 studies considered to be benchmarks of cancer research (Nature. 2016 Feb 4. doi: 10.1038/nature.2016.19269).
Scientists at Bayer HealthCare reported in Nature Reviews Drug Discovery that they could successfully reproduce results in only a quarter of 67 so-called seminal studies (2011 Sep. doi: 10.1038/nrd3439-c1).
According to a 2013 report in The Economist, Dr. John Ioannidis, an expert in the field of scientific reproducibility, argued that in his field, “epidemiology, you might expect one in ten hypotheses to be true. In exploratory disciplines like genomics, which rely on combing through vast troves of data about genes and proteins for interesting relationships, you might expect just one in a thousand to prove correct.”
This increasing litany of irreproducibility has raised alarm in the scientific community and has led to a search for answers, as so many preclinical studies form the precursor data for eventual human trials.
Despite the concerns raised, human clinical trials seem to be less at risk for irreproducibility, according to an editorial by Dr. Francis S. Collins, director, and Dr. Lawrence A. Tabak, principal deputy director of the U.S. National Institutes of Health, “because they are already governed by various regulations that stipulate rigorous design and independent oversight – including randomization, blinding, power estimates, pre-registration of outcome measures in standardized, public databases such as ClinicalTrials.gov and oversight by institutional review boards and data safety monitoring boards. Furthermore, the clinical trials community has taken important steps toward adopting standard reporting elements,” (Nature. 2014 Jan. doi: 10.1038/505612a).
The paucity of P
Today, a P value of .05 or less is all too often considered the sine qua non of scientific proof. “Most statisticians consider this appalling, as the P value was never intended to be used as a strong indicator of certainty as it too often is today. Most scientists would look at [a] P value of .01 and say that there was just a 1% chance of [the] result being a false alarm. But they would be wrong.” The 2014 report goes on to explain that, according to one widely used calculation, a P value of .01 corresponds to a false-alarm probability of at least 11%, depending on the underlying probability that there is a true effect; a P value of .05 raises that chance of a false alarm to at least 29% (Nature. 2014 Feb. doi: 10.1038/506150a).
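The report does not name the specific calculation behind those figures, so the sketch below rests on an assumption: it uses the Sellke-Bayarri-Berger minimum-Bayes-factor bound together with an assumed 50/50 prior probability that a real effect exists, a common choice that reproduces the 11% and 29% numbers. A minimal Python sketch of that arithmetic:

import math

def min_false_alarm_probability(p_value, prior_true=0.5):
    # Lower bound on the chance that a "significant" result is a false alarm,
    # using the Sellke-Bayarri-Berger minimum Bayes factor, -e * p * ln(p),
    # which applies for 0 < p < 1/e. prior_true is the assumed prior
    # probability that a real effect exists (0.5 = even odds).
    if not 0 < p_value < 1 / math.e:
        raise ValueError("bound applies only for 0 < p < 1/e")
    min_bayes_factor = -math.e * p_value * math.log(p_value)  # evidence favoring the null
    prior_odds_null = (1 - prior_true) / prior_true
    posterior_odds_null = min_bayes_factor * prior_odds_null
    return posterior_odds_null / (1 + posterior_odds_null)

print(round(min_false_alarm_probability(0.01), 2))  # about 0.11, i.e. at least 11%
print(round(min_false_alarm_probability(0.05), 2))  # about 0.29, i.e. at least 29%

With a more skeptical prior (a smaller prior_true), the minimum false-alarm probability rises further, which is consistent with the report’s “at least” phrasing.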
Beyond this assessment problem, P values may allow for considerable researcher bias, conscious and unconscious, even to the extent of encouraging “P-hacking”: one of the few statistical terms to ever make it into the Urban Dictionary. “P-hacking is trying multiple things until you get the desired result” – even unconsciously, according to one researcher quoted.
In addition, “unless statistical power is very high (and much higher than in most experiments), the P value should be interpreted tentatively at best” (Nat Methods. 2015 Feb 26. doi: 10.1038/nmeth.3288).
So bad is the problem that “misuse of the P value – a common test for judging the strength of scientific evidence – is contributing to the number of research findings that cannot be reproduced,” the American Statistical Association warns in a statement released in March, adding that the P value cannot be used to determine whether a hypothesis is true or even whether results are important (Nature. 2016 Mar 7. doi: 10.1038/nature.2016.19503).
And none of this even remotely addresses those instances where researchers report findings that “trend towards significance” when they can’t even meet the magical P threshold.
A muddling of mice (and more)
Fundamental to biological research is the vast array of preliminary animal studies that must be performed before clinical testing can begin.
Animal-based research has been under intense scrutiny due to a variety of perceived flaws and omissions that have been found to be all too common. For example, in a report in PLoS Biology, Dr. Ulrich Dirnagl of the Charité Medical University in Berlin reviewed 100 reports published between 2000 and 2013, which included 522 experiments using rodents to test cancer and stroke treatments. Around two-thirds of the experiments did not report whether any animals had been dropped from the final analysis, and of the roughly 30% that did report dropping rodents from the analysis, only 14 explained why (2016 Jan 4. doi: 10.1371/journal.pbio.1002331). Similarly, Dr. John Ioannidis and his colleagues assessed a random sample of 268 biomedical papers listed in PubMed published between 2000 and 2014 and found that only one contained sufficient details to replicate the work (Nature. 2016 Jan 5. doi: 10.1038/nature.2015.19101).
A multitude of genetic and environmental factors have also been found influential in animal research. For example, the gut microbiome (which has been found to influence many aspects of mouse health and metabolism) varies widely in the same species of mice fed on different diets or obtained from different vendors. And there can be differences in physiology and behavior based on circadian rhythms, and even variations in cage design (Nature. 2016 Feb 16. doi: 10.1038/530254a).
But things are looking brighter. By the beginning of 2016, more than 600 journals had signed up for the voluntary ARRIVE (Animals in Research: Reporting of In Vivo Experiments) guidelines designed to improve the reporting of animal experiments. The guidelines include a checklist of elements to be included in any reporting of animal research, including animal strain, sex, and adverse events (Nature. 2016 Feb 1. doi: 10.1038/nature.2016.19274).
Problems have also been reported in the use of cell lines and antibodies in biomedical research. For example, a report in Nature indicated that too many biomedical researchers are lax in checking for impostor cell lines when they perform their research (Nature. 2015 Oct 12. doi: 10.1038/nature.2015.18544). And recent studies have shown that improper or misused antibodies are a significant source of false findings and irreproducibility in the modern literature (Nature. 2015 May 19. doi: 10.1038/521274a).
Reviewer, view thyself
The 2013 report in The Economist also discussed some of the failures of the peer-reviewed scientific literature, usually considered the final gateway of quality control, to provide appropriate review and correction of research errors. It cites a damning test of lower-tier research publications by Dr. John Bohannon, a biologist at Harvard, who submitted a pseudonymous paper on the effects of a chemical derived from lichen cells to 304 journals describing themselves as using peer review. The paper was concocted wholesale, with manifold and obvious errors in study design, analysis, and interpretation of results, according to Dr. Bohannon. This fictitious paper from a fictitious researcher based at a fictitious university was accepted for publication by an alarming 147 of the journals.
The problem is not new. In 1998, Dr. Fiona Godlee, editor of the British Medical Journal, sent an article with eight deliberate mistakes in study design, analysis, and interpretation to more than 200 of the journal’s regular reviewers. None of the reviewers found all the mistakes, and on average they spotted fewer than two. Another study by the BMJ showed that experience did not improve the quality of reviewers but quite the opposite: over the 14-year period assessed, the ratings that editors at leading journals gave 1,500 referees showed a slow but steady decline.
Such studies prompted a profound reassessment by the journals, in part pushed by some major granting agencies, including the National Institutes of Health.
Not taking grants for granted
The National Institutes of Health is advancing efforts to strengthen scientific rigor and reproducibility in the research it funds.
“As part of an increasing drive to boost the reliability of research, the NIH will require applicants to explain the scientific premise behind their proposals and defend the quality of their experimental designs. They must also account for biological variables (for example, by including both male and female mice in planned studies) and describe how they will authenticate experimental materials such as cell lines and antibodies.”
Whether current efforts by scientists, societies, granting organizations, and journals can lead to authentic reform and a vast and relatively quick improvement in reproducibility of scientific results is still an open question. In discussing a 2015 report on the subject by the biomedical research community in the United Kingdom, neurophysiologist Dr. Dorothy Bishop had this to say: “I feel quite upbeat about it. ... Now that we’re aware of it, we have all sorts of ideas about how to deal with it. These are doable things. I feel that the mood is one of making science a much better thing. It might lead to slightly slower science. That could be better” (Nature. 2015 Oct 29. doi: 10.1038/nature.2015.18684).
FDA proposes ban on powdered gloves
The Food and Drug Administration has proposed a ban on most powdered gloves used during surgery and for patient examination, and on absorbable powder used for lubricating surgeons’ gloves.
Aerosolized glove powder on natural rubber latex gloves can cause respiratory allergic reactions, and while powdered synthetic gloves don’t present the risk of allergic reactions, all powdered gloves have been associated with numerous potentially serious adverse events, including severe airway inflammation, wound inflammation, and postsurgical adhesions, according to an FDA statement.
The proposed ban would not apply to powdered radiographic protection gloves; the agency is not aware of any such gloves that are currently on the market. The ban also would not affect non-powdered gloves.
The decision to move forward with the proposed ban was based on a determination that the affected products “are dangerous and present an unreasonable and substantial risk,” according to the statement.
In making this determination, the FDA considered the available evidence, including a literature review and the 285 comments received on a February 2011 Federal Register Notice.
That notice announced the establishment of a public docket to receive comments related to powdered gloves and followed the FDA’s receipt of two citizen petitions requesting a ban on such gloves because of the adverse health effects associated with use of the gloves. The comments overwhelmingly supported a warning or ban.
The FDA determined that the risks associated with powdered gloves cannot be corrected through new or updated labeling, and thus moved forward with the proposed ban.
“This ban is about protecting patients and health care professionals from a danger they might not even be aware of,” Dr. Jeffrey Shuren, director of the FDA Center for Devices and Radiological Health, said in the statement. “We take bans very seriously and only take this action when we feel it’s necessary to protect the public health.”
In fact, should this ban be put into place, it would be only the second such ban; the first was the 1983 ban of prosthetic hair fibers, which were found to provide no public health benefit. The benefits cited for powdered gloves were almost entirely related to greater ease of putting the gloves on and taking them off, Eric Pahon of the FDA said in an interview.
A ban on the gloves was not proposed sooner in part because when concerns were first raised about the risks associated with powdered gloves, a ban would have created a shortage, and the risks of a glove shortage outweighed the benefits of banning the gloves, Mr. Pahon said.
However, a recent economic analysis, conducted by the FDA because of the critical role medical gloves play in protecting patients and health care providers, showed that a powdered glove ban would neither cause a glove shortage nor have a significant economic impact. A ban also would be unlikely to affect medical practice, since numerous non-powdered glove options are now available, the agency noted.
The proposed rule will be available online March 22 in the Federal Register and is open for public comment for 90 days.
If the ban is finalized, powdered gloves and absorbable powder used for lubricating surgeons’ gloves would be removed from the marketplace.
Feds launch phase 2 of HIPAA audits
The federal government has launched the second phase of its HIPAA Audit Program and will soon be identifying health providers it plans to target.
For the 2016 Phase 2 HIPAA Audit Program, auditors will review policies and procedures enacted by covered entities and their business associates to meet selected standards of the Privacy, Security, and Breach Notification Rules, according to a March 21 announcement by the Department of Health & Human Services Office for Civil Rights (OCR).
Physicians and other covered entities can expect an email at some point this year requesting that updated contact information be provided to the OCR. The office will then send health providers a pre-audit questionnaire to gather data about the practice’s size, type, and operations, according to the announcement. The government will use the data as well as other information to create audit subject pools. If an entity does not respond to the OCR’s contact request or the pre-audit questionnaire, the agency will use publicly available information about the practice.
Every covered entity and business associate is eligible for an audit, the OCR noted. For Phase 2, the government plans to select auditees representing a wide range of health care providers, health plans, health care clearinghouses, and business associates in order to assess HIPAA compliance across the industry. Sampling criteria for auditee selection will include size of the entity, affiliation with other health care organizations, whether an organization is public or private, geographic factors, and current enforcement activity with OCR. Entities with open complaints or that are currently under investigation will not be chosen.
The first set of audits will be desk audits of covered entities followed by a second round of desk audits of business associates, OCR stated. OCR plans to complete all desk audits by December 2016. A third set of audits will be on site and will examine a broader scope of requirements under the HIPAA rules. Some desk auditees may be subject to a subsequent on-site audit, the government noted.
A list of frequently asked questions about the 2016 Phase 2 HIPAA Audit Program can be found on the OCR’s website.
Round 2 of the HIPAA audits follows a pilot program launched in 2011 and 2012 by OCR that assessed HIPAA controls and processes implemented by 115 covered entities. The second phase will draw on the results and experiences learned from the pilot program, according to OCR.
On Twitter @legal_med
Wanted: Better evidence on fast-track lung resection
Many medical specialties have adopted fast-track or enhanced-recovery pathways, strategies intended to speed the recovery of surgical patients, reduce length of hospital stays, and cut costs. When it comes to elective lung resection, however, the medical evidence has yet to establish whether patients in expedited recovery protocols fare any better than those in a conventional recovery course, a systematic review in the March issue of the Journal of Thoracic and Cardiovascular Surgery reported (2016 Mar;151:708-15).
A team of investigators from McGill University in Montreal performed a systematic review of six studies that evaluated patient outcomes of both traditional and enhanced-recovery pathways (ERPs) in elective lung resection. They concluded that ERPs may reduce the length of hospital stays and hospital costs but that well-designed trials are needed to overcome limitations of existing studies.
“The influence of ERPs on postoperative outcomes after lung resection has not been extensively studied in comparative studies involving a control group receiving traditional care,” lead author Julio F. Fiore Jr., Ph.D., and his colleagues said. One of the six studies they reviewed was a randomized clinical trial. The six studies involved a total of 1,612 participants (821 ERP, 791 control).
The researchers also reported that the studies they analyzed shared a significant limitation. “Risk of bias favoring enhanced-recovery pathways was high,” Dr. Fiore and his colleagues wrote. It was also unclear from the studies whether patient selection may have factored into the results.
Five studies reported shorter hospital length of stay (LOS) for the ERP group. “The majority of the studies reported that LOS was significantly shorter when patients undergoing lung resection were treated within an ERP, which corroborates the results observed in other surgical populations,” Dr. Fiore and his colleagues said.
Three nonrandomized studies also evaluated costs per patient. Two reported significantly lower costs for ERP patients: $13,093 vs. $14,439 for controls; and $13,432 vs. $17,103 for controls (Jpn. J. Thorac. Cardiovasc. Surg. 2006 Sep;54:387-90; Ann. Thorac. Surg. 1998 Sep;66:914-9). The third showed what the authors said was no statistically significant cost differential between the two groups: $14,792 for ERP vs. $16,063 for controls (Ann. Thorac. Surg. 1997 Aug;64:299-302).
Three studies evaluated readmission rates, but only one showed measurably lower rates for the ERP group: 3% vs. 10% for controls (Lung Cancer. 2012 Dec;78:270-5). Three studies measured complication rates in both groups. Two reported cardiopulmonary complication rates of 18% and 17% in the ERP group vs. 16% and 14% in the control group, respectively (Eur. J. Cardiothorac. Surg. 2012 May;41:1083-7; Lung Cancer. 2012 Dec;78:270-5). One reported rates of pulmonary complications of 7% for ERP vs. 36% for controls (Eur. J. Cardiothorac. Surg. 2008 Jul;34:174-80).
Dr. Fiore and his colleagues pointed out that some of the studies they reviewed were completed before video-assisted thoracic surgery became routine for lung resection. But they acknowledged that research in other surgical specialties has validated the role of ERPs, along with minimally invasive surgery, in improving outcomes. “Future research should investigate whether this holds true for patients undergoing lung resection,” they said.
The study authors had no financial relationships to disclose.
The task that Dr. Fiore and colleagues undertook to evaluate and compare disparate studies of fast-track surgery in lung resection is “akin to comparing not just apples and oranges but apples to zucchini,” Dr. Lisa M. Brown of University of California, Davis, Medical Center said in her invited analysis (J. Thorac. Cardiovasc. Surg. 2016 Mar;151:715-16). Without the authors’ “descriptive approach,” Dr. Brown said, “the results of a true meta-analysis would be uninterpretable.”
Nonetheless, the systematic review underscores the need for a blinded, randomized trial, Dr. Brown said. “Furthermore, rather than measuring [hospital] stay, subjects should be evaluated for readiness for discharge, because this would reduce the effect of systems-based obstacles to discharge,” she said. Enhanced recovery pathways (ERPs) in colorectal surgery have been used as models for other specialties, but the novelty of these pathways versus traditional care may be difficult to replicate in thoracic surgery, she said. Strategies such as antibiotic prophylaxis and epidural analgesia in thoracic surgery “are not dissimilar enough from standard care to elicit a difference in outcome,” she said.
In thoracic surgery, ERPs must consider the challenges of pain control and chest tube management unique in these patients, Dr. Brown said. For pain control, paravertebral blockade rather than epidural analgesia could lead to earlier hospital discharges. Use of chest tubes is commonly a matter of surgeon preference, she said, but chest tubes without an air leak and with acceptable fluid output can be safely removed, and even patients with an air leak but no pneumothorax on water seal can go home with a chest tube, Dr. Brown said.
Dr. Brown had no financial relationships to disclose.
Key clinical point: Well-designed clinical trials are needed to determine the effectiveness of fast-track recovery pathways in lung resection.
Major finding: Fast-track lung resection patients showed no differences in readmission, overall complication, or death rates compared with patients on a traditional recovery course.
Data source: Systematic review of six studies published from 1997 to 2012 that involved 1,612 individuals who had lung resection.
Disclosures: The study authors had no financial relationships to disclose.
Sutureless AVR an option for higher-risk patients
A sutureless bioprosthetic aortic valve that has been available in Europe since 2005 and is well suited for minimally invasive surgery has now been used in North America for the first time. The experience underscored the utility of the device as an alternative to conventional aortic valve replacement (AVR) in higher-risk patients, investigators from McGill University Health Center in Montreal reported in the March issue of the Journal of Thoracic and Cardiovascular Surgery (2016;151:735-742).
The investigators, led by Dr. Benoit de Varennes, reported on their experience implanting the Enable valve (Medtronic, Minneapolis) in 63 patients between August 2012 and October 2014. “The Enable bioprosthesis is an acceptable alternative to conventional aortic valve replacement in higher-risk patients,” Dr. de Varennes and colleagues said. “The early hemodynamic performance seems favorable.” Their findings were first presented at the 95th annual meeting of the American Association for Thoracic Surgery in April 2015 in Seattle. A video of the presentation is available.
The Enable valve has been the subject of four European studies with 429 patients. It received its CE Mark in Europe in 2009, but is not yet commercially approved in the United States.
In the McGill study, one patient died within 30 days of receiving the valve and two died after 30 days, but none of the deaths were valve related. Four patients (6.3%) required revision during the implantation operation, and one patient required reoperation for early migration. Peak and mean gradients after surgery were 17 mm Hg and 9 mm Hg, respectively. Complications were reported in three patients: two (3.1%) required a pacemaker and one (1.6%) had a heart attack. Mean follow-up was 10 months.
Patient ages ranged from 57 to 89 years, with an average age of 80. Before surgery, all patients had calcific aortic stenosis, 43 (68%) had some degree of associated aortic regurgitation, and 46 (73%) were in New York Heart Association (NYHA) class III or IV. At the last follow-up after surgery, 61 patients (97%) were in NYHA class I.
The investigators implanted the valve through a full sternotomy or a partial upper sternotomy extending to the fourth intercostal space, and they used perioperative transesophageal echocardiography in all patients. They performed high-transverse aortotomy and completely excised the native valve.
The average cross-clamp time was 44 minutes for the 30 patients who had isolated AVR and 77 minutes for the 33 patients who had combined procedures. Dr. de Varennes and colleagues acknowledged the cross-clamp time for isolated AVR is “similar” to European series but “not very different” from recent reports on sutured AVR (J. Thorac. Cardiovasc. Surg. 2015;149:451-460). “This may be explained partly by the learning period of all three surgeons and the aggressive debridement of the annulus in all cases,” they said. “We think that, as further experience is gained, the clamp time will be further reduced, and this will benefit mostly higher-risk patients or those requiring concomitant procedures.”
They noted that some patients received the Enable prosthesis because of “hostile” aortas with extensive root calcification.
Dr. de Varennes disclosed he is a consultant for Medtronic and a proctor for Enable training. The coauthors had no relationships to disclose.
One of the key advantages that advocates of sutureless valves point to is shorter bypass times than sutured valves, but in his invited commentary Dr. Thomas G. Gleason of the University of Pittsburgh questioned this rationale based on the results Dr. de Varennes and colleagues reported (J. Thorac. Cardiovasc. Surg. 2016;151:743-744). The cardiac bypass times they observed “are not appreciably different from those reported in larger series of conventional aortic valve replacement,” Dr. Gleason said.
Dr. Gleason suggested that “market forces” might be driving the push into sutureless aortic valve replacement. “The attraction, particularly to consumers, of the ministernotomy (and thus things that might facilitate it) is both cosmetic and the perception that it is less invasive,” he said. “These attractions notwithstanding, it has been difficult to demonstrate that ministernotomy or minithoracotomy yield better primary outcomes (e.g., mortality, stroke, or major complication rates) or even quality of life indicators, particularly when measured beyond the perioperative period.”
Dr. Thomas G. Gleason
He alluded to the “elephant in the room” with regard to sutureless aortic valve technologies: their cost and unknown durability compared with conventional sutured bioprostheses.
“As health care costs continue to rise and large populations of patients are either underinsured or see rationed care, trimming direct costs may be a more relevant concern for the modern era than trimming cross-clamp time,” he said. Analyses have not yet evaluated the increased costs of sutureless valves in terms of shortened hospital stays or lower morbidity, particularly in the moderate-risk population with aortic stenosis, he said.
“Moving forward, there is little doubt that the current value of the sutureless valve will be dictated by the market, but in the end it will be measured by the long-term outcomes of the ‘minimally invaded,’” Dr. Gleason said.
Dr. Gleason had no financial relationships to disclose.
FROM THE JOURNAL OF THORACIC AND CARDIOVASCULAR SURGERY
Key clinical point: Sutureless aortic valves have the potential to shorten procedure times and benefit higher-risk patients with aortic stenosis.
Major finding: Thirty-day mortality of patients who received the Enable aortic valve was 1.6%, and late mortality was 3.2%. No deaths were valve related.
Data source: Sixty-three patients with aortic stenosis who had Enable bioprosthetic valve implantation between August 2012 and October 2014 at McGill University Health Center.
Disclosures: Lead author Dr. Benoit de Varennes is a consultant for Medtronic and a trainer for the Enable device. The other authors had no relationships to disclose.