Digital health and big data: New tools for making the most of real-world evidence
LAKE BUENA VISTA, FLA. – Digital health technology is vastly expanding the real-world data pool for clinical and comparative effectiveness research, according to Jeffrey Curtis, MD.
The trick is to harness the power of that data to improve patient care and outcomes, and that can be achieved in part through linkage of data sources and through point-of-care access, Dr. Curtis, professor of medicine in the division of clinical immunology and rheumatology at the University of Alabama at Birmingham (UAB), said at the annual meeting of the Florida Society of Rheumatology.
“We want to take care of patients, but probably what you and I also want is to have real-world evidence ... evidence relevant for people [we] take care of on a day-to-day basis – not people in highly selected phase 3 or even phase 4 trials,” he said.
Real-world data, which gained particular cachet through the 21st Century Cures Act permitting the Food and Drug Administration to consider real-world evidence as part of the regulatory process and in post-marketing surveillance, includes information from electronic health records (EHRs), health plan claims, traditional registries, and mobile health technology, explained Dr. Curtis, who also is codirector of the UAB Pharmacoepidemiology and Pharmacoeconomics Unit.
“And you and I want it because patients are different, and in medicine we only have about 20% of patients where there is direct evidence about what we should do,” he added. “Give me the trial that describes the 75-year-old African American smoker with diabetes and how well he does on biologic du jour; there’s no trial like that, and yet you and I need to make those kinds of decisions in light of patients’ comorbidities and other features.”
Generating real-world evidence, however, requires new approaches and new tools, he said, explaining that efficiency is key for applying the data in busy practices, as is compatibility with delivering an intervention and with randomization.
Imagine using the EHR at the point of care to look up what happened to “the last 10 patients like this” based on how they were treated by you or your colleagues, he said.
“That would be useful information to have. In fact, the day is not so far in the future where you could, perhaps, randomize within your EHR if you had a clinically important question that really needed an answer and a protocol attached,” he added.
Real-world data collection
Pragmatic trials offer one approach to garnering real-world data by addressing a simple question – usually with a hard outcome – using very few inclusion and exclusion criteria, Dr. Curtis said, describing the recently completed VERVE Zoster Vaccine trial.
He and his colleagues randomized 617 patients from 33 sites to evaluate the safety of the live-virus Zostavax herpes zoster vaccine in rheumatoid arthritis patients older than 50 years who were receiving any anti–tumor necrosis factor (anti-TNF) therapy. Half of the patients received saline and the other half received the vaccine; no cases of varicella zoster occurred in either group.
“So, to the extent that half of 617 people with zero cases was reassuring, we now have some evidence where heretofore there was none,” he said, noting that those results will be presented at the 2019 American College of Rheumatology annual meeting. “But the focus of this talk is not on vaccination, it’s really on how we do real-world effectiveness or safety studies in a way that doesn’t slow us way down and doesn’t require some big research operation.”
One way is through efficient recruitment, and depending on how complicated the study is, qualified patients may be easily identifiable through the EHR. In fact, numerous tools are available to codify and search both structured and unstructured data, Dr. Curtis said, noting that he and his colleagues used the web-based i2b2 Query Tool for the VERVE study.
The study sites that did the best with recruiting had the ability to search their own EHRs for patients who met the inclusion criteria, and those patients were then invited to participate. A short video was created to educate those who were interested, and a “knowledge review” quiz was administered afterward to ensure informed consent, which was provided via digital signature.
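The EHR screening step described above can be sketched in a few lines: filter structured patient records against VERVE-style inclusion criteria (an RA diagnosis, age over 50, current anti-TNF therapy). The field names, ICD-10 codes, and sample data below are invented for illustration; they are not the study's actual query or criteria.

```python
# Hypothetical sketch of i2b2-style structured cohort screening.
# All field names, codes, and patient data are illustrative assumptions.
from datetime import date

patients = [
    {"id": 1, "dob": date(1950, 3, 1), "dx_codes": {"M05.79"}, "meds": {"adalimumab"}},
    {"id": 2, "dob": date(1980, 6, 15), "dx_codes": {"M05.79"}, "meds": {"methotrexate"}},
    {"id": 3, "dob": date(1945, 1, 9), "dx_codes": {"M06.9"}, "meds": {"etanercept"}},
]

RA_CODES = {"M05.79", "M06.9"}                      # illustrative ICD-10 RA codes
ANTI_TNF = {"adalimumab", "etanercept", "infliximab"}

def age_years(dob, today=date(2019, 7, 1)):
    """Approximate age in whole years."""
    return (today - dob).days // 365

def eligible(p):
    """Inclusion: any RA diagnosis code, age > 50, currently on any anti-TNF."""
    return bool(p["dx_codes"] & RA_CODES) and age_years(p["dob"]) > 50 and bool(p["meds"] & ANTI_TNF)

cohort = [p["id"] for p in patients if eligible(p)]
print(cohort)  # → [1, 3]
```

In practice a tool such as i2b2 executes the equivalent query against the EHR's coded data, returning a candidate list that staff then invite to participate.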
Health plan and other “big data” sources can also be very useful for answering certain questions. One example is the question of how soon biologics should be stopped before elective orthopedic surgery. Dr. Curtis and colleagues examined this using claims data for nearly 4,300 patients undergoing elective hip or knee arthroplasty and found no evidence that administering infliximab within 4 weeks of surgery increased the risk of serious infection within 30 days or of prosthetic joint infection within 1 year.
“Where else are you going to go run a prospective study of 4,300 elective hips and knees?” he said, stressing that it wouldn’t be easy.
Other sources that can help generate real-world effectiveness data include traditional or single-center registries and EHR-based registries.
“The EHR registries are, I think, the newest that many are part of in our field,” he said, noting that “a number of groups are aggregating that,” including the ACR RISE registry and some physician groups, for example.
“What we’re really after is to have a clinically integrated network and a learning health care environment,” he explained, adding that the goal is to develop care pathways.
The approach represents a shift from evidence-based practice to practice-based evidence, he noted.
“When you and I practice, we’re generating that evidence and now we just need to harness that data to get smarter to take care of patients,” he said, adding that the lack of randomization in many of these data sources isn’t necessarily a problem.
“Do you have to randomize? I would argue that you don’t necessarily have to randomize if the source of variability in how we treat patients is very related to patients’ characteristics,” he said.
If the evidence for a specific approach is weak, or a decision is based on physician preference, physician practice, or insurance company considerations instead of patient characteristics, randomization may not be necessary, he explained.
In fact, insurance company requirements often create “natural experiments” that can be used to help identify better practices. For example, if one plan covers only adalimumab for first-line TNF inhibition, and another has a “different fail-first policy and that’s not first line and everybody gets some other TNF inhibitor, then I can probably compare those quite reasonably,” he said.
“That’s a great setting where you might not need randomization.”
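The plan-as-natural-experiment idea can be sketched with toy data: if plan policy, not patient characteristics, determines the first-line TNF inhibitor, then crude outcome rates by plan approximate a head-to-head comparison. Everything below (field names, counts, the outcome itself) is invented for illustration and is not a real analysis.

```python
# Hypothetical sketch: comparing outcomes across two insurance plans whose
# fail-first policies assign different first-line TNF inhibitors.
claims = [
    {"plan": "A", "drug": "adalimumab", "responded": True},
    {"plan": "A", "drug": "adalimumab", "responded": False},
    {"plan": "A", "drug": "adalimumab", "responded": True},
    {"plan": "B", "drug": "etanercept", "responded": True},
    {"plan": "B", "drug": "etanercept", "responded": True},
    {"plan": "B", "drug": "etanercept", "responded": False},
]

def response_rate(plan):
    """Crude responder proportion among a plan's members."""
    rows = [c for c in claims if c["plan"] == plan]
    return sum(c["responded"] for c in rows) / len(rows)

for plan in ("A", "B"):
    print(plan, round(response_rate(plan), 2))
```

The validity of the comparison rests on the assumption Dr. Curtis states: that drug assignment is driven by policy rather than by patient characteristics, so confounding by indication is minimized.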
Of note, “having more data sometimes trumps smarter algorithms,” but that means finding and linking more data that “exist in the wild,” Dr. Curtis said.
Linking data sources
When he and his colleagues wanted to assess the cost of not achieving RA remission, no single data source provided all of the information they needed. They used both CORRONA registry data and health claims data to look at various outcome measures across disease activity categories and with adjustment for comorbidity clusters. They previously reported on the feasibility and validity of the approach.
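At its simplest, linking a registry to claims is a deterministic join on a shared identifier, so that clinical measures (disease activity) and financial measures (cost) land in the same record. The sketch below is a minimal illustration under that assumption; the member IDs, fields, and figures are invented, and real linkage typically involves hashing, privacy safeguards, and probabilistic matching.

```python
# Minimal sketch of deterministic record linkage between a clinical registry
# and health plan claims, keyed on an invented shared member identifier.
registry = [
    {"member_id": "X1", "disease_activity": "remission"},
    {"member_id": "X2", "disease_activity": "moderate"},
]
claims = [
    {"member_id": "X1", "annual_cost": 12_000},
    {"member_id": "X2", "annual_cost": 21_500},
    {"member_id": "X3", "annual_cost": 9_800},  # no registry match; dropped
]

# Index claims by member, then inner-join onto the registry rows.
cost_by_id = {c["member_id"]: c["annual_cost"] for c in claims}
linked = [
    {**r, "annual_cost": cost_by_id[r["member_id"]]}
    for r in registry
    if r["member_id"] in cost_by_id
]
for row in linked:
    print(row["disease_activity"], row["annual_cost"])
```

With the linked table in hand, cost can be summarized conditional on disease activity category, which neither source could support alone.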
“We’re currently doing another project where one of the local Blue Cross plans said ‘I’m interested to support you to see how efficient you are; we will donate or loan you our claims data [and] let you link it to your practice so you can actually tell us ... cost conditional on [a patient’s] disease activity,’ ” he said.
Another example involves a recent study looking at biomarker-based cardiovascular disease risk prediction in RA using data from nearly 31,000 Medicare patients linked with multibiomarker disease activity (MBDA) test results, with which they “basically built and validated a risk prediction model,” he said.
The point is that such data linkage provided tools for use at the point of care that can predict CVD risk using “some simple things that you and I have in our EHR,” he said. “But you couldn’t do this if you had to assemble a prospective cohort of tens of thousands of arthritis patients and then wait years for follow-up.”
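A point-of-care risk tool of this kind typically reduces to evaluating a fitted logistic (or survival) model on a handful of EHR-available inputs. The sketch below shows the mechanics only; the coefficients and inputs are invented placeholders, not the published MBDA-based model.

```python
# Hedged sketch: applying a logistic risk model at the point of care.
# Coefficients are invented for illustration, not the validated model.
import math

COEFS = {"intercept": -5.0, "age": 0.04, "diabetes": 0.6, "mbda_score": 0.02}

def predicted_risk(age, diabetes, mbda_score):
    """Probability of a CV event from a linear predictor via the logistic link."""
    lp = (COEFS["intercept"]
          + COEFS["age"] * age
          + COEFS["diabetes"] * diabetes
          + COEFS["mbda_score"] * mbda_score)
    return 1.0 / (1.0 + math.exp(-lp))

risk = predicted_risk(age=65, diabetes=1, mbda_score=40)
print(round(risk, 3))  # → 0.269 with these placeholder coefficients
```

Building and validating such a model requires the large linked cohort described above; applying it requires only the patient's own values, which is what makes it feasible at the point of care.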
Patient-reported outcomes collected at the point of care and by patients at home between visits, such as digital data collected via wearable technology, can provide additional information to help improve patient care and management.
“My interest is not to think about [these data sources] in isolation, but really to think about how we bring these together,” he said. “I’m interested in maximizing value for both patients and clinicians, and not having to pick only one of these data sources, but really to harness several of them if that’s what we need to take better care of patients and to answer important questions.”
Doing so is increasingly important given the workforce shortage in rheumatology, he noted.
“The point is that we’re going to need to be a whole lot more efficient as a field because there are going to be fewer of us even at a time when more of us are needed,” he said.
It’s a topic in which the ACR has shown a lot of interest, he said, noting that he cochaired a preconference course on mobile health technologies at the 2018 ACR annual meeting and is involved with a similar course on “big data” ahead of the 2019 meeting.
The thought of making use of the various digital health and “big data” sources can be overwhelming, but the key is to start with the question that needs an answer or the problem that needs to be solved.
“Don’t start with the data,” he explained. “Start with [asking] ... ‘What am I trying to do?’ ”
Dr. Curtis reported funding from the National Institute of Arthritis and Musculoskeletal and Skin Diseases and the Patient-Centered Outcomes Research Institute. He has also consulted for or received research grants from Amgen, AbbVie, Bristol-Myers Squibb, CORRONA, Lilly, Janssen, Myriad, Novartis, Roche, Pfizer, and Sanofi/Regeneron.
EXPERT ANALYSIS FROM FSR 2019
Post-TAVR anticoagulation alone fails to cut stroke risk in AFib
In patients with atrial fibrillation (AFib) who have undergone transcatheter aortic valve replacement (TAVR) and had a CHA2DS2-VASc score of at least 2, oral anticoagulant (OAC) therapy alone was not linked to reduced stroke risk.
By contrast, antiplatelet therapy was linked to a reduced risk of stroke in those AFib-TAVR patients, regardless of whether an oral anticoagulant was on board, according to results of a substudy of the randomized PARTNER II (Placement of Aortic Transcatheter Valve II) trial and its associated registries.
“Anticoagulant therapy was associated with a reduced risk of stroke and the composite of death or stroke when used concomitantly with uninterrupted antiplatelet therapy following TAVR,” concluded authors of the analysis, led by Ioanna Kosmidou, MD, PhD, of Columbia University in New York.
Taken together, these findings suggest OAC alone is “not sufficient” to prevent cerebrovascular events after TAVR in patients with AFib, Dr. Kosmidou and colleagues reported in JACC: Cardiovascular Interventions.
The analysis of the PARTNER II substudy included a total of 1,621 patients with aortic stenosis treated with TAVR who had a history of AFib and an absolute indication for anticoagulation as evidenced by a CHA2DS2-VASc score of at least 2.
Despite the absolute indication for anticoagulation, more than 40% of these patients were not prescribed an OAC upon discharge, investigators wrote, though the rate of nonprescribing decreased over the 5-year enrollment period of 2011-2015.
OAC therapy alone was not linked to reduced stroke risk in this cohort, investigators said. After 2 years, the rate of stroke was 6.6% for AFib-TAVR patients on anticoagulant therapy, and 5.6% for those who were not on anticoagulant therapy, a nonsignificant difference at P = 0.53, according to the reported data.
By contrast, uninterrupted antiplatelet therapy reduced both risk of stroke and risk of the composite endpoint of stroke and death at 2 years “irrespective of concomitant anticoagulation,” Dr. Kosmidou and coinvestigators wrote in the report.
The stroke rates were 5.4% for antiplatelet therapy plus OAC versus 11.1% for those receiving no antithrombotic therapy (P = .03), while the rates of stroke or death were 29.7% and 40.1%, respectively (P = .01), according to investigators.
After adjustment, stroke risk was not significantly reduced for OAC when compared with no OAC or antiplatelet therapy (HR, 0.61; P = .16), whereas stroke risk was indeed reduced for antiplatelet therapy alone (HR, 0.32; P = .002) and antiplatelet therapy with oral anticoagulation (HR, 0.44; P = .018).
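As a rough back-of-envelope check (an assumption on my part, not the study's adjusted Cox model), the reported 2-year event rates imply crude relative risks pointing in the same direction as the adjusted hazard ratios:

```python
# Crude, unadjusted relative risks from the reported 2-year event rates
# (antiplatelet therapy plus OAC vs. no antithrombotic therapy).
# Illustrative only: the study's conclusions rest on adjusted Cox models.
stroke = {"apt_plus_oac": 0.054, "none": 0.111}
death_or_stroke = {"apt_plus_oac": 0.297, "none": 0.401}

rr_stroke = stroke["apt_plus_oac"] / stroke["none"]
rr_composite = death_or_stroke["apt_plus_oac"] / death_or_stroke["none"]
print(round(rr_stroke, 2), round(rr_composite, 2))  # 0.49 0.74
```

The crude stroke ratio (about 0.49) sits near the adjusted HR of 0.44 reported for antiplatelet therapy with OAC, which is reassuring but coincidental; adjustment can move such estimates substantially.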
The PARTNER II study was funded by Edwards Lifesciences. Senior author Martin B. Leon, MD, and several other study coauthors reported disclosures related to Edwards Lifesciences, in addition to Abbott Vascular, Cordis, Medtronic, Boston Scientific, and other companies. Dr. Kosmidou reported no disclosures.
SOURCE: Kosmidou I et al. JACC Cardiovasc Interv. 2019;12:1580-9.
Results of this PARTNER II substudy investigation by Kosmidou and colleagues are timely and thought provoking because they imply that some current recommendations may be insufficient for preventing stroke in patients with atrial fibrillation (AFib) undergoing transcatheter aortic valve replacement (TAVR).
Specifically, the results showed no difference in the risk of stroke or the composite of death and stroke at 2 years between oral anticoagulant (OAC) and non-OAC patient groups, whereas antiplatelet therapy was linked with reduced stroke risk versus no antithrombotic therapy, whether or not patients received OAC.
The substudy reinforces the understanding that TAVR itself is a determinant of stroke because of mechanisms that go beyond thrombus formation in the left atrial appendage and are essentially platelet mediated.
How to manage antithrombotic therapy in patients with AFib who undergo TAVR remains an area of ambiguity, the editorialists said. Observational studies cannot be conclusive, however, so results of relevant prospective, randomized trials are eagerly awaited.
For example, the effects of novel oral anticoagulants versus vitamin K antagonists will be evaluated in the ENVISAGE-TAVI study, as well as the ATLANTIS trial, which will additionally include non-OAC patients.
The relative benefits of OAC alone versus OAC plus antiplatelet therapy will be evaluated in the AVATAR study, which will randomize AFib-TAVR patients to OAC or OAC plus aspirin. The POPular-TAVI and CLOE trials will also include cohorts that should help provide a clearer answer regarding the benefit-risk ratio of combining antiplatelet therapy and OAC in these patients.
Davide Capodanno, MD, PhD, and Antonio Greco, MD, of the University of Catania (Italy) made these comments in an accompanying editorial (JACC: Cardiovasc Interv. 2019 Aug 19. doi: 10.1016/j.jcin.2019.07.004). Dr. Capodanno reported disclosures related to Abbott Vascular, Amgen, AstraZeneca, Bayer, Boehringer Ingelheim, Daiichi-Sankyo, and Sanofi. Dr. Greco reported having no relevant disclosures.
FROM JACC: CARDIOVASCULAR INTERVENTIONS
FDA approves baroreflex activation for advanced HF
The Food and Drug Administration has approved the Barostim Neo System, an electronic carotid sinus baroreceptor stimulator, for advanced heart failure patients who have a regular heart rhythm, an ejection fraction of 35% or less, and who are not candidates for cardiac resynchronization.
A tiny, unilateral electrode delivers a pulse that decreases sympathetic but increases parasympathetic tone. The effect is that blood vessels relax and production of stress hormones drops. The device is powered by a small generator implanted under the collarbone.
Approval was based on BeAT-HF, a randomized trial of 408 patients on guideline-directed medical therapy who had left ventricular ejection fractions at or below 35% and New York Heart Association class III disease.
At 6 months, the 125 patients implanted with the device had improved about 14 points more than controls on a quality-of-life scale, walked about 60 meters farther in 6 minutes, and were more likely to have improved by one or two functional classes. The benefits corresponded with a drop in N-terminal pro-B-type natriuretic peptide (NT-proBNP).
Possible complications include infection, low blood pressure, nerve damage, arterial damage, heart failure exacerbation, stroke, and death. Contraindications include certain nervous system disorders, uncontrolled and symptomatic bradycardia, and atherosclerosis or ulcerative carotid plaques near the implant zone, the FDA said.
The system, from CVRx in Minneapolis, received priority review as a breakthrough device. The agency is requiring a phase 4 investigation of its potential to reduce hospitalizations and prolong life.
Fluoride exposure during pregnancy tied to lower IQ score in children
Fluoride exposure during pregnancy was tied to lower IQ scores in children, with boys having a lower mean score than girls, according to a recent prospective, multicenter birth cohort study.
“These findings were observed at fluoride levels typically found in white North American women,” wrote Rivka Green, York University, Toronto, and colleagues. “This indicates the possible need to reduce fluoride intake during pregnancy.”
This study confirms findings in a 2017 study suggesting a relationship between maternal fluoride levels and children’s later cognitive scores.
Ms. Green and colleagues evaluated 512 mother-child pairs in the Maternal-Infant Research on Environmental Chemicals (MIREC) cohort from six Canadian cities. The children were born between 2008 and 2012, underwent neurodevelopmental testing between ages 3 and 4 years, and were assessed using the Wechsler Preschool and Primary Scale of Intelligence, Third Edition, Full Scale IQ (FSIQ) test.
Of these, 400 mother-child pairs had data on fluoride intake, IQ, and complete covariate data; 141 of these mothers lived in areas with fluoridated tap water, while 228 mothers lived in areas without fluoridated tap water. Maternal urinary fluoride adjusted for specific gravity (MUFSG) was averaged across three trimesters of data, and the estimated fluoride level was obtained through self-reported exposure by women included in the study.
The researchers found mothers living in areas with fluoridated water had significantly higher MUFSG levels (0.69 mg/L), compared with women in areas without fluoridated water (0.40 mg/L; P = .001). The median estimated fluoride intake was significantly higher among women living in areas with fluoridated water (0.93 mg per day) than in women who did not live in areas with fluoridated water (0.30 mg per day; P < .001).
Overall, children scored a mean 107.16 (range, 52-143) on the IQ test, and girls had significantly higher mean IQ scores than did boys (109.56 vs. 104.61; P = .001). After adjusting for covariates of maternal age, race, parity, smoking and alcohol status during pregnancy, child gender, gestational age, and birth weight, the researchers found a significant interaction between MUFSG and child gender (P = .02): a 1-mg/L increase in MUFSG was associated with a 4.49-point decrease in IQ score in boys (95% confidence interval, −8.38 to −0.60) but not in girls. There also was an association between a 1-mg higher daily maternal fluoride intake and decreased IQ score in both boys and girls (−3.66; 95% CI, −7.16 to −0.15; P = .04).
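To put the reported coefficients in context, the arithmetic below applies them to the difference in median MUFSG between fluoridated and nonfluoridated areas. This is an illustration under an assumption of linearity in the reported model, not an analysis from the paper:

```python
# Illustrative arithmetic only (assumes the reported linear association):
# apply the paper's coefficients to the observed gap in median MUFSG.
beta_boys_per_mg_l = -4.49   # IQ points per 1 mg/L MUFSG, boys (reported)
beta_intake_per_mg = -3.66   # IQ points per 1 mg/day intake, both sexes (reported)

# Median MUFSG: 0.69 mg/L (fluoridated areas) vs. 0.40 mg/L (nonfluoridated)
delta_mufsg = 0.69 - 0.40
predicted_iq_shift_boys = beta_boys_per_mg_l * delta_mufsg
print(round(predicted_iq_shift_boys, 2))  # -1.3
```

On that reading, the fluoridated-vs-nonfluoridated gap in median urinary fluoride would correspond to a predicted difference of roughly 1.3 IQ points in boys, well inside the wide confidence interval the authors report.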
Ms. Green and her colleagues acknowledged several limitations with the study, such as the short half-life of urinary fluoride and the potential inaccuracy of maternal urinary samples at predicting fetal exposure to fluoride, the self-reported nature of estimated fluoride consumption, lack of availability of maternal IQ data, and not including postnatal exposure and consumption of fluoride.
In a related editorial, David C. Bellinger, PhD, MSc, referred to a previous prospective study in Mexico City by Bashash et al. that found a maternal fluoride level of 0.9 mg/L was associated with a decrease in cognitive scores in children at 4 years and between 6 years and 12 years (Environ Health Perspect. 2017;125(9):097017. doi: 10.1289/EHP655), and noted the effect sizes seen in the Mexico City study were similar to those reported by Green et al. “If the effect sizes reported by Green et al. and others are valid, the total cognitive loss at the population level that might be associated with children’s prenatal exposure to fluoride could be substantial,” he said.
The study raises many questions, including whether there is a concentration where neurotoxicity risk is negligible, if gender plays a role (there was no gender risk difference in Bashash et al.), whether other developmental domains are affected apart from IQ, and if postnatal exposure carries a risk, Dr. Bellinger said. “The findings of Green et al. and others indicate that a dispassionate and tempered discussion of fluoride’s potential neurotoxicity is warranted, including consideration of what additional research is needed to reach more definitive conclusions about the implications, if any, for public health,” he said.
Dimitri A. Christakis, MD, MPH, editor of JAMA Pediatrics and director of the Center for Child Health, Behavior, and Development at Seattle Children’s Research Institute, said in an editor’s note that it was not an easy decision to publish the article because of the potential implications of the findings.
“The mission of the journal is to ensure that child health is optimized by bringing the best available evidence to the fore,” he said. “Publishing it serves as testament to the fact that JAMA Pediatrics is committed to disseminating the best science based entirely on the rigor of the methods and the soundness of the hypotheses tested, regardless of how contentious the results may be.”
However, “scientific inquiry is an iterative process,” Dr. Christakis said, and rarely does a single study provide “definitive evidence.
“We hope that purveyors and consumers of these findings are mindful of that as the implications of this study are debated in the public arena.”
This study was funded by a grant from the National Institute of Environmental Health Sciences, and the MIREC Study was funded by the Chemicals Management Plan at Health Canada, the Ontario Ministry of the Environment, and the Canadian Institutes for Health Research. Dr. Bruce Lanphear reported being an unpaid expert witness for an upcoming case involving the U.S. Environmental Protection Agency and water fluoridation. Dr. Richard Hornung reported receiving personal fees from York University. Dr. E. Angeles Martinez-Mier reported receiving grants from the National Institutes of Health. The other authors reported no relevant conflicts of interest. Dr. Bellinger reported no relevant conflicts of interest with regard to his editorial.
SOURCES: Green R et al. JAMA Pediatr. 2019. doi: 10.1001/jamapediatrics.2019.1729; Bellinger DC. JAMA Pediatr. 2019. doi: 10.1001/jamapediatrics.2019.1728.
with boys having a lower mean score than girls, according to a recent prospective, multicenter birth cohort study.
“These findings were observed at fluoride levels typically found in white North American women,” wrote Rivka Green, York University, Toronto, and colleagues. “This indicates the possible need to reduce fluoride intake during pregnancy.”
This study confirms findings in a 2017 study suggesting a relationship between maternal fluoride levels and children’s later cognitive scores.
Ms. Green and colleagues evaluated 512 mother-child pairs in the Maternal-Infant Research on Environmental Chemicals (MIREC) cohort from six Canadian cities. The children were born between 2008 and 2012, underwent neurodevelopmental testing between 3 and 4 years, and were assessed using the Wechsler Preschool and Primary Scale of Intelligence, Third Edition. Full Scale IQ (FSIQ) test.
Of these, 400 mother-child pairs had data on fluoride intake, IQ, and complete covariate data; 141 of these mothers lived in areas with fluoridated tap water, while 228 mothers lived in areas without fluoridated tap water. Maternal urinary fluoride adjusted for specific gravity (MUFSG) was averaged across three trimesters of data, and the estimated fluoride level was obtained through self-reported exposure by women included in the study.
The researchers found mothers living in areas with fluoridated water had significantly higher MUFSG levels (0.69 mg/L), compared with women in areas without fluoridated water (0.40 mg/L; P equals .001). The median estimated fluoride intake was significantly higher among women living in areas with fluoridated water (0.93 mg per day) than in women who did not live in areas with fluoridated water (0.30 mg per day; P less than .001).
Overall, children scored a mean 107.16 (range, 52-143) on the IQ test, and girls had significantly higher mean IQ scores than did boys (109.56 vs. 104.61; P = .001). After adjusting for covariates of maternal age, race, parity, smoking, and alcohol status during pregnancy, child gender, gestational age, and birth weight, the researchers found a significant interaction between MUFSG and the child’s gender (P = .02), and a 1-mg/L MUFSG increase was associated with a decrease in 4.49 IQ points in boys (95% confidence interval, −8.38 to −0.60) but not girls. There also was an association between 1-mg higher daily intake of maternal fluoride intake and decreased IQ score in both boys and girls (−3.66; 95% CI, −7.16 to −0.15 ; P = .04).
Ms. Green and her colleagues acknowledged several limitations with the study, such as the short half-life of urinary fluoride and the potential inaccuracy of maternal urinary samples at predicting fetal exposure to fluoride, the self-reported nature of estimated fluoride consumption, lack of availability of maternal IQ data, and not including postnatal exposure and consumption of fluoride.
In a related editorial, David C. Bellinger, PhD, MSc, referred to a previous prospective study in Mexico City by Bashash et al. that found a maternal fluoride level of 0.9 mg/L was associated with a decrease in cognitive scores in children at 4 years and between 6 years and 12 years (Environ Health Perspect. 2017;125(9):097017. doi: 10.1289/EHP655), and noted the effect sizes seen in the Mexico City study were similar to those reported by Green et al. “If the effect sizes reported by Green et al. and others are valid, the total cognitive loss at the population level that might be associated with children’s prenatal exposure to fluoride could be substantial,” he said.
The study raises many questions, including whether there is a concentration where neurotoxicity risk is negligible, if gender plays a role (there was no gender risk difference in Bashash et al.), whether other developmental domains are affected apart from IQ, and if postnatal exposure carries a risk, Dr. Bellinger said. “The findings of Green et al. and others indicate that a dispassionate and tempered discussion of fluoride’s potential neurotoxicity is warranted, including consideration of what additional research is needed to reach more definitive conclusions about the implications, if any, for public health,” he said.
Dimitri A. Christakis, MD, MPH, editor of JAMA Pediatrics and director of the Center for Child Health, Behavior, and Development at Seattle Children’s Research Institute, said in an editor’s note that it was not an easy decision to publish the article because of the potential implications of the findings.
“The mission of the journal is to ensure that child health is optimized by bringing the best available evidence to the fore,” he said. “Publishing it serves as testament to the fact that JAMA Pediatrics is committed to disseminating the best science based entirely on the rigor of the methods and the soundness of the hypotheses tested, regardless of how contentious the results may be.”
However, “scientific inquiry is an iterative process,” Dr. Christakis said, and rarely does a single study provide “definitive evidence.
“We hope that purveyors and consumers of these findings are mindful of that as the implications of this study are debated in the public arena.”
This study was funded in a grant from the National Institute of Environmental Health Science, and the MIREC Study was funded by Chemicals Management Plan at Health Canada, the Ontario Ministry of the Environment, and the Canadian Institutes for Health Research. Dr. Bruce Lanphear reported being an unpaid expert witness for an upcoming case involving the U.S. Environmental Protection Agency and water fluoridation. Dr. Richard Hornung reported receiving personal fees from York University. Dr. E. Angeles Martinez-Mier reported receiving grants from the National Institutes of Health. The other authors report no relevant conflicts of interest. Dr. Bellinger reported no relevant conflicts of interest with regard to his editorial.
SOURCEs: Green R et al. JAMA Pediatr. 2019. doi: 10.1001/jamapediatrics.2019.1729; Bellinger. JAMA Pediatr. 2019. doi: 10.1001/ jamapediatrics.2019.1728.
with boys having a lower mean score than girls, according to a recent prospective, multicenter birth cohort study.
“These findings were observed at fluoride levels typically found in white North American women,” wrote Rivka Green, York University, Toronto, and colleagues. “This indicates the possible need to reduce fluoride intake during pregnancy.”
This study confirms findings in a 2017 study suggesting a relationship between maternal fluoride levels and children’s later cognitive scores.
Ms. Green and colleagues evaluated 512 mother-child pairs in the Maternal-Infant Research on Environmental Chemicals (MIREC) cohort from six Canadian cities. The children were born between 2008 and 2012, underwent neurodevelopmental testing between 3 and 4 years, and were assessed using the Wechsler Preschool and Primary Scale of Intelligence, Third Edition. Full Scale IQ (FSIQ) test.
Of these, 400 mother-child pairs had data on fluoride intake, IQ, and complete covariate data; 141 of these mothers lived in areas with fluoridated tap water, while 228 mothers lived in areas without fluoridated tap water. Maternal urinary fluoride adjusted for specific gravity (MUFSG) was averaged across three trimesters of data, and the estimated fluoride level was obtained through self-reported exposure by women included in the study.
The researchers found that mothers living in areas with fluoridated water had significantly higher MUFSG levels (0.69 mg/L) than women in areas without fluoridated water (0.40 mg/L; P = .001). The median estimated fluoride intake was also significantly higher among women living in areas with fluoridated water (0.93 mg per day vs. 0.30 mg per day; P < .001).
Overall, children scored a mean 107.16 (range, 52-143) on the IQ test, and girls had significantly higher mean IQ scores than did boys (109.56 vs. 104.61; P = .001). After adjusting for the covariates of maternal age, race, parity, smoking and alcohol status during pregnancy, child gender, gestational age, and birth weight, the researchers found a significant interaction between MUFSG and child gender (P = .02): a 1-mg/L increase in MUFSG was associated with a decrease of 4.49 IQ points in boys (95% confidence interval, −8.38 to −0.60) but not in girls. A 1-mg higher daily maternal fluoride intake also was associated with a decreased IQ score in both boys and girls (−3.66; 95% CI, −7.16 to −0.15; P = .04).
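To put the reported coefficient in perspective, a back-of-the-envelope calculation (using only the point estimates quoted above, not the study's adjusted regression model) multiplies the slope for boys by the observed difference in mean MUFSG between fluoridated and nonfluoridated areas:

```python
# Back-of-the-envelope sketch: apply the reported slope for boys
# (-4.49 IQ points per 1 mg/L MUFSG) to the observed difference in
# mean MUFSG between fluoridated and nonfluoridated areas.
# Point estimates only; the study's covariate-adjusted model is not
# reproduced here.

slope_boys = -4.49           # IQ points per 1 mg/L MUFSG, boys only
mufsg_fluoridated = 0.69     # mg/L, mean in fluoridated areas
mufsg_nonfluoridated = 0.40  # mg/L, mean in nonfluoridated areas

delta_mufsg = mufsg_fluoridated - mufsg_nonfluoridated
expected_iq_difference = slope_boys * delta_mufsg

print(f"Expected IQ difference for boys: {expected_iq_difference:.2f} points")
# Roughly -1.3 IQ points at the population-mean difference in exposure
```

This is only a scaling exercise; the confidence interval reported above (−8.38 to −0.60) shows how wide the plausible range around that point estimate is.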
Ms. Green and her colleagues acknowledged several limitations of the study, including the short half-life of urinary fluoride, the potential inaccuracy of maternal urinary samples at predicting fetal exposure to fluoride, the self-reported nature of estimated fluoride consumption, the lack of maternal IQ data, and the lack of data on postnatal fluoride exposure and consumption.
In a related editorial, David C. Bellinger, PhD, MSc, referred to a previous prospective study in Mexico City by Bashash et al. that found a maternal fluoride level of 0.9 mg/L was associated with a decrease in cognitive scores in children at 4 years and between 6 years and 12 years (Environ Health Perspect. 2017;125(9):097017. doi: 10.1289/EHP655), and noted the effect sizes seen in the Mexico City study were similar to those reported by Green et al. “If the effect sizes reported by Green et al. and others are valid, the total cognitive loss at the population level that might be associated with children’s prenatal exposure to fluoride could be substantial,” he said.
The study raises many questions, including whether there is a concentration where neurotoxicity risk is negligible, if gender plays a role (there was no gender risk difference in Bashash et al.), whether other developmental domains are affected apart from IQ, and if postnatal exposure carries a risk, Dr. Bellinger said. “The findings of Green et al. and others indicate that a dispassionate and tempered discussion of fluoride’s potential neurotoxicity is warranted, including consideration of what additional research is needed to reach more definitive conclusions about the implications, if any, for public health,” he said.
Dimitri A. Christakis, MD, MPH, editor of JAMA Pediatrics and director of the Center for Child Health, Behavior, and Development at Seattle Children’s Research Institute, said in an editor’s note that it was not an easy decision to publish the article because of the potential implications of the findings.
“The mission of the journal is to ensure that child health is optimized by bringing the best available evidence to the fore,” he said. “Publishing it serves as testament to the fact that JAMA Pediatrics is committed to disseminating the best science based entirely on the rigor of the methods and the soundness of the hypotheses tested, regardless of how contentious the results may be.”
However, “scientific inquiry is an iterative process,” Dr. Christakis said, and rarely does a single study provide “definitive evidence.
“We hope that purveyors and consumers of these findings are mindful of that as the implications of this study are debated in the public arena.”
This study was funded by a grant from the National Institute of Environmental Health Sciences, and the MIREC Study was funded by the Chemicals Management Plan at Health Canada, the Ontario Ministry of the Environment, and the Canadian Institutes of Health Research. Dr. Bruce Lanphear reported being an unpaid expert witness for an upcoming case involving the U.S. Environmental Protection Agency and water fluoridation. Dr. Richard Hornung reported receiving personal fees from York University. Dr. E. Angeles Martinez-Mier reported receiving grants from the National Institutes of Health. The other authors reported no relevant conflicts of interest. Dr. Bellinger reported no relevant conflicts of interest with regard to his editorial.
SOURCES: Green R et al. JAMA Pediatr. 2019. doi: 10.1001/jamapediatrics.2019.1729; Bellinger DC. JAMA Pediatr. 2019. doi: 10.1001/jamapediatrics.2019.1728.
FROM JAMA PEDIATRICS
ASCO VTE guideline update: DOACs now an option for prevention, treatment
The direct oral anticoagulants (DOACs) apixaban and rivaroxaban are now among the options for thromboprophylaxis in high-risk cancer outpatients with low risk for bleeding and drug interactions, according to a practice guideline update from the American Society of Clinical Oncology.
Rivaroxaban also has been added as an option for initial anticoagulation for venous thromboembolism (VTE), and both rivaroxaban and edoxaban are now options for long-term anticoagulation, Nigel S. Key, MB ChB, and colleagues wrote in the updated guideline on the prophylaxis and treatment of VTE – including deep vein thrombosis (DVT) and pulmonary embolism (PE) – in cancer patients (J Clin Oncol. 2019 Aug 5. doi: 10.1200/JCO.19.01461).
The addition of DOACs as options for VTE prophylaxis and treatment represents the most notable change to the guideline.
“Oral anticoagulants that target thrombin (direct thrombin inhibitor, dabigatran) or activated factor X (antifactor Xa inhibitors, rivaroxaban, apixaban, and edoxaban) are now approved for treatment of DVT or PE as well as for DVT prophylaxis following orthopedic surgery and for reducing the risk of stroke and systemic embolism in patients with nonvalvular atrial fibrillation,” the guideline panel wrote.
A systematic review of PubMed and the Cochrane Library for randomized controlled trials (RCTs) and meta-analyses of RCTs published from Aug. 1, 2014, through Dec. 4, 2018, identified 35 publications on VTE prophylaxis and treatment, including 2 RCTs of DOACs for prophylaxis and 2 others of DOAC treatment, as well as 8 publications on VTE risk assessment. A multidisciplinary expert panel appointed by ASCO and cochaired by Dr. Key of the University of North Carolina, Chapel Hill, used this evidence to develop the updated guideline.
The work was guided by “the ‘signals’ approach that is designed to identify only new, potentially practice-changing data – signals – that might translate into revised practice recommendations,” the authors explained.
DOAC-related updates
VTE prophylaxis. Based in part on findings from the recently published AVERT trial of apixaban in patients initiating a new course of chemotherapy and from the CASSINI trial of rivaroxaban in patients with solid tumors or lymphoma starting systemic antineoplastic therapy, the panel added both agents as thromboprophylactic options that can be offered to high-risk cancer outpatients with no significant risk factors for bleeding or drug interactions (N Engl J Med. 2019;380:711-19; N Engl J Med. 2019;380:720-8).
Low-molecular-weight heparin (LMWH) also remains an option in such patients; consideration of therapy should involve discussion with the patient about relative benefits and harms, drug costs, and “the uncertainty surrounding duration of prophylaxis in this setting,” they wrote.
Anticoagulation for VTE. Options for initial anticoagulation include LMWH, unfractionated heparin (UFH), fondaparinux, and now rivaroxaban, with the latter added based on findings from two RCTs – the SELECT-D trial and the Hokusai VTE-Cancer study – and multiple meta-analyses (J Clin Oncol. 2018;36:2017-23; N Engl J Med. 2018;378:615-24).
Long-term anticoagulation can involve treatment with LMWH, edoxaban, or rivaroxaban for at least 6 months, all of which have improved efficacy versus vitamin K agonists (VKAs), the panel noted. However, VKAs may be used if LMWH and DOACs are not accessible.
Importantly, the literature indicates an increased risk of major bleeding with DOACs, particularly in patients with gastrointestinal malignancies and potentially in those with genitourinary malignancies. “Caution with DOACs is also warranted in other settings with high risk for mucosal bleeding,” the panel wrote.
Additional updates
CNS metastases. The anticoagulation recommendations were also updated to include patients with metastatic central nervous system malignancies (those with primary CNS malignancies were included previously). Both those with primary and metastatic CNS malignancy should be offered anticoagulation for established VTE as described for patients with other types of cancer. However, the panel stressed that “uncertainties remain about choice of agents and selection of patients most likely to benefit.”
“Patients with intracranial tumors are at increased risk for thrombotic complications and intracranial hemorrhage (ICH), but the presence of a stable or active primary intracranial malignancy or brain metastases is not an absolute contraindication to anticoagulation,” they wrote.
Limited evidence suggests that therapeutic anticoagulation does not increase ICH risk in patients with brain metastases, but it may increase risk in those with primary brain tumors, the panel added.
Additionally, preliminary data from a retrospective cohort of patients with metastatic brain disease and venous thrombosis suggest that DOACs may be associated with a lower risk of ICH than is LMWH in this population.
Long-term postoperative LMWH. Extended prophylaxis with LMWH for up to 4 weeks is recommended after major open or laparoscopic abdominal or pelvic surgery in cancer patients with high-risk features, such as restricted mobility, obesity, history of VTE, or with additional risk factors. Lower-risk surgical settings require case-by-case decision making about appropriate thromboprophylaxis duration, according to the update.
A 2014 RCT of thromboprophylaxis duration in 225 patients undergoing laparoscopic surgery for colorectal cancer prompted the addition of laparoscopic surgery to this recommendation. In that study, VTE occurred by 4 weeks in nearly 10% of patients who received 1 week of prophylaxis and in no patients in the 4-week arm. Major bleeding occurred in one patient in the 1-week arm and in none in the 4-week arm (Ann Surg. April 2014;259[4]:665-9).
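The trial's approximate event rates translate into a rough absolute risk reduction and number needed to treat. The sketch below uses only the rounded figures quoted above ("nearly 10%" vs. zero events), not the trial's exact endpoint counts:

```python
# Rough absolute-risk-reduction (ARR) and number-needed-to-treat (NNT)
# sketch using the approximate event rates quoted in the article
# ("nearly 10%" vs. 0%); the trial's exact event counts are not
# reproduced here.

risk_1_week = 0.10   # ~10% VTE by 4 weeks with 1 week of prophylaxis
risk_4_weeks = 0.0   # no VTE events in the 4-week arm

arr = risk_1_week - risk_4_weeks
nnt = 1 / arr        # patients treated with extended prophylaxis per VTE avoided

print(f"ARR: {arr:.0%}, NNT: {nnt:.0f}")
# ARR: 10%, NNT: 10
```

In other words, on these rounded figures, roughly 10 patients would need extended prophylaxis to prevent one VTE, against roughly one additional major bleed per 225 patients.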
Reaffirmed recommendations
Based on the latest available data, the panel reaffirmed that most hospitalized patients with cancer and an acute medical condition require thromboprophylaxis for the duration of their hospitalization and that thromboprophylaxis should not be routinely recommended for all outpatients with cancer.
The panel also reaffirmed the need for thromboprophylaxis starting preoperatively and continuing for at least 7-10 days in patients undergoing major cancer surgery, the need for periodic assessment of VTE risk in cancer patients, and the importance of patient education about the signs and symptoms of VTE.
Perspective and future directions
In an interview, David H. Henry, MD, said he was pleased to see ASCO incorporate the latest DOAC data into the VTE guideline.
The AVERT and CASSINI studies, in particular, highlight the value of using the Khorana Risk Score, which considers cancer type, blood counts, and body mass index to predict the risk of thrombosis in cancer patients and to guide decisions regarding prophylaxis, said Dr. Henry, vice chair of the department of medicine and clinical professor of medicine at Penn Medicine’s Abramson Cancer Center, Philadelphia.
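For illustration, the Khorana Risk Score mentioned above can be sketched in code. The components and point values below follow the commonly published version of the score (primary tumor site, platelet count, hemoglobin, leukocyte count, and BMI); they come from general knowledge of the score, not from the ASCO guideline text, so treat the exact cutoffs as assumptions:

```python
# Minimal sketch of the Khorana Risk Score as commonly published.
# Site categories and cutoffs are a simplified rendering taken from
# general knowledge of the score, not from the ASCO guideline text.

def khorana_score(cancer_site, platelets_per_uL, hemoglobin_g_dL,
                  leukocytes_per_uL, bmi):
    score = 0
    # Site of primary cancer
    if cancer_site in {"stomach", "pancreas"}:          # very high risk
        score += 2
    elif cancer_site in {"lung", "lymphoma", "gynecologic",
                         "bladder", "testicular"}:      # high risk
        score += 1
    if platelets_per_uL >= 350_000:                     # prechemotherapy platelets
        score += 1
    if hemoglobin_g_dL < 10:    # or use of erythropoiesis-stimulating agents
        score += 1
    if leukocytes_per_uL > 11_000:
        score += 1
    if bmi >= 35:
        score += 1
    return score

# AVERT and CASSINI enrolled patients with a score of 2 or higher.
print(khorana_score("pancreas", 360_000, 11.2, 8_000, 27))  # prints 3
```

A pancreatic cancer patient with a mildly elevated platelet count, as in the example, already clears the trial-entry threshold on tumor site and platelets alone.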
The DOACs also represent “a nice new development in the treatment setting,” he said, adding that it has long been known – since the 2003 CLOT trial – that cancer patients with VTE had much lower recurrence rates with LMWH than with warfarin (Coumadin).
“Now fast forward to the modern era ... and DOACs now appear to be a good idea,” he said.
Dr. Henry also addressed the recommendation for expanded postoperative LMWH use.
“That I found interesting; I’m not sure what took them so long,” he said, explaining that National Comprehensive Cancer Network and European Society of Medical Oncology recommendations have long stated that, for patients with abdominal cancers who undergo abdominopelvic surgery, DVT prophylaxis should continue for 4 weeks.
Dr. Henry said that a survey at his center showed that those recommendations were “very poorly followed,” with surgeons giving 4 weeks of prophylaxis in just 5% of cases.
“The good news from our survey was that not many people had a VTE, despite not many people following the recommendations, but I must say I think our surgeons are catching on,” he said.
Overall, the updated guideline highlights the importance of considering the “cancer variable” when it comes to VTE prevention and treatment.
“We’ve known forever that when we diagnose a DVT or PE in the outpatient setting – and this is independent of cancer – that you should treat it. Add the cancer variable and we now know that we should worry and try to prevent the VTE in certain high-risk patients, and there are some drugs to do it with,” he said, adding that “you should worry about the person you’ve just provoked [with surgery] as well.”
An important question not addressed in the guideline update is the indefinite use of DOACs in cancer patients with ongoing risk, he said.
“When we see DVT or PE, we usually treat for 3 months – that’s the industry standard – and at the end of 3 months ... you do a time out and you say to yourself, ‘Was this person provoked?’ ” he said.
For example, if they took a long flight or if pregnancy was a factor, treatment can usually be safely stopped. However, in a cancer patient who still has cancer, the provocation continues, and the patient may require indefinite treatment.
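Dr. Henry's 3-month "time out" can be caricatured as a simple decision rule. Everything below is a hypothetical illustration of the reasoning he describes, not a clinical algorithm; the function and parameter names are invented:

```python
# Hypothetical sketch of the 3-month "time out" logic described above.
# Function and parameter names are illustrative only, not from any
# guideline; real decisions weigh bleeding risk and patient preference.

def continue_anticoagulation(provoking_factor_resolved, active_cancer):
    """Return True if anticoagulation should continue past 3 months."""
    if active_cancer:
        # Cancer is an ongoing provocation: consider indefinite treatment.
        return True
    # A transient provocation (e.g., a long flight, pregnancy) that has
    # resolved: treatment can usually be stopped after 3 months.
    return not provoking_factor_resolved

print(continue_anticoagulation(provoking_factor_resolved=True,
                               active_cancer=False))   # False: can stop
print(continue_anticoagulation(provoking_factor_resolved=True,
                               active_cancer=True))    # True: cancer ongoing
```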
Remaining questions include how to define “indefinite” and whether (and which of) these drugs can be used indefinitely in such patients, Dr. Henry said.
Dr. Key reported receiving honoraria from Novo Nordisk, research funding to his institution from Baxter Biosciences, Grifols, and Pfizer, and serving as a consultant or advisor for Genentech, Roche, Uniqure, Seattle Genetics, and Shire Human Genetic Therapies. Numerous disclosures were also reported by other expert panel members.
The direct oral anticoagulants (DOACs) apixaban and rivaroxaban are now among the options for thromboprophylaxis in high-risk cancer outpatients with low risk for bleeding and drug interactions, according to a practice guideline update from the American Society of Clinical Oncology.
Rivaroxaban also has been added as an option for initial anticoagulation for venous thromboembolism (VTE), and both rivaroxaban and edoxaban are now options for long-term anticoagulation, Nigel S. Key, MB ChB, and colleagues wrote in the updated guideline on the prophylaxis and treatment of VTE – including deep vein thrombosis (DVT) and pulmonary embolism (PE) – in cancer patients (J Clin Oncol. 2019 Aug 5. doi: 10.1200/JCO.19.19.01461).
The addition of DOACs as options for VTE prophylaxis and treatment represents the most notable change to the guideline.
“Oral anticoagulants that target thrombin (direct thrombin inhibitor, dabigatran) or activated factor X (antifactor Xa inhibitors, rivaroxaban, apixaban, and edoxaban) are now approved for treatment of DVT or PE as well as for DVT prophylaxis following orthopedic surgery and for reducing the risk of stroke and systemic embolism in patients with nonvalvular atrial fibrillation,” the guideline panel wrote.
A systematic review of PubMed and the Cochrane Library for randomized controlled trials (RCTs) and meta-analyses of RCTs published from Aug. 1, 2014, through Dec. 4, 2018, identified 35 publications on VTE prophylaxis and treatment, including 2 RCTs of DOACs for prophylaxis and 2 others of DOAC treatment, as well as 8 publications on VTE risk assessment. A multidisciplinary expert panel appointed by ASCO and cochaired by Dr. Key of the University of North Carolina, Chapel Hill, used this evidence to develop the updated guideline.
The work was guided by “the ‘signals’ approach that is designed to identify only new, potentially practice-changing data – signals – that might translate into revised practice recommendations,” the authors explained.
DOAC-related updates
VTE prophylaxis. Based in part on findings from the recently published AVERT trial of apixaban in patients initiating a new course of chemotherapy and from the CASSINI trial of rivaroxaban in patients with solid tumors or lymphoma starting systemic antineoplastic therapy, the panel added both agents as thromboprophylactic options that can be offered to high-risk cancer outpatients with no significant risk factors for bleeding or drug interactions (N Engl J Med. 2019;380:711-19; N Engl J Med. 2019;380:720-8).
Low-molecular-weight heparin (LMWH) also remains an option in such patients; consideration of therapy should involve discussion with the patient about relative benefits and harms, drug costs, and “the uncertainty surrounding duration of prophylaxis in this setting,” they wrote.
Anticoagulation for VTE. Options for initial anticoagulation include LMWH, unfractionated heparin (UFH), fondaparinux, and now rivaroxaban, with the latter added based on findings from two RCTs – the SELECT-D trial and the Hokusai VTE-Cancer study – and multiple meta-analyses (J Clin Oncol. 2018;36:2017-23; N Engl J Med. 2018;378:615-24).
Long-term anticoagulation can involve treatment with LMWH, edoxaban, or rivaroxaban for at least 6 months, all of which have improved efficacy versus vitamin K agonists (VKAs), the panel noted. However, VKAs may be used if LMWH and DOACs are not accessible.
Importantly, the literature indicates an increased risk of major bleeding with DOACs, particularly in patients with gastrointestinal malignancies and potentially in those with genitourinary malignancies. “Caution with DOACs is also warranted in other settings with high risk for mucosal bleeding,” the panel wrote.
Additional updates
CNS metastases. The anticoagulation recommendations were also updated to include patients with metastatic central nervous system malignancies (those with primary CNS malignancies were included previously). Both those with primary and metastatic CNS malignancy should be offered anticoagulation for established VTE as described for patients with other types of cancer. However, the panel stressed that “uncertainties remain about choice of agents and selection of patients most likely to benefit.”
“Patients with intracranial tumors are at increased risk for thrombotic complications and intracranial hemorrhage (ICH), but the presence of a stable or active primary intracranial malignancy or brain metastases is not an absolute contraindication to anticoagulation,” they wrote.
Limited evidence suggests that therapeutic anticoagulation does not increase ICH risk in patients with brain metastases, but it may increase risk in those with primary brain tumors, the panel added.
Additionally, preliminary data from a retrospective cohort of patients with metastatic brain disease and venous thrombosis suggest that DOACs may be associated with a lower risk of ICH than is LMWH in this population.
Long-term postoperative LMWH. Extended prophylaxis with LMWH for up to 4 weeks is recommended after major open or laparoscopic abdominal or pelvic surgery in cancer patients with high-risk features, such as restricted mobility, obesity, history of VTE, or with additional risk factors. Lower-risk surgical settings require case-by-case decision making about appropriate thromboprophylaxis duration, according to the update.
A 2014 RCT looking at thromboprophylaxis duration in 225 patients undergoing laparoscopic surgery for colorectal cancer prompted the addition of laparoscopic surgery to this recommendation. In that study, VTE occurred by 4 weeks in nearly 10% of patients receiving 1 week of prophylaxis and in no patients in the 4-week arm. Major bleeding occurred in one versus zero patients in the thromboprophylaxis arms, respectively (Ann Surg. April 2014;259[4]:665-9).
Reaffirmed recommendations
Based on the latest available data, the panel reaffirmed that most hospitalized patients with cancer and an acute medical condition require thromboprophylaxis for the duration of their hospitalization and that thromboprophylaxis should not be routinely recommended for all outpatients with cancer.
The panel also reaffirmed the need for thromboprophylaxis starting preoperatively and continuing for at least 7-10 days in patients undergoing major cancer surgery, the need for periodic assessment of VTE risk in cancer patients, and the importance of patient education about the signs and symptoms of VTE.
Perspective and future directions
In an interview, David H. Henry, MD, said he was pleased to see ASCO incorporate the latest DOAC data into the VTE guideline.
The AVERT and CASSINI studies, in particular, highlight the value of using the Khorana Risk Score, which considers cancer type, blood counts, and body mass index to predict the risk of thrombosis in cancer patients and to guide decisions regarding prophylaxis, said Dr. Henry, vice chair of the department of medicine and clinical professor of medicine at Penn Medicine’s Abramson Cancer Center, Philadelphia.
The DOACs also represent “a nice new development in the treatment setting,” he said, adding that it’s been long known – since the 2003 CLOT trial – that cancer patients with VTE had much lower recurrence rates with LMWH versus warfarin (Coumadin).
“Now fast forward to the modern era ... and DOACs now appear to be a good idea,” he said.
Dr. Henry also addressed the recommendation for expanded postoperative LMWH use.
“That I found interesting; I’m not sure what took them so long,” he said, explaining that National Comprehensive Cancer Network and European Society of Medical Oncology recommendations have long stated that, for patients with abdominal cancers who undergo abdominopelvic surgery, DVT prophylaxis should continue for 4 weeks.
Dr. Henry said that a survey at his center showed that those recommendations were “very poorly followed,” with surgeons giving 4 weeks of prophylaxis in just 5% of cases.
“The good news from our survey was that not many people had a VTE, despite not many people following the recommendations, but I must say I think our surgeons are catching on,” he said.
Overall, the updated guideline highlights the importance of considering the “cancer variable” when it comes to VTE prevention and treatment.
“We’ve known forever that when we diagnose a DVT or PE in the outpatient setting – and this is independent of cancer – that you should treat it. Add the cancer variable and we now know that we should worry and try to prevent the VTE in certain high-risk patients, and there are some drugs to do it with,” he said, adding that “you should worry about the person you’ve just provoked [with surgery] as well.”
An important question not addressed in the guideline update is the indefinite use of DOACs in cancer patients with ongoing risk, he said.
“When we see DVT or PE, we usually treat for 3 months – that’s the industry standard – and at the end of 3 months ... you do a time out and you say to yourself, ‘Was this person provoked?’ ” he said.
For example, if they took a long flight or if pregnancy was a factor, treatment can usually be safely stopped. However, in a cancer patient who still has cancer, the provocation continues, and the patient may require indefinite treatment.
Questions that remain involve defining “indefinite” and include whether (and which of) these drugs can be used indefinitely in such patients, Dr. Henry said.
Dr. Key reported receiving honoraria from Novo Nordisk, research funding to his institution from Baxter Biosciences, Grifols, and Pfizer, and serving as a consultant or advisor for Genentech, Roche, Uniqure, Seattle Genetics, and Shire Human Genetic Therapies. Numerous disclosures were also reported by other expert panel members.
The direct oral anticoagulants (DOACs) apixaban and rivaroxaban are now among the options for thromboprophylaxis in high-risk cancer outpatients with low risk for bleeding and drug interactions, according to a practice guideline update from the American Society of Clinical Oncology.
Rivaroxaban also has been added as an option for initial anticoagulation for venous thromboembolism (VTE), and both rivaroxaban and edoxaban are now options for long-term anticoagulation, Nigel S. Key, MB ChB, and colleagues wrote in the updated guideline on the prophylaxis and treatment of VTE – including deep vein thrombosis (DVT) and pulmonary embolism (PE) – in cancer patients (J Clin Oncol. 2019 Aug 5. doi: 10.1200/JCO.19.19.01461).
The addition of DOACs as options for VTE prophylaxis and treatment represents the most notable change to the guideline.
“Oral anticoagulants that target thrombin (direct thrombin inhibitor, dabigatran) or activated factor X (antifactor Xa inhibitors, rivaroxaban, apixaban, and edoxaban) are now approved for treatment of DVT or PE as well as for DVT prophylaxis following orthopedic surgery and for reducing the risk of stroke and systemic embolism in patients with nonvalvular atrial fibrillation,” the guideline panel wrote.
A systematic review of PubMed and the Cochrane Library for randomized controlled trials (RCTs) and meta-analyses of RCTs published from Aug. 1, 2014, through Dec. 4, 2018, identified 35 publications on VTE prophylaxis and treatment, including 2 RCTs of DOACs for prophylaxis and 2 others of DOAC treatment, as well as 8 publications on VTE risk assessment. A multidisciplinary expert panel appointed by ASCO and cochaired by Dr. Key of the University of North Carolina, Chapel Hill, used this evidence to develop the updated guideline.
The work was guided by “the ‘signals’ approach that is designed to identify only new, potentially practice-changing data – signals – that might translate into revised practice recommendations,” the authors explained.
DOAC-related updates
VTE prophylaxis. Based in part on findings from the recently published AVERT trial of apixaban in patients initiating a new course of chemotherapy and from the CASSINI trial of rivaroxaban in patients with solid tumors or lymphoma starting systemic antineoplastic therapy, the panel added both agents as thromboprophylactic options that can be offered to high-risk cancer outpatients with no significant risk factors for bleeding or drug interactions (N Engl J Med. 2019;380:711-19; N Engl J Med. 2019;380:720-8).
Low-molecular-weight heparin (LMWH) also remains an option in such patients; consideration of therapy should involve discussion with the patient about relative benefits and harms, drug costs, and “the uncertainty surrounding duration of prophylaxis in this setting,” they wrote.
Anticoagulation for VTE. Options for initial anticoagulation include LMWH, unfractionated heparin (UFH), fondaparinux, and now rivaroxaban, with the latter added based on findings from two RCTs – the SELECT-D trial and the Hokusai VTE-Cancer study – and multiple meta-analyses (J Clin Oncol. 2018;36:2017-23; N Engl J Med. 2018;378:615-24).
Long-term anticoagulation can involve treatment with LMWH, edoxaban, or rivaroxaban for at least 6 months, all of which have improved efficacy versus vitamin K agonists (VKAs), the panel noted. However, VKAs may be used if LMWH and DOACs are not accessible.
Importantly, the literature indicates an increased risk of major bleeding with DOACs, particularly in patients with gastrointestinal malignancies and potentially in those with genitourinary malignancies. “Caution with DOACs is also warranted in other settings with high risk for mucosal bleeding,” the panel wrote.
Additional updates
CNS metastases. The anticoagulation recommendations were also updated to include patients with metastatic central nervous system malignancies (those with primary CNS malignancies were included previously). Both those with primary and metastatic CNS malignancy should be offered anticoagulation for established VTE as described for patients with other types of cancer. However, the panel stressed that “uncertainties remain about choice of agents and selection of patients most likely to benefit.”
“Patients with intracranial tumors are at increased risk for thrombotic complications and intracranial hemorrhage (ICH), but the presence of a stable or active primary intracranial malignancy or brain metastases is not an absolute contraindication to anticoagulation,” they wrote.
Limited evidence suggests that therapeutic anticoagulation does not increase ICH risk in patients with brain metastases, but it may increase risk in those with primary brain tumors, the panel added.
Additionally, preliminary data from a retrospective cohort of patients with metastatic brain disease and venous thrombosis suggest that DOACs may be associated with a lower risk of ICH than is LMWH in this population.
Long-term postoperative LMWH. Extended prophylaxis with LMWH for up to 4 weeks is recommended after major open or laparoscopic abdominal or pelvic surgery in cancer patients with high-risk features, such as restricted mobility, obesity, history of VTE, or with additional risk factors. Lower-risk surgical settings require case-by-case decision making about appropriate thromboprophylaxis duration, according to the update.
A 2014 RCT looking at thromboprophylaxis duration in 225 patients undergoing laparoscopic surgery for colorectal cancer prompted the addition of laparoscopic surgery to this recommendation. In that study, VTE occurred by 4 weeks in nearly 10% of patients receiving 1 week of prophylaxis and in no patients in the 4-week arm. Major bleeding occurred in one versus zero patients in the thromboprophylaxis arms, respectively (Ann Surg. April 2014;259[4]:665-9).
Reaffirmed recommendations
Based on the latest available data, the panel reaffirmed that most hospitalized patients with cancer and an acute medical condition require thromboprophylaxis for the duration of their hospitalization and that thromboprophylaxis should not be routinely recommended for all outpatients with cancer.
The panel also reaffirmed the need for thromboprophylaxis starting preoperatively and continuing for at least 7-10 days in patients undergoing major cancer surgery, the need for periodic assessment of VTE risk in cancer patients, and the importance of patient education about the signs and symptoms of VTE.
Perspective and future directions
In an interview, David H. Henry, MD, said he was pleased to see ASCO incorporate the latest DOAC data into the VTE guideline.
The AVERT and CASSINI studies, in particular, highlight the value of using the Khorana Risk Score, which considers cancer type, blood counts, and body mass index to predict the risk of thrombosis in cancer patients and to guide decisions regarding prophylaxis, said Dr. Henry, vice chair of the department of medicine and clinical professor of medicine at Penn Medicine’s Abramson Cancer Center, Philadelphia.
The DOACs also represent “a nice new development in the treatment setting,” he said, adding that it’s been long known – since the 2003 CLOT trial – that cancer patients with VTE had much lower recurrence rates with LMWH versus warfarin (Coumadin).
“Now fast forward to the modern era ... and DOACs now appear to be a good idea,” he said.
Dr. Henry also addressed the recommendation for expanded postoperative LMWH use.
“That I found interesting; I’m not sure what took them so long,” he said, explaining that National Comprehensive Cancer Network and European Society of Medical Oncology recommendations have long stated that, for patients with abdominal cancers who undergo abdominopelvic surgery, DVT prophylaxis should continue for 4 weeks.
Dr. Henry said that a survey at his center showed that those recommendations were “very poorly followed,” with surgeons giving 4 weeks of prophylaxis in just 5% of cases.
“The good news from our survey was that not many people had a VTE, despite not many people following the recommendations, but I must say I think our surgeons are catching on,” he said.
Overall, the updated guideline highlights the importance of considering the “cancer variable” when it comes to VTE prevention and treatment.
“We’ve known forever that when we diagnose a DVT or PE in the outpatient setting – and this is independent of cancer – that you should treat it. Add the cancer variable and we now know that we should worry and try to prevent the VTE in certain high-risk patients, and there are some drugs to do it with,” he said, adding that “you should worry about the person you’ve just provoked [with surgery] as well.”
An important question not addressed in the guideline update is the indefinite use of DOACs in cancer patients with ongoing risk, he said.
“When we see DVT or PE, we usually treat for 3 months – that’s the industry standard – and at the end of 3 months ... you do a time out and you say to yourself, ‘Was this person provoked?’ ” he said.
For example, if they took a long flight or if pregnancy was a factor, treatment can usually be safely stopped. However, in a cancer patient who still has cancer, the provocation continues, and the patient may require indefinite treatment.
Remaining questions include how to define “indefinite” and whether, and which of, these drugs can be used indefinitely in such patients, Dr. Henry said.
Dr. Key reported receiving honoraria from Novo Nordisk, research funding to his institution from Baxter Biosciences, Grifols, and Pfizer, and serving as a consultant or advisor for Genentech, Roche, Uniqure, Seattle Genetics, and Shire Human Genetic Therapies. Numerous disclosures were also reported by other expert panel members.
Analysis finds no mortality reductions with osteoporosis drugs
A paper published in JAMA Internal Medicine analyzed data from 38 randomized, placebo-controlled clinical trials of osteoporosis drugs involving a total of 101,642 participants.
“Studies have estimated that less than 30% of the mortality following hip and vertebral fractures may be attributed to the fracture itself and, therefore, potentially avoidable by preventing the fracture,” wrote Steven R. Cummings, MD, of the San Francisco Coordinating Center at the University of California, San Francisco, and colleagues. “Some studies have suggested that treatments for osteoporosis may directly reduce overall mortality rates in addition to decreasing fracture risk.”
Despite including a diversity of drugs including bisphosphonates, denosumab (Prolia), selective estrogen receptor modulators, parathyroid hormone analogues, odanacatib, and romosozumab (Evenity), the analysis found no significant association between receiving a drug treatment for osteoporosis and overall mortality.
The researchers did a separate analysis of the 21 clinical trials of bisphosphonate treatments, again finding no impact of treatment on overall mortality. Similarly, analysis of six zoledronate clinical trials found no statistically significant impact on mortality, although the authors noted some heterogeneity in the results: two large trials found 28% and 35% reductions in mortality, but these effects were not seen in the other zoledronate trials.
An analysis limited to nitrogen-containing bisphosphonates (alendronate, risedronate, ibandronate, and zoledronate) showed a nonsignificant trend toward lower overall mortality, although this became even less statistically significant when trials of zoledronate were excluded.
“More data from placebo-controlled clinical trials of zoledronate therapy and mortality rates are needed to resolve whether treatment with zoledronate is associated with reduced mortality in addition to decreased fracture risk,” the authors wrote.
They added that the 25%-60% mortality reductions seen in previous observational studies were too large to be attributable solely to reductions in the risk of fracture and were perhaps the result of unmeasured confounders that could have contributed to lower mortality.
“The apparent reduction in mortality may be an example of the ‘healthy adherer effect,’ which has been documented in studies reporting that participants who adhered to placebo treatment in clinical trials had lower mortality,” they wrote, citing data from the Women’s Health Study that showed 36% lower mortality in those who were at least 80% adherent to placebo.
“This effect is particularly applicable to observational studies of treatments for osteoporosis because only an estimated half of women taking oral drugs for the treatment of osteoporosis continued the regimen for 1 year, and even fewer continued longer,” they added.
They did note one limitation of their analysis was that it did not include a large clinical trial of the antiresorptive drug odanacatib, which was only available in abstract form at the time.
One author reported receiving grants and personal fees from a pharmaceutical company during the conduct of the study, and another reported receiving grants and personal fees outside the submitted work. No other conflicts of interest were reported.
SOURCE: Cummings SR et al. JAMA Intern Med. 2019 Aug 19. doi: 10.1001/jamainternmed.2019.2779.
FROM JAMA INTERNAL MEDICINE
Differential monocytic HLA-DR expression prognostically useful in PICU
LJUBLJANA, SLOVENIA – During their first 4 days in the pediatric ICU, critically ill children have significantly reduced human leukocyte antigen (HLA)–DR expression within all three major subsets of monocytes. The reductions are seen regardless of whether the children were admitted for sepsis, trauma, or after surgery, Navin Boeddha, MD, PhD, reported in his PIDJ Award Lecture at the annual meeting of the European Society for Paediatric Infectious Diseases.
The PIDJ Award is given annually by the editors of the Pediatric Infectious Disease Journal in recognition of what they deem the most important study published in the journal during the prior year. This one stood out because it identified promising potential laboratory markers that have been sought as a prerequisite to developing immunostimulatory therapies aimed at improving outcomes in severely immunosuppressed children.
Researchers are particularly eager to explore this investigative treatment strategy because the mortality and long-term morbidity of pediatric sepsis, in particular, remain unacceptably high. The hope now is that HLA-DR expression on monocyte subsets will be helpful in directing granulocyte-macrophage colony-stimulating factor, interferon-gamma, and other immunostimulatory therapies to the pediatric ICU patients with the most favorable benefit/risk ratio, according to Dr. Boeddha of Sophia Children’s Hospital and Erasmus University, Rotterdam, the Netherlands.
He reported on 37 critically ill children admitted to a pediatric ICU – 12 for sepsis, 11 post surgery, 10 for trauma, and 4 for other reasons – as well as 37 healthy controls. HLA-DR expression on monocyte subsets was measured by flow cytometry upon admission and again on each of the following 3 days.
The impetus for this study is that severe infection, major surgery, and severe trauma are often associated with immunosuppression. And while prior work in septic adults has concluded that decreased monocytic HLA-DR expression is a marker for immunosuppression – and that the lower the level of such expression, the greater the risk of nosocomial infection and death – this phenomenon hasn’t been well studied in critically ill children, he explained.
Dr. Boeddha and coinvestigators found that monocytic HLA-DR expression, which plays a major role in presenting antigens to T cells, decreased over time during the critically ill children’s first 4 days in the pediatric ICU. Moreover, it was lower than in controls at all four time points. This was true both for the percentage of HLA-DR–expressing monocytes of all subsets, as well as for HLA-DR mean fluorescence intensity.
In the critically ill study population as a whole, the percentage of classical monocytes – that is, CD14++ CD16– monocytes – was significantly greater at admission (95%) than in healthy controls (87%), while the percentage of nonclassical CD14+/-CD16++ monocytes was markedly lower (2% vs. 9% in controls).
The biggest discrepancy in monocyte subset distribution was seen in patients admitted for sepsis. Their percentage of classical monocytes was lower than in controls (82% vs. 87%); however, their proportion of intermediate monocytes (CD14++ CD16+) upon admission was twice that of controls, and it climbed further to 14% on day 2.
Among the key findings in the Rotterdam study: 13 of 37 critically ill patients experienced at least one nosocomial infection while in the pediatric ICU. Their day 2 percentage of HLA-DR–expressing classical monocytes was 42%, strikingly lower than the 78% figure in patients who didn’t develop an infection. Also, the 6 patients who died had only a 33% rate of HLA-DR–expressing classical monocytes on day 3 after pediatric ICU admission versus a 63% rate in survivors of their critical illness.
Thus, low HLA-DR expression on classical monocytes early during the course of a pediatric ICU stay may be the sought-after biomarker that identifies a particularly high-risk subgroup of critically ill children in whom immunostimulatory therapies should be studied. However, future confirmatory studies should monitor monocytic HLA-DR expression in a larger critically ill patient population for a longer period in order to establish the time to recovery of low expression and its impact on long-term complications, the physician said.
Dr. Boeddha reported having no financial conflicts regarding the award-winning study, supported by the European Union and Erasmus University.
SOURCE: Boeddha NP et al. Pediatr Infect Dis J. 2018 Oct;37(10):1034-40.
REPORTING FROM ESPID 2019
Considerations for Psoriasis in Pregnancy
To be, or not to be ... on backup?
A staffing backup system is essential
It was late 2011. We were a practice of around 20 physicians, just starting to integrate advanced practice providers. Our average daily census was about 100 patients, and slightly more than 50% of our services were resident services.
My boss, colleague, friend, and mentor – Charles “Chuck” Sargent, MD – and I were on service together early one Saturday morning when Chuck got a phone call that one of our colleagues was ill. With just 10 physicians working and 10 off, it was an ordeal for Chuck to call all 10 colleagues. Unlike most times, no one could come in to moonlight that day. In the end, Chuck and I took care of our colleague’s patients.
Yes, it was an exhausting few days, but illness and family needs do not arrive announced. Now, close to a decade later, we are a practice of 70 physicians and 16 advanced practice providers, our average daily census is about 270 patients, and we have two backup physicians every day – known as Jeopardy-1 and Jeopardy-2. Paternity leave, maternity leave, minor illness, minor trauma, surgery, and family needs are common for our practice. We considered it a good year when we utilized Jeopardy-1 and Jeopardy-2 for 10% and 1% of days, respectively; this past year, with many needs, we employed Jeopardy-1 and Jeopardy-2 for 25% and 10%, respectively.
A staffing backup system is a necessary tool for almost every practice. Not having a formal backup system doesn’t mean you don’t need one or you don’t have one – it is just called “no formal backup system.” The Society of Hospital Medicine’s State of Hospital Medicine Reports (SoHM) have been providing data about staffing backup systems every other year. Backup systems come in three flavors. The first system is no formal backup, which means the leaders of the program scramble for coverage every time there is a need. The second is a voluntary backup system in which clinicians volunteer to be on a backup schedule, and the third is a mandatory system in which all or most clinicians are required to be on the backup schedule.
The cumulative data reported in the 2014, 2016, and 2018 SoHM for hospital medicine groups serving adults only, children only, and both adults and children (weighted for number of groups reporting) suggest that 48.3% of respondent practices had no formal backup system, 31.7% had a voluntary system, and 20% had a mandatory backup system.
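The cumulative figure described above is a weighted average across report years, not a simple mean. As a rough illustration of the pooling arithmetic – with entirely hypothetical per-year counts and percentages, since the year-by-year survey numbers are not given here – the calculation looks like this:

```python
# Hypothetical illustration of pooling a survey percentage across report
# years, weighting each year's figure by the number of groups responding.
# All counts and percentages below are made up for this sketch.
years = {
    2014: {"groups": 100, "no_formal_pct": 60.0},
    2016: {"groups": 120, "no_formal_pct": 50.0},
    2018: {"groups": 140, "no_formal_pct": 40.0},
}

total_groups = sum(y["groups"] for y in years.values())
pooled_pct = (
    sum(y["groups"] * y["no_formal_pct"] for y in years.values()) / total_groups
)
# Note the pooled value leans toward the years with more respondents,
# so it differs from the unweighted mean of 60/50/40 (= 50.0).
```

Weighting by respondent count prevents a small survey year from counting as much as a large one when the years are combined.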
When we look at different populations served, the trend of “no formal backup system” responses is in decline. The 2014, 2016, and 2018 SoHM reports for hospital medicine groups serving adults, children, and both adults and children, reinforce such trends. The SoHM 2018 report shows 65.6% of hospital medicine groups serving children, 41.6% of groups serving adults, and only 25% of groups serving both adults and children have “no formal backup system.” Our medicine-pediatrics colleagues seem to be leading the trend and have already deduced that, for a solid practice, a backup system is a necessity.
It is also important to look at the trend of “no formal backup system” responses by geographic area, employer type, academic status, and total number of full-time employees. As we would have predicted, the larger the group, the more likely it is to have a backup system. A similar trend was seen for academic practices, which had a higher percentage of some type of backup system year after year.
When it comes to compensation for backup work, four patterns were explored by the SoHM over the years. The most common type of arrangement was “no additional compensation for being on the backup schedule, but additional compensation was provided when called into work.” This kind of arrangement would be easiest to negotiate when the hospitalist and the employer sit across a table. There is nothing at risk for the employer when there isn’t a need, or when there is a need to fill a shift.
The least common method was “additional compensation for being on the backup schedule, but no additional compensation if called into work.” From employers’ perspectives, this is an extra expense and is not ideal for the hospitalist either. In the middle of the pack were “no additional compensation associated with the backup plan” (the second most common model), while the third most common model was “additional compensation for being on the backup schedule, as well as additional compensation if called into work.”
Once you have seen one hospital medicine practice, you have seen one hospital medicine practice. Every group has different needs, and the backup system – as well as its compensation model – has to be designed for them. Thankfully, the SoHM reports reveal the patterns and trends so that we don’t have to reinvent the wheel. For our practice, we exchanged a week of clinical service for 2 weeks a year of backup. Every time we activate our backup system, the person coming in receives extra compensation or a similar shift off. In the long run, our backup system didn’t kill us, but rather made us stronger as a group.
Dr. Chadha is interim division chief in the division of hospital medicine at the University of Kentucky HealthCare in Lexington. He actively leads efforts of recruiting, scheduling, practice analysis, and operation of the group. He is a first-time member of the SHM Practice Analysis Committee. Ms. Babb is administrative support associate in the division of hospital medicine at University of Kentucky HealthCare.
Timely Diagnosis of Lung Cancer in a Dedicated VA Referral Unit with Endobronchial Ultrasound Capability
Lung cancer is the leading cause of cancer death in the US, with 154,050 deaths in 2018.1 There have been many attempts to reduce mortality of the disease through early diagnosis with use of computed tomography (CT). The National Lung Screening Trial showed that screening high-risk populations with low-dose CT (LDCT) can reduce mortality.2 However, implementing LDCT screening in the clinical setting has proven challenging, as illustrated by the VA Lung Cancer Screening Demonstration Project (LCSDP).3 A lung cancer diagnosis typically comprises several steps that require different medical specialties; this can lead to delays. In the LCSDP, the mean time to diagnosis was 137 days.3 There are no federal standards for timeliness of lung cancer diagnosis.
The nonprofit RAND Corporation is the only American research organization that has published guidelines specifying acceptable intervals for the diagnosis and treatment of lung cancer. In Quality of Care for Oncologic Conditions and HIV, RAND Corporation researchers propose management quality indicators: lung cancer diagnosis within 2 months of an abnormal radiologic study and treatment within 6 weeks of diagnosis.4 The Swedish Lung Cancer Study5 and the Canadian Strategy for Cancer Control6 both recommended a standard of about 30 days—half the time recommended by the RAND Corporation.
Bukhari and colleagues at the Dayton US Department of Veterans Affairs (VA) Medical Center (VAMC) conducted a quality improvement study that examined lung cancer diagnosis and management.7 They found the time (SD) from abnormal chest imaging to diagnosis was 35.5 (31.6) days. Of those veterans who received a lung cancer diagnosis, 89.2% had the diagnosis made within the 60 days recommended by the RAND Corporation. Although these results surpass those of the LCSDP, they can be exceeded.
Beyond the potential emotional distress of awaiting the final diagnosis of a lung lesion, a delay in diagnosis and treatment may adversely affect outcomes. LDCT screening has been shown to reduce mortality, which implies a link between survival and time to intervention. There is no published evidence that time to diagnosis in advanced stage lung cancer affects outcome. The National Cancer Database (NCDB) contains information on about 70% of the cancers diagnosed each year in the US.8 An analysis of 4984 patients with stage IA squamous cell lung cancer undergoing lobectomy from the NCDB showed that earlier surgery was associated with an absolute decrease in 5-year mortality of 5% to 8%.9 Hence, at least in early-stage disease, reduced time from initial suspect imaging to definitive treatment may improve survival.
A system that coordinates the requisite diagnostic steps and avoids delays should provide a significant improvement in patient care. The results of such an approach utilizing nurse navigators have been previously published.10 Here, we present the results of a dedicated VA referral clinic with priority access to pulmonary consultation and procedures designed to expedite the diagnosis of potential lung cancer.
Methods
The institutional review board of the John L. McClellan Memorial Veterans Hospital (JLMMVH) in Little Rock, Arkansas, approved this study, which was performed in accordance with the Declaration of Helsinki. The requirement for informed consent was waived, and patient confidentiality was maintained throughout.
We have developed a plan of care specifically to facilitate diagnosis and treatment of the large number of veterans referred to the JLMMVH Diagnostic Clinic for abnormal results of chest imaging. The clinic has priority access to same-day imaging and subspecialty consultation services. In the clinic, medical students and residents perform evaluations and a registered nurse (RN) manager coordinates care.
A Diagnostic Clinic consult for abnormal thoracic imaging immediately triggers an e-consult to an interventional pulmonologist (Figure). The RN manager and pulmonologist perform a joint review of records/imaging prior to scheduling, and the pulmonologist triages the patient. Triage options include follow-up imaging, bronchoscopy with endobronchial ultrasound (EBUS), endoscopic ultrasound (EUS), and CT-guided biopsy.
The RN manager then schedules a clinic visit that includes a medical evaluation by clinic staff and any indicated procedures on the same day. The interventional pulmonologist performs EBUS, EUS with the convex curvilinear bronchoscope, or both combined as indicated for diagnosis and staging. All procedures are performed in the JLMMVH bronchoscopy suite with standard conscious sedation using midazolam and fentanyl. Any other relevant procedures, such as pleural tap, also are performed at time of procedure. The pulmonologist and an attending pathologist interpret biopsies obtained in the bronchoscopy suite.
We performed a retrospective chart review of patients diagnosed with primary lung cancer through referral to the JLMMVH Diagnostic Clinic. The primary outcome was time from initial suspect chest imaging to cancer diagnosis. The study population consisted of patients referred for abnormal thoracic imaging between January 1, 2013 and December 31, 2016 and subsequently diagnosed with a primary lung cancer.
Subjects were excluded if (1) the patient was referred from outside our care network and a delay of > 10 days occurred between initial lesion imaging and referral; (2) the patient did not show up for appointments or chose to delay evaluation following referral; (3) biopsy demonstrated a nonlung primary cancer; and (4) serious intercurrent illness interrupted the diagnostic plan. In some cases, the radiologist or consulting pulmonologist had judged the lung lesion too small for immediate biopsy and recommended repeat imaging at a later date.
Patients were included in the study if the follow-up imaging led to a lung cancer diagnosis. However, because the interval between the initial imaging and the follow-up imaging in these patients did not represent a systems delay problem, the date of the scheduled follow-up abnormal imaging, which resulted in initiation of a potential cancer evaluation, served as the index suspect imaging date for this study.
Patient electronic medical records were reviewed and the following data were abstracted: date of the abnormal imaging that led to referral and time from abnormal chest X-ray to chest CT scan if applicable; date of referral and date of clinic visit; date of biopsy; date of lung cancer diagnosis; method of obtaining diagnostic specimen; lung cancer type and stage; type and date of treatment initiation or decision for supportive care only; and decision to seek further evaluation or care outside of our system.
All patients diagnosed with lung cancer during the study period were reviewed for inclusion; hence, no sample-size estimate was required. All outcomes were assessed in calendar days. The primary outcome was the time from the index suspect chest imaging study to the date of diagnosis of lung cancer. Prior to the initiation of our study, we chose the more stringent 30-day recommendation of the Canadian6 and Swedish5 studies as the comparator for our primary outcome, although data with respect to the 60-day RAND Corporation guidelines also are reported.4
Statistical Methods
The mean time to lung cancer diagnosis in our cohort was compared with this 30-day standard using a 2-sided Mann–Whitney U test. Normality of data distribution was assessed using the Kolmogorov–Smirnov test. A P value of .05 was used as the threshold for statistical significance. Statistical calculations were performed using R statistical software version 3.2.4. Secondary outcomes consisted of time from diagnosis to treatment; proportion of subjects diagnosed within 60 days; time from initial clinic visit to biopsy; and time from biopsy to diagnosis.
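To make the rank-based comparison concrete, here is a minimal sketch of the Mann–Whitney U statistic in pure Python. The diagnosis times are hypothetical (not the study's data), and this is only the test statistic, not the full test the authors ran in R; in practice one would use R's wilcox.test or scipy.stats.mannwhitneyu, which also compute the P value.

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U statistic (the smaller of U1, U2), tied ranks averaged."""
    # Pool both samples, tagging each value with which sample it came from.
    pooled = sorted((value, which) for which, sample in ((0, x), (1, y))
                    for value in sample)
    n = len(pooled)
    rank_sum_x = 0.0
    i = 0
    while i < n:
        # Find the run of tied values starting at position i.
        j = i
        while j + 1 < n and pooled[j + 1][0] == pooled[i][0]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # average of 1-based ranks i+1 .. j+1
        rank_sum_x += avg_rank * sum(1 for k in range(i, j + 1)
                                     if pooled[k][1] == 0)
        i = j + 1
    u1 = rank_sum_x - len(x) * (len(x) + 1) / 2
    u2 = len(x) * len(y) - u1
    return min(u1, u2)

# Hypothetical days-to-diagnosis for two groups of patients.
times_clinic = [10, 14, 18, 22, 25, 30]
times_comparator = [28, 31, 35, 40, 44, 50]
u = mann_whitney_u(times_clinic, times_comparator)
```

A small U relative to len(x) * len(y) indicates that one sample's values tend to rank below the other's, which is the intuition behind using this test on skewed time-to-event data rather than a t test.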
Results
Overall, 222 patients were diagnosed with a malignant lung lesion, of which 63 were excluded from analysis: 22 cancelled or did not appear for appointments, declined further evaluation, or completed evaluation outside of our network; 13 had the diagnosis made prior to Diagnostic Clinic visit; 13 proved to have a nonlung primary tumor presenting in the lung or mediastinal nodes; 12 were delayed > 10 days in referral from an outside network; and 3 had an intervening serious acute medical problem forcing delay in the diagnostic process.
Of the 159 included subjects, 154 (96.9%) were male, and the mean (SD) age was 67.6 (8.1) years. For 76 subjects, the abnormal chest X-ray and subsequent chest CT scan were performed the same day or the lung lesion had initially been noted on a CT scan. For 54 subjects, there was a delay of ≥ 1 week in obtaining a chest CT scan. The mean (SD) time from placement of the Diagnostic Clinic consultation by the primary care provider (PCP) or other provider to the initial Diagnostic Clinic visit was 6.3 (4.4) days. The mean (SD) time from suspect imaging to diagnosis (primary outcome) was 22.6 (16.6) days.
The distribution of this outcome was nonnormal (Kolmogorov-Smirnov test P < .01). When compared with the standard of 30 days, the primary outcome of 22.6 days was significantly shorter (2-sided Mann–Whitney U test P < .01). Three-quarters (76.1%) of subjects were diagnosed within 30 days and 95.0% of subjects were diagnosed within 60 days of the initial imaging. For the 8 subjects diagnosed after 60 days, contributing factors included PCP delay in Diagnostic Clinic consultation, initial negative biopsy, delay in performance of chest CT scan prior to consultation, and outsourcing of positron emission tomography (PET) scans.
Overall, 57 (35.8%) of the subjects underwent biopsy on the day of their Diagnostic Clinic visit: 14 underwent CT-guided biopsy and 43 underwent EBUS/EUS. Within 2 days of the initial visit 106 subjects (66.7%) had undergone biopsy. The mean (SD) time from initial Diagnostic Clinic visit to biopsy was 6.3 (9.5) days. The mean (SD) interval was 1.8 (3.0) days for EBUS/EUS and 11.3 (11.7) days for CT-guided biopsy. The mean (SD) interval from biopsy to diagnosis was 3.2 (6.2) days with 64 cases (40.3%) diagnosed the day of biopsy.
Excluding subjects whose treatment was delayed by patient choice or intercurrent illness, and those who left the VA system to seek treatment elsewhere (n = 21), 24 opted for palliative care, 5 died before treatment could be initiated, and 109 underwent treatment for their tumors (Table). The mean times (SD) from diagnosis to treatment were: chemotherapy alone 34.7 (25.3) days; chemoradiation 37.0 (22.8) days; surgery 44.3 (24.4) days; radiation therapy alone 47.9 (26.0) days. With respect to the RAND Corporation recommended diagnosis to treatment time, 60.9% of chemotherapy alone, 61.5% of chemoradiation, 66.7% of surgery, and 45.0% of radiation therapy alone treatments were initiated within the 6-week window.
Discussion
This retrospective case study demonstrates the effectiveness of a dedicated diagnostic clinic with priority EBUS/EUS access in diagnosing lung cancer within the VA system. Although there is no universally accepted quality standard for comparison, the RAND Corporation recommendation of 60 days from abnormal imaging to diagnosis and the Dayton VAMC published mean of 35.5 days are guideposts; however, the results from the Dayton VAMC may have been affected negatively by some subjects undergoing serial imaging for asymptomatic nodules. We chose a more stringent standard of 30 days as recommended by Swedish and Canadian task forces.
When diagnosing lung cancer, the overriding purpose of the Diagnostic Clinic is to minimize system delays. The method is to make the task as simple as possible for the PCP or other provider who identifies a lung nodule or mass: a single consultation request to the Diagnostic Clinic. Once this consultation is placed, the clinic RN manager oversees all further steps required for diagnosis and referral for treatment. The key factor in achieving a mean diagnosis time of 22.6 days is the cooperation between the RN manager and the interventional pulmonologist. When a consultation is received, the RN manager and pulmonologist review the data together and schedule the initial clinic visit; the goal is same-day biopsy, which is achieved in more than one-third of cases. Not every chest image suspicious for lung cancer is ordered by the patient’s PCP. For this reason, a Diagnostic Clinic consultation is available to all health care providers in our system. Many patients reach the clinic after the discovery of a suspect chest X-ray during an emergency department visit, a regularly scheduled subspecialty appointment, or a preoperative evaluation.
The mean time from initial visit to biopsy was 1.8 days for EBUS/EUS compared with an interval of 11.3 days for CT-guided biopsy. This difference reflects the pulmonologist’s involvement in initial scheduling of Diagnostic Clinic patients. The ability of the pulmonologist to provide an accurate assessment of sample adequacy and a preliminary diagnosis at bedside, with concurrent confirmation by a staff pathologist, permitted the Diagnostic Clinic to inform 40.3% of patients of the finding of malignancy on the day of biopsy. A published comparison of the onsite review of biopsy material showed our pulmonologist and staff pathologists to be equally accurate in their interpretations.11
Sources of Delays
While this study documents the shortest intervals from suspect imaging to diagnosis reported to date, it also identifies sources of system delay in diagnosing lung cancer that JLMMVH could further optimize. The first is the time from initial abnormal chest X-ray imaging to performance of the chest CT scan. On occasion, the index lung lesion is identified unexpectedly on an outpatient or emergency department chest CT scan. With greater use of LDCT lung cancer screening, the initial detection of suspect lesions by CT scanning will increase in the future. However, the PCP most often investigates a patient complaint with a standard chest X-ray that reveals a suspect nodule or mass. When ordered by the PCP as an outpatient test, scheduling of the follow-up chest CT scan is not given priority. More than a third of subjects experienced a delay ≥ 1 week in obtaining a chest CT scan ordered by the PCP; for 29 subjects the delay was ≥ 3 weeks. At JLMMVH, the Diagnostic Clinic is given priority in scheduling CT scans. Hence, for suspect lung lesions, the chest CT scan, if not already obtained, is generally performed on the morning of the clinic visit. Educating the PCP to refer the patient immediately to the Diagnostic Clinic rather than waiting to obtain an outpatient chest CT scan may remove this source of unnecessary delay.
Scheduling a CT-guided fine needle aspiration of a lung lesion is another source of system delay. When the chest CT scan is available at the time of the Diagnostic Clinic referral, the clinic visit is scheduled for the earliest day a required CT-guided biopsy can be performed. However, the mean time of 11.3 days from initial Diagnostic Clinic visit to CT-guided biopsy is indicative of the backlog faced by the interventional radiologists.
Although infrequent, PET scans that are required before biopsy can lead to substantial delays. PET scans are performed at our university affiliate, and the joint VA-university lung tumor board sometimes generates requests for such scans prior to tissue diagnosis, yet another source of delay.
The time from referral receipt to the Diagnostic Clinic visit averaged 6.3 days. This delay usually was determined by the availability of the CT-guided biopsy or the dedicated interventional pulmonologist. Although other interventional pulmonologists at JLMMVH may perform the requisite diagnostic procedures, they are not always available for immediate review of imaging studies of referred patients nor can their schedules flexibly accommodate the number of patients seen in our clinic for evaluation.
Lung Cancer Diagnosis
Prompt diagnosis in the setting of a worrisome chest X-ray may help decrease patient anxiety, but does the clinic improve lung cancer treatment outcomes? Such improvement has been demonstrated only in stage IA squamous cell lung cancer.9 Of our study population, 37.7% had squamous cell carcinoma, and 85.5% had non-small cell lung cancer. Of those with non-small cell lung cancer, 28.9% had a clinical stage I tumor. Stage I squamous cell carcinoma, the type of tumor most likely to benefit from early diagnosis and treatment, was diagnosed in 11.3% of patients. With the increased application of LDCT screening, the proportion of veterans identified with early stage lung cancer may rise. The Providence VAMC in Rhode Island reported its results from instituting LDCT screening.12 Prior to screening, 28% of patients diagnosed with lung cancer had a stage I tumor. Following the introduction of LDCT screening, 49% diagnosed by LDCT screening had a stage I tumor. Nearly a third of their patients diagnosed with lung cancer through LDCT screening had squamous cell tumor histology. Thus, we can anticipate an increasing number of veterans with early stage lung cancer who would benefit from timely diagnosis.
The JLMMVH is a referral center for the entire state of Arkansas. Quite a few of its referred patients come from a long distance, which may require overnight housing and other related travel expenses. Apart from any potential outcome benefit, the efficiencies of the system described herein include the minimization of extra trips, an inconvenience and cost to both patient and JLMMVH.
Although the primary task of the clinic is diagnosis, we also seek to facilitate timely treatment. Our lack of an on-site PET scanner and radiation therapy, resources present on-site at the Dayton VAMC, contribute to longer therapy wait times. The shortest mean wait time at JLMMVH is for chemotherapy alone (34.7 days), in part because the JLMMVH oncologists, performing initial consultations 2 to 3 times weekly in the Diagnostic Clinic, are more readily available than are our thoracic surgeons or radiation therapists. Yet overall, JLMMVH patients often face delay from the time of lung cancer diagnosis to initiation of treatment.
The Connecticut Veterans Affairs Healthcare System has published the results of changes in lung cancer management associated with a nurse navigator system.10 Prior to creating the position of cancer care coordinator, filled by an advanced practice RNs, the mean time from clinical suspicion of lung cancer to treatment was 117 days. After 4 years of such care navigation, this waiting time had decreased to 52.4 days. Associated with this dramatic improvement in overall waiting time were decreases in the turnaround time required for performance of CT and PET scans. With respect to this big picture view of lung cancer care, our Diagnostic Clinic serves as a model for the initial step of diagnosis. Coordination and streamlining of the various steps from diagnosis to definitive therapy shall require a more system-wide effort involving all the key players in cancer care.
Conclusion
We have developed a care pathway based in a dedicated diagnostic clinic and have been able to document the shortest interval from abnormality to diagnosis of lung cancer reported in the literature to date. Efficient functioning of this clinic is dependent upon the close cooperation between a full-time RN clinic manager and an interventional pulmonologist experienced in lung cancer management and able to interpret cytologic samples at the time of biopsy. Shortening the delay between diagnosis and definitive therapy remains a challenge and may benefit from the oncology nurse navigator model previously described within the VA system. 10
1. American Cancer Society. Cancer Facts & Figures. https://www.cancer.org/content/dam/cancer-org/research/cancer-facts-and-statistics/annual-cancer-facts-and-figures/2018/cancer-facts-and-figures-2018.pdf. Accessed July 13, 2019.
2. National Lung Screening Trial Research Team, Aberle DR, Adams AM, et al. Reduced lung-cancer mortality with low-dose computed tomographic screening. N Engl J Med. 2011;365(5):395-409.
3. Kinsinger LS, Anderson C, Kim J, et al. Implementation of lung cancer screening in the Veterans Health Administration. JAMA Intern Med. 2017;177(3):399-406.
4. Asch SM, Kerr EA, Hamilton EG, Reifel JL, McGlynn EA, eds. Quality of Care for Oncologic Conditions and HIV: A Review of the Literature and Quality Indicators. Santa Monica, CA: RAND Corporation; 2000.
5. Hillerdal G. [Recommendations from the Swedish Lung Cancer Study Group: shorter waiting times are demanded for quality in the diagnostic work-up of lung cancer]. Swedish Med J. 1999;96:4691.
6. Simunovic M, Gagliardi A, McCready D, Coates A, Levine M, DePetrillo D. A snapshot of waiting times for cancer surgery provided by surgeons affiliated with regional cancer centres in Ontario. CMAJ. 2001;165(4):421-425. [Canadian Strategy for Cancer Control]
7. Bukhari A, Kumar G, Rajsheker R, Markert R. Timeliness of lung cancer diagnosis and treatment. Fed Pract. 2017;34(suppl 1):24S-29S.
8. Bilimoria KY, Ko CY, Tomlinson JS, et al. Wait times for cancer surgery in the United States: trends and predictors of delays. Ann Surg. 2011;253(4):779-785.
9. Yang CJ, Wang H, Kumar A, et al. Impact of timing of lobectomy on survival for clinical stage IA lung squamous cell carcinoma. Chest. 2017;152(6):1239-1250.
10. Hunnibell LS, Rose MG, Connery DM, et al. Using nurse navigation to improve timeliness of lung cancer care at a veterans hospital. Clin J Oncol Nurs. 2012;16(1):29-36.
11. Meena N, Jeffus S, Massoll N, et al. Rapid onsite evaluation: a comparison of cytopathologist and pulmonologist performance. Cancer Cytopathol. 2016;124(4):279-284.
12. Okereke IC, Bates MF, Jankowich MD, et al. Effects of implementation of lung cancer screening at one Veterans Affairs Medical Center. Chest. 2016;150(5):1023-1029.
Lung cancer is the leading cause of cancer death in the US, with 154,050 deaths in 2018.1 There have been many attempts to reduce mortality from the disease through early diagnosis with computed tomography (CT). The National Lung Screening Trial showed that screening high-risk populations with low-dose CT (LDCT) can reduce mortality.2 However, implementing LDCT screening in the clinical setting has proven challenging, as illustrated by the VA Lung Cancer Screening Demonstration Project (LCSDP).3 A lung cancer diagnosis typically comprises several steps that involve different medical specialties, and handoffs between them can lead to delays. In the LCSDP, the mean time to diagnosis was 137 days.3 There are no federal standards for timeliness of lung cancer diagnosis.
The nonprofit RAND Corporation is the only American research organization that has published guidelines specifying acceptable intervals for the diagnosis and treatment of lung cancer. In Quality of Care for Oncologic Conditions and HIV, RAND Corporation researchers propose management quality indicators: lung cancer diagnosis within 2 months of an abnormal radiologic study and treatment within 6 weeks of diagnosis.4 The Swedish Lung Cancer Study5 and the Canadian Strategy for Cancer Control6 both recommended a standard of about 30 days—half the time recommended by the RAND Corporation.
Bukhari and colleagues at the Dayton US Department of Veterans Affairs (VA) Medical Center (VAMC) conducted a quality improvement study that examined lung cancer diagnosis and management.7 They found the mean (SD) time from abnormal chest imaging to diagnosis was 35.5 (31.6) days. Of those veterans who received a lung cancer diagnosis, 89.2% had the diagnosis made within the 60 days recommended by the RAND Corporation. Although these results surpass those of the LCSDP, there is room for further improvement.
Beyond the potential emotional distress of awaiting the final diagnosis of a lung lesion, a delay in diagnosis and treatment may adversely affect outcomes. LDCT screening has been shown to reduce mortality, which implies a link between survival and time to intervention. There is no published evidence that time to diagnosis in advanced-stage lung cancer affects outcome. The National Cancer Database (NCDB) contains information on about 70% of the cancers diagnosed each year in the US.8 An NCDB analysis of 4984 patients with stage IA squamous cell lung cancer undergoing lobectomy showed that earlier surgery was associated with an absolute decrease in 5-year mortality of 5% to 8%.9 Hence, at least in early-stage disease, reduced time from initial suspect imaging to definitive treatment may improve survival.
A system that coordinates the requisite diagnostic steps and avoids delays should provide a significant improvement in patient care. The results of one such approach, which used nurse navigators, have been published previously.10 Here, we present the results of a dedicated VA referral clinic with priority access to pulmonary consultation and procedures designed to expedite the diagnosis of potential lung cancer.
Methods
The institutional review board of the John L. McClellan Memorial Veterans Hospital (JLMMVH) in Little Rock, Arkansas, approved this study, which was performed in accordance with the Declaration of Helsinki. The requirement for informed consent was waived, and patient confidentiality was maintained throughout.
We have developed a plan of care specifically to facilitate diagnosis and treatment of the large number of veterans referred to the JLMMVH Diagnostic Clinic for abnormal results of chest imaging. The clinic has priority access to same-day imaging and subspecialty consultation services. In the clinic, medical students and residents perform evaluations and a registered nurse (RN) manager coordinates care.
A Diagnostic Clinic consult for abnormal thoracic imaging immediately triggers an e-consult to an interventional pulmonologist (Figure). The RN manager and pulmonologist perform a joint review of records/imaging prior to scheduling, and the pulmonologist triages the patient. Triage options include follow-up imaging, bronchoscopy with endobronchial ultrasound (EBUS), endoscopic ultrasound (EUS), and CT-guided biopsy.
The RN manager then schedules a clinic visit that includes a medical evaluation by clinic staff and any indicated procedures on the same day. The interventional pulmonologist performs EBUS, EUS with the convex curvilinear bronchoscope, or both combined as indicated for diagnosis and staging. All procedures are performed in the JLMMVH bronchoscopy suite with standard conscious sedation using midazolam and fentanyl. Any other relevant procedures, such as pleural tap, also are performed at time of procedure. The pulmonologist and an attending pathologist interpret biopsies obtained in the bronchoscopy suite.
We performed a retrospective chart review of patients diagnosed with primary lung cancer through referral to the JLMMVH Diagnostic Clinic. The primary outcome was time from initial suspect chest imaging to cancer diagnosis. The study population consisted of patients referred for abnormal thoracic imaging between January 1, 2013 and December 31, 2016 and subsequently diagnosed with a primary lung cancer.
Subjects were excluded if (1) the patient was referred from outside our care network and a delay of > 10 days occurred between initial lesion imaging and referral; (2) the patient did not show up for appointments or chose to delay evaluation following referral; (3) biopsy demonstrated a nonlung primary cancer; or (4) serious intercurrent illness interrupted the diagnostic plan. In some cases, the radiologist or consulting pulmonologist judged the lung lesion too small for immediate biopsy and recommended repeat imaging at a later date.
Patients were included in the study if the follow-up imaging led to a lung cancer diagnosis. However, because the interval between the initial and follow-up imaging in these patients did not represent a systems delay, the date of the scheduled follow-up abnormal imaging that initiated the potential cancer evaluation served as the index suspect imaging date for this study.
Patient electronic medical records were reviewed and the following data were abstracted: date of the abnormal imaging that led to referral and time from abnormal chest X-ray to chest CT scan if applicable; date of referral and date of clinic visit; date of biopsy; date of lung cancer diagnosis; method of obtaining diagnostic specimen; lung cancer type and stage; type and date of treatment initiation or decision for supportive care only; and decision to seek further evaluation or care outside of our system.
All patients diagnosed with lung cancer during the study period were reviewed for inclusion; hence, no sample-size estimate was required. All outcomes were assessed in calendar days. The primary outcome was the time from the index suspect chest imaging study to the date of diagnosis of lung cancer. Prior to the initiation of our study, we chose the more stringent 30-day recommendation of the Canadian6 and Swedish5 studies as the comparator for our primary outcome, although data with respect to the 60-day RAND Corporation guidelines also are reported.4
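The outcome definition above is plain date arithmetic: each interval is the number of whole calendar days between two chart dates. A minimal sketch (the dates below are invented for illustration, not taken from the study data):

```python
from datetime import date

def interval_days(start: date, end: date) -> int:
    """Calendar days from index suspect imaging to a later milestone."""
    return (end - start).days

# Hypothetical example dates
index_imaging = date(2015, 3, 2)   # index suspect chest imaging
diagnosis = date(2015, 3, 24)      # date of tissue diagnosis

print(interval_days(index_imaging, diagnosis))  # 22 days, within the 30-day comparator
```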
Statistical Methods
The mean time to lung cancer diagnosis in our cohort was compared with this 30-day standard using a 2-sided Mann–Whitney U test. Normality of the data distribution was assessed with the Kolmogorov–Smirnov test. Statistical significance was set at P < .05. Calculations were performed using R statistical software version 3.2.4. Secondary outcomes consisted of time from diagnosis to treatment; proportion of subjects diagnosed within 60 days; time from initial clinic visit to biopsy; and time from biopsy to diagnosis.
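The paper names the tests and the software (R 3.2.4) but not the exact calls. A rough Python sketch of the same pipeline using scipy.stats and simulated data: the gamma parameters are invented to mimic a right-skewed distribution with a mean near the reported 22.6 days, and a one-sample Wilcoxon signed-rank test stands in for the reported Mann–Whitney comparison against the fixed 30-day standard.

```python
import numpy as np
from scipy import stats

# Simulated (not study) data: right-skewed times to diagnosis,
# mean = shape * scale = 2.0 * 11.3 = 22.6 days, n = 159 subjects.
rng = np.random.default_rng(0)
times = rng.gamma(shape=2.0, scale=11.3, size=159)

# Normality check (the paper used the Kolmogorov-Smirnov test).
ks_stat, ks_p = stats.kstest(times, "norm", args=(times.mean(), times.std()))

# Comparison with the fixed 30-day standard. Against a single constant,
# the one-sample Wilcoxon signed-rank test is the closest scipy analogue
# to the Mann-Whitney U test the authors report.
w_stat, w_p = stats.wilcoxon(times - 30, alternative="two-sided")

print(f"KS P = {ks_p:.3f}, Wilcoxon P = {w_p:.3g}")
```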
Results
Overall, 222 patients were diagnosed with a malignant lung lesion, of whom 63 were excluded from analysis: 22 cancelled or did not appear for appointments, declined further evaluation, or completed evaluation outside of our network; 13 had the diagnosis made prior to the Diagnostic Clinic visit; 13 proved to have a nonlung primary tumor presenting in the lung or mediastinal nodes; 12 were delayed > 10 days in referral from an outside network; and 3 had an intervening serious acute medical problem forcing delay in the diagnostic process.
Of the 159 included subjects, 154 (96.9%) were male, and the mean (SD) age was 67.6 (8.1) years. For 76 subjects, the abnormal chest X-ray and subsequent chest CT scan were performed the same day, or the lung lesion had initially been noted on a CT scan. For 54 subjects, there was a delay of ≥ 1 week in obtaining a chest CT scan. The mean (SD) time from placement of the Diagnostic Clinic consultation by the primary care provider (PCP) or other provider to the initial Diagnostic Clinic visit was 6.3 (4.4) days. The mean (SD) time from suspect imaging to diagnosis (primary outcome) was 22.6 (16.6) days.
The distribution of this outcome was nonnormal (Kolmogorov–Smirnov test, P < .01). When compared with the standard of 30 days, the primary outcome of 22.6 days was significantly shorter (2-sided Mann–Whitney U test, P < .01). Three-quarters (76.1%) of subjects were diagnosed within 30 days, and 95.0% of subjects were diagnosed within 60 days of the initial imaging. For the 8 subjects diagnosed after 60 days, contributing factors included PCP delay in Diagnostic Clinic consultation, initial negative biopsy, delay in performance of the chest CT scan prior to consultation, and outsourcing of positron emission tomography (PET) scans.
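The threshold proportions above are simple counts over the interval data; a minimal sketch (the sample intervals are invented, not study data):

```python
def proportion_within(days_to_diagnosis, limit):
    """Fraction of subjects diagnosed within `limit` calendar days."""
    within = sum(1 for d in days_to_diagnosis if d <= limit)
    return within / len(days_to_diagnosis)

# Hypothetical intervals (days from index imaging to diagnosis)
sample = [5, 12, 18, 22, 27, 29, 31, 40, 55, 70]
print(proportion_within(sample, 30))  # 0.6
print(proportion_within(sample, 60))  # 0.9
```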
Overall, 57 (35.8%) of the subjects underwent biopsy on the day of their Diagnostic Clinic visit: 14 underwent CT-guided biopsy and 43 underwent EBUS/EUS. Within 2 days of the initial visit, 106 subjects (66.7%) had undergone biopsy. The mean (SD) time from initial Diagnostic Clinic visit to biopsy was 6.3 (9.5) days. The mean (SD) interval was 1.8 (3.0) days for EBUS/EUS and 11.3 (11.7) days for CT-guided biopsy. The mean (SD) interval from biopsy to diagnosis was 3.2 (6.2) days, with 64 cases (40.3%) diagnosed the day of biopsy.
Excluding subjects whose treatment was delayed by patient choice or intercurrent illness and those who left the VA system to seek treatment elsewhere (n = 21), 24 subjects opted for palliative care, 5 died before treatment could be initiated, and 109 underwent treatment for their tumors (Table). The mean (SD) times from diagnosis to treatment were: chemotherapy alone, 34.7 (25.3) days; chemoradiation, 37.0 (22.8) days; surgery, 44.3 (24.4) days; and radiation therapy alone, 47.9 (26.0) days. With respect to the RAND Corporation-recommended diagnosis-to-treatment time, 60.9% of chemotherapy alone, 61.5% of chemoradiation, 66.7% of surgery, and 45.0% of radiation therapy alone treatments were initiated within the 6-week window.
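The per-modality summaries can be reproduced from raw interval lists with the standard library alone; a sketch with invented values (not the study data):

```python
from statistics import mean, stdev

# Hypothetical diagnosis-to-treatment intervals grouped by modality,
# mirroring the summary style in the Table (values invented).
waits = {
    "chemotherapy": [20, 31, 45, 38, 40],
    "surgery": [30, 52, 61, 34],
}

SIX_WEEKS = 42  # RAND recommendation: treatment within 6 weeks of diagnosis

for modality, days in waits.items():
    within = sum(d <= SIX_WEEKS for d in days) / len(days)
    print(f"{modality}: mean (SD) {mean(days):.1f} ({stdev(days):.1f}) d, "
          f"{within:.0%} within 6 weeks")
```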
Discussion
This retrospective case study demonstrates the effectiveness of a dedicated diagnostic clinic with priority EBUS/EUS access in diagnosing lung cancer within the VA system. Although there is no universally accepted quality standard for comparison, the RAND Corporation recommendation of 60 days from abnormal imaging to diagnosis and the published Dayton VAMC mean of 35.5 days are guideposts; however, the Dayton VAMC results may have been negatively affected by some subjects undergoing serial imaging for asymptomatic nodules. We chose the more stringent 30-day standard recommended by the Swedish and Canadian task forces.
When diagnosing lung cancer, the overriding purpose of the Diagnostic Clinic is to minimize system delays. The method is to keep the task as simple as possible for the PCP or other provider who identifies a lung nodule or mass: a single consultation request to the Diagnostic Clinic. Once this consultation is placed, the clinic RN manager oversees all further steps required for diagnosis and referral for treatment. The key factor in achieving a mean diagnosis time of 22.6 days is the cooperation between the RN manager and the interventional pulmonologist. When a consultation is received, the RN manager and pulmonologist review the data together and schedule the initial clinic visit; the goal is same-day biopsy, which was achieved in more than one-third of cases. Not every suspect chest image is ordered by the patient's PCP; for this reason, a Diagnostic Clinic consultation is available to all health care providers in our system. Many patients reach the clinic after the discovery of a suspect chest X-ray during an emergency department visit, a regularly scheduled subspecialty appointment, or a preoperative evaluation.
The mean time from initial visit to biopsy was 1.8 days for EBUS/EUS compared with 11.3 days for CT-guided biopsy. This difference reflects the pulmonologist’s involvement in the initial scheduling of Diagnostic Clinic patients. The ability of the pulmonologist to provide an accurate assessment of sample adequacy and a preliminary diagnosis at the bedside, with concurrent confirmation by a staff pathologist, permitted the Diagnostic Clinic to inform 40.3% of patients of the finding of malignancy on the day of biopsy. A published comparison of onsite review of biopsy material showed our pulmonologist and staff pathologists to be equally accurate in their interpretations.11
Sources of Delays
While this study documents the shortest intervals from suspect imaging to diagnosis reported to date, it also identifies sources of system delay in diagnosing lung cancer that JLMMVH could further reduce. The first is the time from the initial abnormal chest X-ray to performance of the chest CT scan. On occasion, the index lung lesion is identified unexpectedly on an outpatient or emergency department chest CT scan, and with greater use of LDCT lung cancer screening, initial detection of suspect lesions by CT scanning will increase. Most often, however, the PCP investigates a patient complaint with a standard chest X-ray that reveals a suspect nodule or mass. When ordered by the PCP as an outpatient test, the follow-up chest CT scan is not given scheduling priority. More than a third of subjects experienced a delay of ≥ 1 week in obtaining a chest CT scan ordered by the PCP; for 29 subjects, the delay was ≥ 3 weeks. At JLMMVH, the Diagnostic Clinic is given priority in scheduling CT scans; hence, for suspect lung lesions, the chest CT scan, if not already obtained, is generally performed on the morning of the clinic visit. Educating PCPs to refer the patient immediately to the Diagnostic Clinic rather than waiting to obtain an outpatient chest CT scan may remove this source of unnecessary delay.
Scheduling a CT-guided fine needle aspiration of a lung lesion is another source of system delay. When the chest CT scan is available at the time of the Diagnostic Clinic referral, the clinic visit is scheduled for the earliest day a required CT-guided biopsy can be performed. However, the mean time of 11.3 days from initial Diagnostic Clinic visit to CT-guided biopsy reflects the backlog faced by the interventional radiologists.
Although infrequent, PET scans that are required before biopsy can lead to substantial delays. PET scans are performed at our university affiliate, and the joint VA-university lung tumor board sometimes generates requests for such scans prior to tissue diagnosis, yet another source of delay.
The time from referral receipt to the Diagnostic Clinic visit averaged 6.3 days. This delay usually was determined by the availability of the CT-guided biopsy or the dedicated interventional pulmonologist. Although other interventional pulmonologists at JLMMVH may perform the requisite diagnostic procedures, they are not always available for immediate review of imaging studies of referred patients nor can their schedules flexibly accommodate the number of patients seen in our clinic for evaluation.
Lung Cancer Diagnosis
Prompt diagnosis in the setting of a worrisome chest X-ray may help decrease patient anxiety, but does the clinic improve lung cancer treatment outcomes? Such improvement has been demonstrated only in stage IA squamous cell lung cancer.9 Of our study population, 37.7% had squamous cell carcinoma, and 85.5% had non-small cell lung cancer. Of those with non-small cell lung cancer, 28.9% had a clinical stage I tumor. Stage I squamous cell carcinoma, the tumor type most likely to benefit from early diagnosis and treatment, was diagnosed in 11.3% of patients. With the increased application of LDCT screening, the proportion of veterans identified with early-stage lung cancer may rise. The Providence VAMC in Rhode Island reported its results from instituting LDCT screening.12 Prior to screening, 28% of patients diagnosed with lung cancer had a stage I tumor; afterward, 49% of those diagnosed by LDCT screening had a stage I tumor. Nearly a third of their patients diagnosed through LDCT screening had squamous cell histology. Thus, we can anticipate an increasing number of veterans with early-stage lung cancer who would benefit from timely diagnosis.
The JLMMVH is a referral center for the entire state of Arkansas. Many referred patients travel long distances, which may require overnight lodging and related travel expenses. Apart from any potential outcome benefit, the efficiencies of the system described herein minimize extra trips, an inconvenience and cost to both the patient and JLMMVH.
Although the primary task of the clinic is diagnosis, we also seek to facilitate timely treatment. Our lack of an on-site PET scanner and radiation therapy, resources present at the Dayton VAMC, contributes to longer therapy wait times. The shortest mean wait time at JLMMVH is for chemotherapy alone (34.7 days), in part because the JLMMVH oncologists, who perform initial consultations 2 to 3 times weekly in the Diagnostic Clinic, are more readily available than are our thoracic surgeons or radiation therapists. Yet overall, JLMMVH patients often face delay from the time of lung cancer diagnosis to initiation of treatment.
The Connecticut Veterans Affairs Healthcare System has published the results of changes in lung cancer management associated with a nurse navigator system.10 Prior to creating the position of cancer care coordinator, filled by an advanced practice RN, the mean time from clinical suspicion of lung cancer to treatment was 117 days. After 4 years of such care navigation, this waiting time had decreased to 52.4 days. Associated with this dramatic improvement in overall waiting time were decreases in the turnaround times for CT and PET scans. With respect to this big-picture view of lung cancer care, our Diagnostic Clinic serves as a model for the initial step of diagnosis. Coordinating and streamlining the various steps from diagnosis to definitive therapy will require a more system-wide effort involving all the key players in cancer care.
Conclusion
We have developed a care pathway based in a dedicated diagnostic clinic and have documented the shortest interval from abnormal imaging to diagnosis of lung cancer reported in the literature to date. Efficient functioning of this clinic depends on close cooperation between a full-time RN clinic manager and an interventional pulmonologist experienced in lung cancer management and able to interpret cytologic samples at the time of biopsy. Shortening the delay between diagnosis and definitive therapy remains a challenge and may benefit from the oncology nurse navigator model previously described within the VA system.10
Lung cancer is the leading cause of cancer death in the US, with 154 050 deaths in 2018.1 There have been many attempts to reduce mortality of the disease through early diagnosis with use of computed tomography (CT). The National Lung Cancer Screening trial showed that screening high-risk populations with low-dose CT (LDCT) can reduce mortality.2 However, implementing LDCT screening in the clinical setting has proven challenging, as illustrated by the VA Lung Cancer Screening Demonstration Project (LCSDP).3 A lung cancer diagnosis typically comprises several steps that require different medical specialties; this can lead to delays. In the LCSDP, the mean time to diagnosis was 137 days.3 There are no federal standards for timeliness of lung cancer diagnosis.
The nonprofit RAND Corporation is the only American research organization that has published guidelines specifying acceptable intervals for the diagnosis and treatment of lung cancer. In Quality of Care for Oncologic Conditions and HIV, RAND Corporation researchers propose management quality indicators: lung cancer diagnosis within 2 months of an abnormal radiologic study and treatment within 6 weeks of diagnosis.4 The Swedish Lung Cancer Study5 and the Canadian Strategy for Cancer Control6 both recommended a standard of about 30 days—half the time recommended by the RAND Corporation.
Bukhari and colleagues at the Dayton US Department of Veterans Affairs (VA) Medical Center (VAMC) conducted a quality improvement study that examined lung cancer diagnosis and management.7 They found the time (SD) from abnormal chest imaging to diagnosis was 35.5 (31.6) days. Of those veterans who received a lung cancer diagnosis, 89.2% had the diagnosis made within the 60 days recommended by the RAND Corporation. Although these results surpass those of the LCSDP, they can be exceeded.
Beyond the potential emotional distress of awaiting the final diagnosis of a lung lesion, a delay in diagnosis and treatment may adversely affect outcomes. LDCT screening has been shown to reduce mortality, which implies a link between survival and time to intervention. There is no published evidence that time to diagnosis in advanced stage lung cancer affects outcome. The National Cancer Database (NCDB) contains informtion on about 70% of the cancers diagnosed each year in the US.8 An analysis of 4984 patients with stage IA squamous cell lung cancer undergoing lobectomy from NCDB showed that earlier surgery was associated with an absolute decrease in 5-year mortality of 5% to 8%. 9 Hence, at least in early-stage disease, reduced time from initial suspect imaging to definitive treatment may improve survival.
A system that coordinates the requisite diagnostic steps and avoids delays should provide a significant improvement in patient care. The results of such an approach that utilized nurse navigators has been previously published. 10 Here, we present the results of a dedicated VA referral clinic with priority access to pulmonary consultation and procedures in place that are designed to expedite the diagnosis of potential lung cancer.
Methods
The John L. McClellan Memorial Veterans Hospital (JLMMVH) in Little Rock, Arkansas institutional review board approved this study, which was performed in accordance with the Declaration of Helsinki. Requirement for informed consent was waived, and patient confidentiality was maintained throughout.
We have developed a plan of care specifically to facilitate diagnosis and treatment of the large number of veterans referred to the JLMMVH Diagnostic Clinic for abnormal results of chest imaging. The clinic has priority access to same-day imaging and subspecialty consultation services. In the clinic, medical students and residents perform evaluations and a registered nurse (RN) manager coordinates care.
A Diagnostic Clinic consult for abnormal thoracic imaging immediately triggers an e-consult to an interventional pulmonologist (Figure). The RN manager and pulmonologist perform a joint review of records/imaging prior to scheduling, and the pulmonologist triages the patient. Triage options include follow-up imaging, bronchoscopy with endobronchial ultrasound (EBUS), endoscopic ultrasound (EUS), and CT-guided biopsy.
The RN manager then schedules a clinic visit that includes a medical evaluation by clinic staff and any indicated procedures on the same day. The interventional pulmonologist performs EBUS, EUS with the convex curvilinear bronchoscope, or both combined as indicated for diagnosis and staging. All procedures are performed in the JLMMVH bronchoscopy suite with standard conscious sedation using midazolam and fentanyl. Any other relevant procedures, such as pleural tap, also are performed at time of procedure. The pulmonologist and an attending pathologist interpret biopsies obtained in the bronchoscopy suite.
We performed a retrospective chart review of patients diagnosed with primary lung cancer through referral to the JLMMVH Diagnostic Clinic. The primary outcome was time from initial suspect chest imaging to cancer diagnosis. The study population consisted of patients referred for abnormal thoracic imaging between January 1, 2013 and December 31, 2016 and subsequently diagnosed with a primary lung cancer.
Subjects were excluded if (1) the patient was referred from outside our care network and a delay of > 10 days occurred between initial lesion imaging and referral; (2) the patient did not show up for appointments or chose to delay evaluation following referral; (3) biopsy demonstrated a nonlung primary cancer; and (4) serious intercurrent illness interrupted the diagnostic plan. In some cases, the radiologist or consulting pulmonologist had judged the lung lesion too small for immediate biopsy and recommended repeat imaging at a later date.
Patients were included in the study if the follow- up imaging led to a lung cancer diagnosis. However, because the interval between the initial imaging and the follow-up imaging in these patients did not represent a systems delay problem, the date of the scheduled follow-up abnormal imaging, which resulted in initiation of a potential cancer evaluation, served as the index suspect imaging date for this study.
Patient electronic medical records were reviewed and the following data were abstracted: date of the abnormal imaging that led to referral and time from abnormal chest X-ray to chest CT scan if applicable; date of referral and date of clinic visit; date of biopsy; date of lung cancer diagnosis; method of obtaining diagnostic specimen; lung cancer type and stage; type and date of treatment initiation or decision for supportive care only; and decision to seek further evaluation or care outside of our system.
All patients diagnosed with lung cancer during the study period were reviewed for inclusion; hence, no a priori sample-size estimate was required. All time intervals were measured in calendar days. The primary outcome was the time from the index suspect chest imaging study to the date of lung cancer diagnosis. Prior to initiating the study, we chose the more stringent 30-day recommendation of the Canadian6 and Swedish5 studies as the comparator for the primary outcome, although data with respect to the 60-day RAND Corporation guidelines also are reported.4
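The interval outcomes described here are simple calendar-day differences between abstracted chart dates. A minimal Python sketch of that calculation, using hypothetical dates rather than study data:

```python
# Hypothetical sketch of the interval calculation: outcomes are calendar
# days between chart-abstracted dates (the example dates are illustrative,
# not drawn from the study records).
from datetime import date

def interval_days(start: date, end: date) -> int:
    """Calendar days between two chart-abstracted dates."""
    return (end - start).days

# Example: index suspect imaging on March 3, diagnosis on March 25
primary_outcome = interval_days(date(2015, 3, 3), date(2015, 3, 25))
print(primary_outcome)  # 22
```

The same difference applies to each secondary interval (referral to visit, visit to biopsy, biopsy to diagnosis, diagnosis to treatment).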
Statistical Methods
The mean time to lung cancer diagnosis in our cohort was compared with this 30-day standard using a 2-sided Mann–Whitney U test. Normality of the data distribution was assessed with the Kolmogorov–Smirnov test. Statistical significance was set at P = .05. Calculations were performed with R statistical software version 3.2.4. Secondary outcomes were time from diagnosis to treatment; proportion of subjects diagnosed within 60 days; time from initial clinic visit to biopsy; and time from biopsy to diagnosis.
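Both tests have standard implementations in open-source statistical libraries. The sketch below uses synthetic, seeded diagnosis times (not study data) to illustrate the analysis; note that a comparison against a fixed 30-day benchmark is commonly carried out as a Wilcoxon signed-rank test on the differences from 30, the one-sample counterpart of the Mann–Whitney test the study reports.

```python
# Illustrative re-creation of the paper's significance testing with
# SYNTHETIC data; the gamma parameters below are assumptions chosen only
# to give a right-skewed distribution with a mean near the reported 22.6 d.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
times = rng.gamma(shape=2.0, scale=11.0, size=159)  # synthetic days to diagnosis

# Normality check (the paper used a Kolmogorov–Smirnov test)
ks_stat, ks_p = stats.kstest(times, "norm", args=(times.mean(), times.std()))

# Comparison against the 30-day standard, expressed as a one-sample
# Wilcoxon signed-rank test on the differences from 30
w_stat, w_p = stats.wilcoxon(times - 30.0)

print(f"mean = {times.mean():.1f} d, KS p = {ks_p:.3g}, Wilcoxon p = {w_p:.3g}")
```

A skewed time-to-event distribution like this one fails the normality check, which is why a rank-based rather than a t-test comparison is appropriate.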
Results
Overall, 222 patients were diagnosed with a malignant lung lesion, of whom 63 were excluded from analysis: 22 cancelled or did not appear for appointments, declined further evaluation, or completed evaluation outside of our network; 13 had the diagnosis made prior to the Diagnostic Clinic visit; 13 proved to have a nonlung primary tumor presenting in the lung or mediastinal nodes; 12 were delayed > 10 days in referral from outside our network; and 3 had an intervening serious acute medical problem that forced a delay in the diagnostic process.
Of the 159 included subjects, 154 (96.9%) were male, and the mean (SD) age was 67.6 (8.1) years. For 76 subjects, the abnormal chest X-ray and subsequent chest CT scan were performed the same day, or the lung lesion had initially been noted on a CT scan. For 54 subjects, there was a delay of ≥ 1 week in obtaining a chest CT scan. The mean (SD) time from placement of the Diagnostic Clinic consultation by the primary care provider (PCP) or other provider to the initial Diagnostic Clinic visit was 6.3 (4.4) days. The mean (SD) time from suspect imaging to diagnosis (primary outcome) was 22.6 (16.6) days.
The distribution of this outcome was nonnormal (Kolmogorov–Smirnov test P < .01). When compared with the standard of 30 days, the primary outcome of 22.6 days was significantly shorter (2-sided Mann–Whitney U test P < .01). Three-quarters (76.1%) of subjects were diagnosed within 30 days and 95.0% of subjects were diagnosed within 60 days of the initial imaging. For the 8 subjects diagnosed after 60 days, contributing factors included PCP delay in Diagnostic Clinic consultation, initial negative biopsy, delay in performance of chest CT scan prior to consultation, and outsourcing of positron emission tomography (PET) scans.
Overall, 57 (35.8%) of the subjects underwent biopsy on the day of their Diagnostic Clinic visit: 14 underwent CT-guided biopsy and 43 underwent EBUS/EUS. Within 2 days of the initial visit 106 subjects (66.7%) had undergone biopsy. The mean (SD) time from initial Diagnostic Clinic visit to biopsy was 6.3 (9.5) days. The mean (SD) interval was 1.8 (3.0) days for EBUS/EUS and 11.3 (11.7) days for CT-guided biopsy. The mean (SD) interval from biopsy to diagnosis was 3.2 (6.2) days with 64 cases (40.3%) diagnosed the day of biopsy.
Excluding subjects whose treatment was delayed by patient choice or intercurrent illness and those who left the VA system to seek treatment elsewhere (n = 21), 24 opted for palliative care, 5 died before treatment could be initiated, and 109 underwent treatment for their tumors (Table). The mean (SD) times from diagnosis to treatment were: chemotherapy alone, 34.7 (25.3) days; chemoradiation, 37.0 (22.8) days; surgery, 44.3 (24.4) days; and radiation therapy alone, 47.9 (26.0) days. With respect to the RAND Corporation recommended diagnosis-to-treatment time, 60.9% of chemotherapy alone, 61.5% of chemoradiation, 66.7% of surgery, and 45.0% of radiation therapy alone treatments were initiated within the 6-week window.
Discussion
This retrospective case study demonstrates the effectiveness of a dedicated diagnostic clinic with priority EBUS/EUS access in diagnosing lung cancer within the VA system. Although there is no universally accepted quality standard for comparison, the RAND Corporation recommendation of 60 days from abnormal imaging to diagnosis and the Dayton VAMC published mean of 35.5 days are guideposts; however, the results from the Dayton VAMC may have been affected negatively by some subjects undergoing serial imaging for asymptomatic nodules. We chose a more stringent standard of 30 days as recommended by Swedish and Canadian task forces.
When diagnosing lung cancer, the overriding purpose of the Diagnostic Clinic is to minimize system delays. The approach keeps the task for the PCP or other provider who identifies a lung nodule or mass as simple as possible: submitting a single consultation request to the Diagnostic Clinic. Once this consultation is placed, the clinic RN manager oversees all further steps required for diagnosis and referral for treatment. The key factor in achieving a mean diagnosis time of 22.6 days is the cooperation between the RN manager and the interventional pulmonologist. When a consultation is received, the RN manager and pulmonologist review the data together and schedule the initial clinic visit; the goal is same-day biopsy, which is achieved in more than one-third of cases. Not all patients with a chest image suspicious for lung cancer had it ordered by their PCP. For this reason, a Diagnostic Clinic consultation is available to all health care providers in our system. Many patients reach the clinic after the discovery of a suspect chest X-ray during an emergency department visit, a regularly scheduled subspecialty appointment, or a preoperative evaluation.
The mean time from initial visit to biopsy was 1.8 days for EBUS/EUS compared with an interval of 11.3 days for CT-guided biopsy. This difference reflects the pulmonologist’s involvement in initial scheduling of Diagnostic Clinic patients. The ability of the pulmonologist to provide an accurate assessment of sample adequacy and a preliminary diagnosis at bedside, with concurrent confirmation by a staff pathologist, permitted the Diagnostic Clinic to inform 40.3% of patients of the finding of malignancy on the day of biopsy. A published comparison of the onsite review of biopsy material showed our pulmonologist and staff pathologists to be equally accurate in their interpretations.11
Sources of Delays
While this study documents the shortest intervals from suspect imaging to diagnosis reported to date, it also identifies sources of system delay in diagnosing lung cancer that JLMMVH could further reduce. The first is the time from initial abnormal chest X-ray imaging to performance of the chest CT scan. On occasion, the index lung lesion is identified unexpectedly on an outpatient or emergency department chest CT scan. With greater use of LDCT lung cancer screening, the initial detection of suspect lesions by CT scanning will increase in the future. However, the PCP most often investigates a patient complaint with a standard chest X-ray that reveals a suspect nodule or mass. When ordered by the PCP as an outpatient test, scheduling of the follow-up chest CT scan is not given priority. More than a third of subjects experienced a delay ≥ 1 week in obtaining a chest CT scan ordered by the PCP; for 29 subjects the delay was ≥ 3 weeks. At JLMMVH, the Diagnostic Clinic is given priority in scheduling CT scans. Hence, for suspect lung lesions, the chest CT scan, if not already obtained, is generally performed on the morning of the clinic visit. Educating the PCP to refer the patient immediately to the Diagnostic Clinic rather than waiting to obtain an outpatient chest CT scan may remove this source of unnecessary delay.
Scheduling a CT-guided fine needle aspiration of a lung lesion is another source of system delay. When the chest CT scan is available at the time of the Diagnostic Clinic referral, the clinic visit is scheduled for the earliest day a required CT-guided biopsy can be performed. However, the mean time of 11.3 days from initial Diagnostic Clinic visit to CT-guided biopsy is indicative of the backlog faced by the interventional radiologists.
Although infrequent, PET scans that are required before biopsy can lead to substantial delays. PET scans are performed at our university affiliate, and the joint VA-university lung tumor board sometimes generates requests for such scans prior to tissue diagnosis, yet another source of delay.
The time from referral receipt to the Diagnostic Clinic visit averaged 6.3 days. This delay usually was determined by the availability of the CT-guided biopsy or the dedicated interventional pulmonologist. Although other interventional pulmonologists at JLMMVH may perform the requisite diagnostic procedures, they are not always available for immediate review of imaging studies of referred patients nor can their schedules flexibly accommodate the number of patients seen in our clinic for evaluation.
Lung Cancer Diagnosis
Prompt diagnosis in the setting of a worrisome chest X-ray may help decrease patient anxiety, but does the clinic improve lung cancer treatment outcomes? Such improvement has been demonstrated only in stage IA squamous cell lung cancer.9 Of our study population, 37.7% had squamous cell carcinoma, and 85.5% had non-small cell lung cancer. Of those with non-small cell lung cancer, 28.9% had a clinical stage I tumor. Stage I squamous cell carcinoma, the type of tumor most likely to benefit from early diagnosis and treatment, was diagnosed in 11.3% of patients. With the increased application of LDCT screening, the proportion of veterans identified with early-stage lung cancer may rise. The Providence VAMC in Rhode Island reported its results from instituting LDCT screening.12 Prior to screening, 28% of patients diagnosed with lung cancer had a stage I tumor; after its introduction, 49% of cancers detected by LDCT screening were stage I. Nearly a third of the patients diagnosed through LDCT screening had squamous cell histology. Thus, we can anticipate an increasing number of veterans with early-stage lung cancer who would benefit from timely diagnosis.
The JLMMVH is a referral center for the entire state of Arkansas. Many of its referred patients travel long distances, which may require overnight housing and other travel expenses. Apart from any potential outcome benefit, the efficiencies of the system described herein minimize extra trips, an inconvenience and cost to both patient and JLMMVH.
Although the primary task of the clinic is diagnosis, we also seek to facilitate timely treatment. Our lack of an on-site PET scanner and radiation therapy, resources present on-site at the Dayton VAMC, contribute to longer therapy wait times. The shortest mean wait time at JLMMVH is for chemotherapy alone (34.7 days), in part because the JLMMVH oncologists, performing initial consultations 2 to 3 times weekly in the Diagnostic Clinic, are more readily available than are our thoracic surgeons or radiation therapists. Yet overall, JLMMVH patients often face delay from the time of lung cancer diagnosis to initiation of treatment.
The Connecticut Veterans Affairs Healthcare System has published the results of changes in lung cancer management associated with a nurse navigator system.10 Prior to creating the position of cancer care coordinator, filled by an advanced practice RN, the mean time from clinical suspicion of lung cancer to treatment was 117 days. After 4 years of such care navigation, this waiting time had decreased to 52.4 days. Associated with this dramatic improvement in overall waiting time were decreases in the turnaround time for CT and PET scans. With respect to this big-picture view of lung cancer care, our Diagnostic Clinic serves as a model for the initial step of diagnosis. Coordinating and streamlining the various steps from diagnosis to definitive therapy will require a more system-wide effort involving all the key players in cancer care.
Conclusion
We have developed a care pathway based in a dedicated diagnostic clinic and have been able to document the shortest interval from abnormal imaging to diagnosis of lung cancer reported in the literature to date. Efficient functioning of this clinic depends on close cooperation between a full-time RN clinic manager and an interventional pulmonologist experienced in lung cancer management and able to interpret cytologic samples at the time of biopsy. Shortening the delay between diagnosis and definitive therapy remains a challenge and may benefit from the oncology nurse navigator model previously described within the VA system.10
1. American Cancer Society. Cancer Facts & Figures. https://www.cancer.org/content/dam/cancer-org/research/cancer-facts-and-statistics/annual-cancer-facts-and-figures/2018/cancer-facts-and-figures-2018.pdf. Accessed July 13, 2019.
2. National Lung Screening Trial Research Team, Aberle DR, Adams AM, et al. Reduced lung-cancer mortality with low-dose computed tomographic screening. N Engl J Med. 2011;365(5):395-409.
3. Kinsinger LS, Anderson C, Kim J, et al. Implementation of lung cancer screening in the Veterans Health Administration. JAMA Intern Med. 2017;177(3):399-406.
4. Asch SM, Kerr EA, Hamilton EG, Reifel JL, McGlynn EA, eds. Quality of Care for Oncologic Conditions and HIV: A Review of the Literature and Quality Indicators. Santa Monica, CA: RAND Corporation; 2000.
5. Hillerdal G. [Recommendations from the Swedish Lung Cancer Study Group: Shorter waiting times are demanded for quality in diagnostic work-ups for lung care.] Swedish Med J. 1999;96:4691.
6. Simunovic M, Gagliardi A, McCready D, Coates A, Levine M, DePetrillo D. A snapshot of waiting times for cancer surgery provided by surgeons affiliated with regional cancer centres in Ontario. CMAJ. 2001;165(4):421-425. [Canadian Strategy for Cancer Control]
7. Bukhari A, Kumar G, Rajsheker R, Markert R. Timeliness of lung cancer diagnosis and treatment. Fed Pract. 2017;34(suppl 1):24S-29S.
8. Bilimoria KY, Ko CY, Tomlinson JS, et al. Wait times for cancer surgery in the United States: trends and predictors of delays. Ann Surg. 2011;253(4):779-785.
9. Yang CJ, Wang H, Kumar A, et al. Impact of timing of lobectomy on survival for clinical stage IA lung squamous cell carcinoma. Chest. 2017;152(6):1239-1250.
10. Hunnibell LS, Rose MG, Connery DM, et al. Using nurse navigation to improve timeliness of lung cancer care at a veterans hospital. Clin J Oncol Nurs. 2012;16(1):29-36.
11. Meena N, Jeffus S, Massoll N, et al. Rapid onsite evaluation: a comparison of cytopathologist and pulmonologist performance. Cancer Cytopathol. 2016;124(4):279-284.
12. Okereke IC, Bates MF, Jankowich MD, et al. Effects of implementation of lung cancer screening at one Veterans Affairs Medical Center. Chest. 2016;150(5):1023-1029.