Daratumumab disappoints in non-Hodgkin lymphoma trial
Daratumumab is safe but ineffective for the treatment of patients with relapsed or refractory non-Hodgkin lymphoma (NHL) and CD38 expression of at least 50%, according to findings from a recent phase 2 trial.
Unfortunately, the study met headwinds early on, when initial screening of 112 patients with available tumor samples showed that only about half (56%) had CD38 expression of at least 50%, reported lead author Gilles Salles, MD, PhD, of Claude Bernard University in Lyon, France, and his colleagues. The cutoff was based on preclinical models suggesting that daratumumab-induced cytotoxicity depends on a high level of CD38 expression.
“Only 36 [patients] were eligible for study enrollment, questioning the generalizability of the study population,” the investigators wrote in Clinical Lymphoma, Myeloma & Leukemia.
Of these 36 patients, 15 had diffuse large B-cell lymphoma (DLBCL), 16 had follicular lymphoma (FL), and 5 had mantle cell lymphoma (MCL). Median CD38 expression was 70%. Patients were given 16 mg/kg of IV daratumumab once a week for two cycles, then every 2 weeks for four cycles, and finally on a monthly basis. Cycles were 28 days long. The primary endpoint was overall response rate (ORR). Safety and pharmacokinetics were also evaluated.
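As a sanity check on the cadence described above, here is a minimal Python sketch that enumerates infusion days under the stated schedule. The 28-day cycle length comes from the article; the day numbering (day 1 = first infusion) is an illustrative assumption, not part of the trial protocol.

```python
# Sketch of the dosing cadence described above: weekly infusions for two
# 28-day cycles, every 2 weeks for four cycles, then monthly.
# Day numbering (day 1 = first infusion) is an assumption for illustration.
CYCLE_DAYS = 28

def infusion_days(n_cycles=8):
    days = []
    for cycle in range(n_cycles):
        start = cycle * CYCLE_DAYS + 1
        if cycle < 2:
            interval = 7            # cycles 1-2: weekly
        elif cycle < 6:
            interval = 14           # cycles 3-6: every 2 weeks
        else:
            interval = CYCLE_DAYS   # cycle 7 onward: monthly
        days.extend(range(start, start + CYCLE_DAYS, interval))
    return days

print(infusion_days())  # [1, 8, 15, 22, 29, 36, ...]
```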
Results were generally disappointing: responses occurred in two patients (12.5%) in the FL cohort and one patient (6.7%) in the DLBCL cohort, and no patients with MCL responded before the study was terminated. On a more encouraging note, 10 of 16 patients with FL maintained stable disease.
“All 16 patients in the FL cohort had progressed/relapsed on their prior treatment regimen; therefore, the maintenance of stable disease in the FL cohort may suggest some clinical benefit of daratumumab in this subset of NHL,” the investigators wrote.
Pharmacokinetics and safety data were similar to those from multiple myeloma studies of daratumumab; no new safety signals or instances of immunogenicity were encountered. The most common grade 3 or higher treatment-related adverse event was thrombocytopenia, which occurred in 11.1% of patients. Infusion-related reactions occurred in 72.2% of patients, but none were grade 4 and only three reactions were grade 3.
The investigators suggested that daratumumab may still play a role in NHL treatment, but not as a single agent.
“It is possible that daratumumab-based combination therapy would have allowed for more responses to be achieved within the current study,” the investigators wrote. “NHL is an extremely heterogeneous disease and the identification of predictive biomarkers and molecular genetics may provide new personalized therapies.”
The study was funded by Janssen Research & Development; two study authors reported employment by Janssen. Others reported financial ties to Janssen, Celgene, Roche, Gilead, Novartis, Amgen, and others.
SOURCE: Salles G et al. Clin Lymphoma Myeloma Leuk. 2019 Jan 2. doi: 10.1016/j.clml.2018.12.013.
FROM CLINICAL LYMPHOMA, MYELOMA & LEUKEMIA
Key clinical point: Single-agent daratumumab was safe but largely ineffective in patients with relapsed or refractory non-Hodgkin lymphoma and CD38 expression of at least 50%.
Major finding: The overall response rate was 12.5% for patients with follicular lymphoma and 6.7% for diffuse large B-cell lymphoma (DLBCL). There were no responders in the mantle cell lymphoma cohort.
Study details: An open-label, phase 2 trial involving 15 patients with diffuse large B-cell lymphoma, 16 patients with follicular lymphoma, and 5 patients with mantle cell lymphoma.
Disclosures: The study was funded by Janssen Research & Development; two study authors reported employment by Janssen. Others reported financial ties to Janssen, Celgene, Roche, Gilead, Novartis, Amgen, and others.
Source: Salles G et al. Clin Lymphoma Myeloma Leuk. 2019 Jan 2. doi: 10.1016/j.clml.2018.12.013.
Combo appears to overcome aggressive L-NN-MCL
Some patients with aggressive leukemic nonnodal mantle cell lymphoma (L-NN-MCL) respond very well to combination therapy with rituximab and ibrutinib, according to two case reports.
Both patients, who had aggressive L-NN-MCL and P53 abnormalities, remain free of disease 18 months after treatment with rituximab/ibrutinib and autologous stem cell transplantation (ASCT), reported Shahram Mori, MD, PhD, of the Florida Hospital Cancer Institute in Orlando, and his colleagues.
The findings suggest that P53 gene status in L-NN-MCL may have a significant impact on prognosis and treatment planning. There are currently no guidelines for risk stratifying L-NN-MCL patients.
“Although the recognition of L-NN-MCL is important to avoid overtreatment, there appears to be a subset of patients who either have a more aggressive form or disease that has transformed to a more aggressive form who present with symptomatic disease and/or cytopenias,” the investigators wrote in Clinical Lymphoma, Myeloma & Leukemia.
The investigators described two such cases in their report. Both patients had leukocytosis with various other blood cell derangements and splenomegaly without lymphadenopathy.
The first patient was a 53-year-old African American man with L-NN-MCL and a number of genetic aberrations, including loss of the P53 gene. After two cycles of rituximab with bendamustine proved ineffective, he was switched to rituximab with cyclophosphamide, vincristine, doxorubicin, and dexamethasone plus high-dose methotrexate and cytarabine. This regimen was also ineffective, and his white blood cell count kept rising.
The patient's course improved when he was switched to ibrutinib 560 mg daily and rituximab 375 mg/m² monthly. Within 2 months of starting therapy, his blood abnormalities normalized, and bone marrow biopsy at the end of treatment revealed complete remission without evidence of minimal residual disease. The patient remains in complete remission 18 months after ASCT.
The second patient was a 49-year-old Hispanic man with L-NN-MCL. He had missense mutations in TP53 and KMT2A (MLL), a frameshift mutation in BCOR, and a t(11;14) translocation. Ibrutinib/rituximab was started immediately. After 1 month, his blood levels began to normalize. After five cycles, bone marrow biopsy showed complete remission with no evidence of minimal residual disease. Like the first patient, the second patient remains in complete remission 18 months after ASCT.
“To our knowledge, these are the first two cases of L-NN-MCL with P53 gene mutations/alterations that were successfully treated with a combination of rituximab and ibrutinib,” the investigators wrote. “Our two cases confirm the previous studies by Chapman-Fredricks et al, who also noted P53 gene mutation or deletion is associated with the aggressive course.”
The researchers reported having no financial disclosures.
SOURCE: Mori S et al. Clin Lymphoma Myeloma Leuk. 2019 Feb;19(2):e93-7.
FROM CLINICAL LYMPHOMA, MYELOMA & LEUKEMIA
Key clinical point: Rituximab plus ibrutinib followed by autologous stem cell transplantation may be effective in patients with aggressive leukemic nonnodal mantle cell lymphoma and P53 abnormalities.
Major finding: Two patients with aggressive L-NN-MCL and P53 abnormalities who were treated with rituximab/ibrutinib and autologous stem cell transplantation remain free of disease 18 months later.
Study details: Two case reports.
Disclosures: The authors reported having no financial disclosures.
Source: Mori S et al. Clin Lymphoma Myeloma Leuk. 2019 Feb;19(2):e93-7.
Functional MRI detects consciousness after brain damage
Functional MRI can measure patterns of connectivity to determine levels of consciousness in nonresponsive patients with brain injury, according to results from a multicenter, cross-sectional, observational study.
Blood oxygen level–dependent (BOLD) fMRI showed that brain-wide coordination patterns of high complexity became increasingly common moving from unresponsive patients to those with minimal consciousness to healthy individuals, reported lead author Athena Demertzi, PhD, of GIGA Research Institute at the University of Liège in Belgium, and her colleagues.
“Finding reliable markers indicating the presence or absence of consciousness represents an outstanding open problem for science,” the investigators wrote in Science Advances.
In medicine, an fMRI-based measure of consciousness could supplement behavioral assessments of awareness and guide therapeutic strategies; more broadly, image-based markers could help elucidate the nature of consciousness itself.
“We postulate that consciousness has specific characteristics that are based on the temporal dynamics of ongoing brain activity and its coordination over distant cortical regions,” the investigators wrote. “Our hypothesis stems from the common stance of various contemporary theories which propose that consciousness relates to a dynamic process of self-sustained, coordinated brain-scale activity assisting the tuning to a constantly evolving environment, rather than in static descriptions of brain function.”
There is a need for a reliable way of distinguishing consciousness from unconscious states, the investigators said. “Given that nonresponsiveness can be associated with a variety of brain lesions, varying levels of vigilance, and covert cognition, we highlight the need to determine a common set of features capable of accounting for the capacity to sustain conscious experience.”
To search for patterns of brain signal coordination that correlate with consciousness, four independent research centers performed BOLD fMRI scans of participants at rest or under anesthesia with propofol. Of 159 total participants, 47 were healthy individuals and 112 were patients in a vegetative state/with unresponsive wakefulness syndrome (UWS) or in a minimally conscious state (MCS), based on standardized behavioral assessments. The main data analysis, which included 125 participants, assessed BOLD fMRI signal coordination between six brain networks known to have roles in cognitive and functional processes.
The researchers’ analysis revealed four distinct and recurring brain-wide coordination patterns, ranging from highest activity (pattern 1) to lowest activity (pattern 4). Pattern 1, which exhibited the most long-distance edges and the highest spatial complexity, efficiency, and community structure, became increasingly common when moving from UWS patients to MCS patients to healthy control individuals (UWS < MCS < HC; Spearman rank correlation between rate and group, rho = 0.7; P < 1 × 10⁻¹⁶).
In contrast, pattern 4, characterized by low interareal coordination, showed an inverse trend: it became less common when moving from UWS patients to healthy individuals (UWS > MCS > HC; Spearman rank correlation between rate and group, rho = –0.6; P < 1 × 10⁻¹¹). Although patterns 2 and 3 occurred with equal frequency across all groups, the investigators noted that switching between patterns was most common and predictably sequential in healthy individuals, whereas patients with UWS were least likely to switch patterns. A total of 23 patients who were scanned under propofol anesthesia were equally likely to exhibit pattern 4, regardless of health status, suggesting that pattern 4 depends on fixed anatomical pathways. Results were not affected by scanning site or by patient characteristics such as age, gender, etiology, or chronicity.
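For readers unfamiliar with the statistic being reported, a minimal Python sketch shows how a Spearman rank correlation between each participant's pattern-occurrence rate and ordinal group is computed. The rates below are invented for illustration; they are not the study's data, and scipy's spearmanr is a standard implementation rather than the authors' code.

```python
# Spearman rank correlation between per-participant rate of pattern 1
# and ordinal group (0 = UWS, 1 = MCS, 2 = healthy control).
# The rates are made up for illustration only.
from scipy.stats import spearmanr

groups = [0, 0, 0, 1, 1, 1, 2, 2, 2]
pattern1_rate = [0.05, 0.10, 0.08, 0.15, 0.20, 0.18, 0.30, 0.35, 0.28]

rho, p_value = spearmanr(pattern1_rate, groups)
print(f"rho = {rho:.2f}, P = {p_value:.3g}")  # positive rho: rate rises with group
```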
“We conclude that these patterns of transient brain signal coordination are characteristic of conscious and unconscious brain states,” the investigators wrote, “warranting future research concerning their relationship to ongoing conscious content, and the possibility of modifying their prevalence by external perturbations, both in healthy and pathological individuals, as well as across species.”
The study was funded by a James S. McDonnell Foundation Collaborative Activity Award, INSERM, the Belgian National Funds for Scientific Research, the Canada Excellence Research Chairs program, and others. The authors declared having no conflicts of interest.
SOURCE: Demertzi A et al. Sci Adv. 2019 Feb 6. doi: 10.1126/sciadv.aat7603.
FROM SCIENCE ADVANCES
Key clinical point: Patterns of brain-wide coordination on functional MRI can distinguish levels of consciousness in nonresponsive patients with brain injury.
Major finding: A brain-wide coordination pattern of high complexity became increasingly common when moving from patients with unresponsive wakefulness syndrome (UWS) to patients in a minimally conscious state (MCS) to healthy control individuals.
Study details: A study involving blood oxygen level–dependent (BOLD) fMRI scans at rest or under anesthesia in 159 participants at four independent research facilities.
Disclosures: The study was funded by a James S. McDonnell Foundation Collaborative Activity Award, INSERM, the Belgian National Funds for Scientific Research, the Canada Excellence Research Chairs program, and others. The authors declared having no conflicts of interest.
Source: Demertzi A et al. Sci Adv. 2019 Feb 6. doi: 10.1126/sciadv.aat7603.
Maltodextrin may increase colitis risk
The food additive maltodextrin may increase risk of inflammatory bowel disease, according to a recent study.
Compared with control subjects, mice given drinking water that contained 5% maltodextrin were significantly more likely to develop colitis and lose weight when challenged with dextran sodium sulfate (DSS), reported lead author Federica Laudisi, PhD, of the department of systems medicine at the University of Rome Tor Vergata in Rome, and her colleagues.
Further experiments with murine intestinal crypts and a human cell line echoed these results and offered mechanistic insight. Treatment with maltodextrin stressed the endoplasmic reticulum of goblet cells, predisposing the intestinal epithelium to mucus depletion and inflammation. With these results, maltodextrin joins polysorbate 80 and carboxymethylcellulose on a growing list of food additives in the Western diet with proinflammatory potential.
“Although the U.S. Food and Drug Administration recognizes these dietary elements as safe,” the investigators wrote in Cellular and Molecular Gastroenterology and Hepatology, “their use has been linked to the development of intestinal pathologies in both animals and human beings.
“It also has been shown that the polysaccharide maltodextrin, which is commonly used as a filler and thickener during food processing, can alter microbial phenotype and host antibacterial defenses. Maltodextrin expands the Escherichia coli population in the ileum and induces necrotizing enterocolitis in preterm piglets (Am J Physiol Gastrointest Liver Physiol. 2009 Dec;297:G1115-25).”
The present study began by administering three compounds dissolved in drinking water to wild-type Balb/c mice for 45 days: 5% maltodextrin, 0.5% propylene glycol, or 5 g/L animal gelatin. Control mice drank plain water. None of the treatments triggered clinical or histologic signs of colitis, and stool levels of lipocalin-2 (Lcn-2), a biomarker of intestinal inflammation, remained comparable with those of control mice. However, outcomes changed when mice were challenged with DSS (1.75% in drinking water) on days 35-45 or injected subcutaneously with indomethacin (5 mg/kg) on day 35 and sacrificed 24 hours later. When challenged with DSS, mice in the maltodextrin group developed severe colitis and lost 10%-15% of body weight, compared with minimal colitis and negligible weight loss in the other groups. In addition, compared with other mice, maltodextrin-fed mice had increased colon tissue expression of Lcn-2 and the inflammatory cytokine interleukin (IL)-1beta. These initial findings suggested that dietary maltodextrin could increase susceptibility to clinical colitis.
To determine the pathophysiology of this phenomenon, the investigators performed microarray analysis of colonic samples. Multiple genes associated with carbohydrate and lipid metabolism were upregulated in maltodextrin-fed mice, including genes that control the unfolded protein response (UPR), a process in which unfolded proteins accumulate in the endoplasmic reticulum (ER) during ER stress. The most prominently expressed of the UPR-related genes was Ern-2, which encodes inositol-requiring enzyme 1beta, found exclusively in the ER of goblet cells in the small intestine and colon. When maltodextrin causes ER stress in goblet cells, the mucin glycoprotein Mucin-2 (Muc-2), a major component of gut mucus, is misfolded, and gut mucus levels drop. A diminished mucus barrier exposes the intestine to infection and damage, as demonstrated by higher rates of pathogenic bacteria in Muc-2–deficient mice than in control mice, and by more severe intestinal damage when Muc-2–deficient mice are deliberately infected with pathogens.
The investigators found that humans likely have similar responses to dietary maltodextrin. Treating the mucus-secreting, methotrexate-adapted HT29 cell line (HT29-MTX) with 5% maltodextrin resulted in upregulation of Ern-2, mirroring the mechanism observed in mice. Additional testing showed that this process was mediated by p38 mitogen-activated protein kinase: pharmacologic inhibition or knockdown of p38 suppressed RNA expression of Ern-2. The investigators found that p38 was similarly involved in maltodextrin-fed mice.
To show that maltodextrin enhances susceptibility to inflammation via ER stress, the investigators used tauroursodeoxycholic acid (TUDCA) to inhibit ER stress. Indeed, inhibition led to reduced Ern-2 expression in HT29-MTX cells and in mice treated with maltodextrin. Giving TUDCA to maltodextrin-fed mice resulted in less weight loss, improved histology, and lower expression of Lcn-2 and IL-1beta.
The study concluded with three final experiments: The first showed that maltodextrin did not alter mucosa-associated microbiota; the second showed that mice fed 5% maltodextrin long term (for 10 weeks) had low-grade intestinal inflammation on histology, albeit without clinical colitis or weight loss; and the third showed that mice consuming maltodextrin long term had higher blood glucose levels after a 15-hour fast than control mice, supporting recent research suggesting that food additives can disrupt metabolism even in a nonsusceptible host.
“In conclusion,” the investigators wrote, “this study shows that a maltodextrin-enriched diet reduces the intestinal content of Muc-2, thus making the host more sensitive to colitogenic stimuli. These data, together with the demonstration that maltodextrin can promote epithelial intestinal adhesion of pathogenic bacteria, supports the hypothesis that Western diets rich in maltodextrin can contribute to gut disease susceptibility.”
The study was funded by the Italian Ministry of Education, Universities, and Research. The authors reported no conflicts of interest.
SOURCE: Laudisi F et al. CMGH. 2019 Jan 18. doi: 10.1016/j.jcmgh.2018.09.002.
Maltodextrin is a polysaccharide derived from starch hydrolysis and broadly used as a thickener and filler in processed food. While it is considered inert and “generally recognized as safe” by the U.S. Food and Drug Administration, multiple recent studies have demonstrated detrimental roles played by maltodextrin in the intestinal environment, suggesting that this broadly used food additive may play a role in chronic inflammatory diseases.
Importantly, in addition to using a murine model of colitis, Laudisi and colleagues investigated the impact that maltodextrin may have on a “normal” host, i.e., one without genetic susceptibility or induced colitis. While maltodextrin did not induce overt intestinal inflammation, it led to low-grade intestinal inflammation, characterized by subtle but consistent elevation of intestinal inflammatory markers, ultimately leading to metabolic abnormalities.
Altogether, these recent results, together with previous reports, suggest that consumption of the food additive maltodextrin may be a risk factor for the IBD-prone population, as well as a factor promoting chronic low-grade intestinal inflammation leading to metabolic abnormalities in the general population. These findings further support the concept that FDA testing of food additives should be performed in disease-prone and resistant host models, designed to detect chronic and low-grade inflammation, as well as consider impacts on the gut microbiota.
Benoit Chassaing, PhD, is an assistant professor in the Neuroscience Institute and Institute for Biomedical Sciences, Georgia State University, Atlanta. He has no conflicts. These remarks are excerpted from an editorial accompanying Dr. Laudisi’s article (CMGH. 2019 Jan 18. doi: 10.1016/j.jcmgh.2018.09.002).
FROM CELLULAR AND MOLECULAR GASTROENTEROLOGY AND HEPATOLOGY
Key clinical point: The food additive maltodextrin may increase risk of inflammatory bowel disease.
Major finding: When challenged with dextran sulfate sodium, mice consuming maltodextrin developed colitis and lost about 10%-15% of original body weight, compared with negligible inflammation and weight loss in mice not receiving maltodextrin.
Study details: A prospective study involving in vivo experiments with wild-type Balb/c mice and in vitro experiments with murine intestinal crypts and a human intestinal cell line.
Disclosures: The study was funded by the Italian Ministry of Education, Universities, and Research. The investigators reported no conflicts of interest.
Source: Laudisi F et al. CMGH. 2019 Jan 18. doi: 10.1016/j.jcmgh.2018.09.002.
Meta-analysis: IVIG bests anti-D on platelet count in pediatric ITP
For patients with pediatric immune thrombocytopenia (ITP), treatment with intravenous immunoglobulins (IVIG) is more likely to raise platelet count in the short term, compared with anti-D immunoglobulins (anti-D), according to the authors of a recent systematic review and meta-analysis.
Although findings from the meta-analysis support recommendations for first-line IVIG, not all studies reported bleeding symptoms, so the clinical effects of differing platelet responses remain unknown, reported lead author Bertrand Lioger, MD, of François-Rabelais University in Tours, France, and his colleagues.
“To date, no meta-analysis has compared the efficacy and safety of IVIG vs. anti-D,” the investigators wrote in The Journal of Pediatrics.
Each treatment approach has strengths and weaknesses, the investigators noted. Namely, IVIG is more expensive, while anti-D is more likely to cause adverse drug reactions (ADRs), such as disseminated intravascular coagulation and hemolysis.
The present review evaluated 11 studies comparing the efficacy of IVIG with that of anti-D in 704 children with ITP. Platelet response and bleeding were the main efficacy outcomes. The investigators used response thresholds defined by each study because several did not use standardized levels. Other outcomes considered were mortality, disease course, splenectomy, and ADRs. The ADRs included serious adverse reactions, infusion reactions, transfusions, hemoglobin loss, and hemolysis.
In alignment with previous guidelines, anti-D therapy was most often given to RhD-positive, nonsplenectomized children at a dose of 50-75 mcg/kg, whereas IVIG was dosed at 0.8-1 g/kg for 1 or 2 consecutive days.
Results showed that patients treated with IVIG were 15% more likely to have platelet counts greater than 20 × 10⁹/L within 24-72 hours, compared with those given anti-D. This disparity rose to 25% in favor of IVIG when using a threshold of 50 × 10⁹/L.
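For readers curious how such pooled differences are typically derived, the minimal sketch below illustrates fixed-effect, inverse-variance pooling of per-study risk differences in Python. The study counts are invented for illustration and are not data from this meta-analysis.

```python
# Minimal sketch: fixed-effect, inverse-variance pooling of risk
# differences, the standard arithmetic behind pooled binary outcomes.
# All counts below are hypothetical, NOT data from Lioger et al.
import numpy as np

# (responders, total) for IVIG and anti-D in three invented studies
studies = [
    ((45, 60), (35, 58)),
    ((30, 40), (24, 41)),
    ((50, 70), (40, 72)),
]

rds, weights = [], []
for (r1, n1), (r2, n2) in studies:
    p1, p2 = r1 / n1, r2 / n2
    rd = p1 - p2                                    # per-study risk difference
    var = p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2   # its variance
    rds.append(rd)
    weights.append(1 / var)                         # inverse-variance weight

pooled = np.average(rds, weights=weights)
se = (1 / sum(weights)) ** 0.5
print(f"Pooled risk difference: {pooled:.3f} "
      f"(95% CI, {pooled - 1.96 * se:.3f} to {pooled + 1.96 * se:.3f})")
```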
The risk of general symptoms after infusion was lower with anti-D than with IVIG (24.6% vs. 31.4%), although this held only in trials that did not use premedication. Anti-D was more often associated with hemolysis, which made transfusion necessary for some patients.
Although platelet count is often used as a surrogate measure of bleeding risk, the investigators decided that a lack of bleeding data among the studies precluded an accurate determination of clinical superiority between the treatments.
“Severe hemolysis remains an important issue when using anti-D immunoglobulins and premedication reduces the incidence of general symptoms observed with IVIG,” the investigators wrote. “Our conclusions should, however, be cautiously considered due to the poor overall quality of included studies and to limited data about clinically relevant outcomes.”
The study was not supported by outside funding. The investigators reported financial relationships with Amgen, Novartis, Roche Pharma, Sanofi, and others.
SOURCE: Lioger B et al. J Pediatr. 2019;204:225-33.
FROM THE JOURNAL OF PEDIATRICS
Key clinical point: In children with immune thrombocytopenia, IVIG is more likely than anti-D immunoglobulin to raise platelet counts in the short term.
Major finding: Treatment with IVIG was 15% more likely than anti-D immunoglobulin to raise platelet counts higher than 20 × 10⁹/L within 24-72 hours.
Study details: A systematic review and meta-analysis of 11 studies comparing the efficacy of IVIG with that of anti-D in 704 children with ITP.
Disclosures: The meta-analysis did not have outside funding. The investigators reported financial relationships with Amgen, Novartis, Roche Pharma, Sanofi, and others.
Source: Lioger B et al. J Pediatr. 2019;204:225-33.
Risk models fail to predict lower-GI bleeding outcomes
In cases of lower gastrointestinal bleeding (LGIB), albumin and hemoglobin levels are the best independent predictors of severe bleeding, according to investigators.
These findings came from a sobering look at LGIB risk-prediction models. While some models could predict specific outcomes with reasonable accuracy, none of the models demonstrated broad predictive power, reported Natalie Tapaskar, MD, of the department of medicine at the University of Chicago, and her colleagues.
LGIB requires intensive resource utilization and proves fatal in 5%-15% of patients, which means timely and appropriate interventions are essential, especially for those with severe bleeding.
“There are limited data on accurately predicting the risk of adverse outcomes for hospitalized patients with LGIB,” the investigators wrote in Gastrointestinal Endoscopy, “especially in comparison to patients with upper gastrointestinal bleeding (UGIB), where tools such as the Glasgow-Blatchford Bleeding Score have been validated to accurately predict important clinical outcomes.”
To assess existing risk models for LGIB, the investigators performed a prospective observational study involving 170 patients with LGIB who underwent colonoscopy during April 2016–September 2017 at the University of Chicago Medical Center. Data were collected through comprehensive medical record review.
The primary outcome was severe bleeding, defined as any of the following: acute bleeding within the first 24 hours of admission that required transfusion of 2 or more units of packed red blood cells and/or caused a decrease in hematocrit of 20% or more; recurrent bleeding after 24 hours of clinical stability, involving rectal bleeding with an additional hematocrit drop of 20% or more; and/or readmission for LGIB within 1 week of discharge. Secondary outcomes included blood transfusion requirements, in-hospital recurrent bleeding, length of stay, ICU admission, intervention (surgery, interventional radiology, endoscopy), and the comparative predictive ability of seven clinical risk stratification models: AIMS65, Charlson Comorbidity Index, Glasgow-Blatchford, NOBLADS, Oakland, Sengupta, and Strate. Area under the receiver operating characteristic curve (AUC) was used to compare the models' predictive power, and risk of adverse outcomes was calculated by univariable and multivariable logistic regression.
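As an aside for readers unfamiliar with AUC-based model comparison, the sketch below shows how several precomputed risk scores can be compared on one cohort with scikit-learn's roc_auc_score; the scores and outcomes are simulated, not the Chicago cohort data.

```python
# Sketch: comparing risk-score discrimination by AUC on a single cohort.
# Outcomes and scores are simulated for illustration only.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 170
severe = rng.integers(0, 2, size=n)  # 1 = severe bleeding (simulated)

# Hypothetical precomputed risk scores; higher = higher predicted risk
scores = {
    "Glasgow-Blatchford": 2.0 * severe + rng.normal(0, 1, n),
    "Oakland":            1.2 * severe + rng.normal(0, 1, n),
    "Strate":             0.5 * severe + rng.normal(0, 1, n),
}

for name, s in scores.items():
    print(f"{name}: AUC = {roc_auc_score(severe, s):.2f}")
```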
Results showed that median patient age was 70 years. Most of the patients (80%) were African American and slightly more than half were female (58%). These demographic factors were not predictive of severe bleeding, which occurred in about half of the cases (52%). Upon admission, patients with severe bleeding were more likely to have chronic renal failure (30% vs. 17%; P = .05), lower albumin (3.6 g/dL vs. 3.95 g/dL; P less than .0001), lower hemoglobin (8.6 g/dL vs. 11.1 g/dL; P = .0001), lower systolic blood pressure (118 mm Hg vs. 132 mm Hg; P = .01), and higher creatinine (1.3 mg/dL vs. 1 mg/dL; P = .04). After adjustment for confounding variables, the strongest independent predictors of severe bleeding were low albumin (odds ratio, 2.56 per 1-g/dL decrease; P = .02) and low hemoglobin (OR, 1.28 per 1-g/dL decrease; P = .0015).
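The "per 1-g/dL decrease" odds ratio is a direct transform of the logistic regression coefficient: if beta is the log-odds change per 1-g/dL increase, the OR per 1-g/dL decrease is exp(−beta). A one-line check with an illustrative coefficient (chosen to reproduce the reported 2.56, not taken from the paper):

```python
# Worked example: odds ratio per unit DECREASE from a logistic coefficient.
import numpy as np

beta_albumin = -0.94                     # hypothetical log-odds per 1-g/dL increase
or_per_decrease = np.exp(-beta_albumin)  # sign flips for a decrease
print(f"OR per 1-g/dL decrease: {or_per_decrease:.2f}")  # ~2.56
```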
Time from admission to colonoscopy was typically 2-3 days (median, 62.2 hours). An etiology of LGIB was confirmed in 3 out of 4 patients (77%); diverticular bleeding was most common (39%), followed distantly by hemorrhoidal bleeding (15%).
Compared with patients who had milder cases, patients with severe bleeding were more likely to stay in the ICU (49% vs. 19%; P less than .0001), receive a blood transfusion (85% vs. 36%; P less than .0001), and remain in the hospital longer (6 days vs. 4 days; P = .0009). These findings exemplify the high level of resource utilization required for LGIB and show how severe bleeding dramatically compounds intensity of care.
Further analysis showed that none of the seven risk models were predictive across all outcomes; however, some predicted specific outcomes better than others. Leaders were the Glasgow-Blatchford score for blood transfusion (AUC 0.87; P less than .0001), the Oakland score for severe bleeding (AUC 0.74; P less than .0001), the Sengupta score for ICU stay (AUC 0.74; P less than .0001), and the Strate score for both recurrent bleeding during hospital stay (AUC 0.66; P = .0008) and endoscopic intervention (AUC 0.62; P = .01).
The investigators noted that the Glasgow-Blatchford score, which also is used in cases of UGIB, has previously demonstrated accuracy in predicting blood transfusion, as it did in the present study, suggesting that, “[i]n instances where there may be uncertainty of the origin of the bleeding, the Blatchford score may be a preferential choice of risk score.”
“Overall, we found that no singular score performed best across all the outcomes studied nor did any score have an extremely strong discriminatory power for any individual variable,” the investigators wrote, concluding that “... simpler and more powerful prediction tools are required for better risk stratification in LGIB.”
The investigators reported no financial support or conflicts of interest.
*This story was updated on Jan. 31, 2019.
SOURCE: Tapaskar N et al. Gastrointest Endosc. 2018 Dec 18. doi: 10.1016/j.gie.2018.12.011.
FROM GASTROINTESTINAL ENDOSCOPY
Key clinical point: In cases of lower gastrointestinal bleeding (LGIB), albumin and hemoglobin levels are the best independent predictors of severe bleeding.
Major finding: After adjustment for confounding variables, low albumin upon admission was the strongest independent predictor of severe bleeding (OR, 2.56 per 1-g/dL decrease; P = .02).
Study details: A prospective, observational study of 170 patients with LGIB who underwent colonoscopy during April 2016–September 2017 at the University of Chicago Medical Center.
Disclosures: The investigators reported no financial support or conflicts of interest.
Source: Tapaskar N et al. Gastrointest Endosc. 2018 Dec 18. doi: 10.1016/j.gie.2018.12.011.
Impaired clot lysis associated with mild bleeding symptoms
Patients with self-reported mild bleeding symptoms may have impaired clot lysis, according to investigators. This finding is remarkable because it contrasts with known bleeding disorders, such as hemophilia, which are associated with enhanced clot lysis, reported lead author Minka J.A. Vries, MD, of the Cardiovascular Research Institute Maastricht (CARIM) at Maastricht (the Netherlands) University and her colleagues.
The observational study, which included 335 patients undergoing elective surgery at Maastricht University Medical Center, was conducted to better understand lysis capacity, which is challenging to assess in a clinical setting. Although the Euglobulin Lysis Time (ELT) is often used in the clinic, it cannot capture the influence of all hemostatic proteins, nor does it assess fibrin clot formation under physiological conditions.
“In the more recently developed lysis assays,” the investigators wrote in Thrombosis Research, “the turbidity lysis assay and the tissue plasminogen activator–rotational thromboelastometry (tPA-ROTEM) [assay], all plasma proteins are present and fibrin is formed under more physiological conditions for the measurement of fibrinolysis.” These two tests were used in the present study.
Of the 335 adult patients, 240 had self-reported mild bleeding symptoms, and 95 did not. Patients with bleeding disorders, thrombocytopenia, or anemia were excluded, as were pregnant women and those taking blood thinners or NSAIDs. Along with assessing time parameters of fibrinolysis, clot-associated proteins were measured for possible imbalances.
“We hypothesized that clot lysis capacity is enhanced in patients with mild bleeding symptoms,” the investigators wrote, reasoning from the pattern seen in other bleeding disorders. Surprisingly, the results told a different story.
After adjusting for sex, BMI, and age, patients with bleeding symptoms had lower tPA-ROTEM lysis speed (beta −0.35; P = .007) and longer tPA-ROTEM lysis time (beta 0.29; P = .022) than did patients without bleeding symptoms. The investigators found that tPA-ROTEM measurements depended on factor II, factor XII, alpha2-antiplasmin, plasminogen, thrombin activatable fibrinolysis inhibitor (TAFI), and plasminogen activator inhibitor–1 (PAI-1) level. In contrast, turbidity lysis assay measurements were not significantly different between groups. This latter assay was influenced by alpha2-antiplasmin, TAFI, and PAI-1.
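For context, a "beta" from this kind of adjusted comparison comes from a regression of the lysis parameter on group status plus covariates. The sketch below runs that adjustment with statsmodels on simulated data; the variable names and effect size are assumptions, not the study's.

```python
# Sketch: adjusted group comparison (beta for bleeding status on lysis
# time, controlling for sex, BMI, and age). Data are simulated.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 335
df = pd.DataFrame({
    "bleeder": rng.integers(0, 2, n),  # 1 = self-reported mild bleeding
    "female":  rng.integers(0, 2, n),
    "bmi":     rng.normal(26, 4, n),
    "age":     rng.normal(55, 12, n),
})
# Simulated standardized lysis time with a small positive bleeder effect
df["lysis_time"] = 0.3 * df["bleeder"] + rng.normal(0, 1, n)

X = sm.add_constant(df[["bleeder", "female", "bmi", "age"]])
fit = sm.OLS(df["lysis_time"], X).fit()
print(f"beta = {fit.params['bleeder']:.2f}, P = {fit.pvalues['bleeder']:.3f}")
```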
“We did not find evidence for systemic hyperfibrinolytic capacity in patients reporting mild bleeding symptoms in comparison to patients not reporting bleeding symptoms,” the investigators concluded. “tPA-ROTEM even suggested a slower clot lysis in these patients. Though this may appear counterintuitive, our results are in line with two papers assessing systemic clot lysis in mild bleeders.”
Although supporting evidence for this phenomenon is accumulating, it remains poorly understood.
“We have no good explanation for these findings,” the investigators noted.
This study was funded by the Sint Annadal Foundation Maastricht, Maastricht University Medical Centre, CTMM INCOAG Maastricht, Cardiovascular Research Institute Maastricht, and the British Heart Foundation. No conflicts of interest were reported.
SOURCE: Vries MJA et al. Thromb Res. 2018 Dec 4. doi: 10.1016/j.thromres.2018.12.004.
FROM THROMBOSIS RESEARCH
Key clinical point: Patients with self-reported mild bleeding symptoms may have impaired clot lysis, in contrast with known bleeding disorders.
Major finding: Patients with mild bleeding had longer whole blood tissue plasminogen activator-rotational thromboelastometry lysis times (P = .022) than did patients without symptoms.
Study details: An observational study of 335 adult patients undergoing elective surgery.
Disclosures: This study was funded by the Sint Annadal Foundation, Maastricht University Medical Center, CTMM INCOAG Maastricht, Cardiovascular Research Institute Maastricht, and the British Heart Foundation. No conflicts of interest were reported.
Source: Vries MJA et al. Thromb Res. 2018 Dec 4. doi: 10.1016/j.thromres.2018.12.004.
Many misunderstand purpose of tumor profiling research
Although most cancer patients and parents of cancer patients understand that genomic tumor profiling research aims to improve care for future patients, many also believe that the process will benefit present treatment, according to a recent survey conducted at four academic treatment centers.
Misunderstandings were most common among less-educated individuals and those with little genetic knowledge, reported lead author Jonathan M. Marron, MD, MPH, of the Dana-Farber Cancer Institute in Boston and his colleagues.
Previous surveys have shown that “up to 60% of research participants demonstrate evidence of therapeutic misconception,” the investigators wrote in JCO Precision Oncology, referring to “the belief that the primary purpose of research is therapeutic in nature rather than acquisition of generalizable knowledge.”
“Although advances in targeted therapeutics generate great excitement, they may also blur the line between research and clinical care,” the investigators wrote. As such therapeutics become more common, so may misconceptions.
To evaluate current views of genomic tumor profiling research, the investigators surveyed 45 cancer patients and parents of cancer patients at four academic treatment centers. All patients were aged 30 years or younger at enrollment and undergoing tumor profiling; parents were asked to respond if patients were younger than 18 years.
The survey assessed two levels of understanding: basic and comprehensive. To demonstrate basic understanding, a respondent needed to recognize that “the primary purpose was not to improve the patient’s treatment.” To demonstrate comprehensive understanding, the respondent needed to recognize four facts: “primary purpose was not to improve patient’s treatment,” “primary purpose was to improve treatment of future patients,” “there may not be direct medical benefit,” and “most likely result of participation was not increased likelihood of cure.”
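The two-tier scoring rule is simple enough to state as code. The sketch below is an editorial paraphrase with hypothetical field names, not the study's instrument:

```python
# Sketch of the two-tier understanding score described above.
# The four boolean answers are hypothetical field names.
def score_understanding(knows_not_own_treatment: bool,
                        knows_future_benefit: bool,
                        knows_no_direct_benefit: bool,
                        knows_cure_unlikely: bool) -> str:
    if not knows_not_own_treatment:
        return "neither"            # fails even basic understanding
    if knows_future_benefit and knows_no_direct_benefit and knows_cure_unlikely:
        return "comprehensive"      # all four facts recognized
    return "basic"

print(score_understanding(True, True, False, True))  # -> "basic"
```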
Forty-four out of 45 survey participants responded. Of these, 30 (68%) demonstrated basic understanding, and 24 (55%) had comprehensive understanding. Respondents with higher education were more likely to answer correctly, with 81% showing basic understanding and 73% showing comprehensive understanding; among less-educated respondents, only half (50%) had basic understanding, and about 1 out of 4 (28%) had comprehensive understanding. Similar disparities were observed among respondents with more versus less genetic knowledge. Almost all respondents (93%) who thought that profiling would help present treatment also believed it would benefit future patients.
Taken as a whole, these findings suggest that therapeutic misconception in genomic tumor profiling research is relatively common, which echoes previous findings. The investigators recommended that clinicians anticipate these knowledge gaps and aim to overcome them.
“Interventional work to improve participant understanding of these complexities and nuances is necessary as sequencing moves from the laboratory to the clinic,” the investigators concluded. “Such work can guide pediatric oncologists in how to manage expectations and best counsel patients and families about the meaning and significance of clinical profiling results.”
The study was funded by Hyundai Hope on Wheels, the Friends for Life Foundation, the Gillmore Fund, National Institutes of Health, and others. The investigators reported financial affiliations with Merck, Millennium, Novartis, Roche, Amgen, and others.
SOURCE: Marron JM et al. JCO Precis Oncol. 2019 Jan 22. doi: 10.1200/PO.18.00176.
FROM JCO PRECISION ONCOLOGY
Key clinical point: Although most cancer patients and parents of cancer patients understand that genomic tumor profiling research aims to improve care for future patients, many also believe that the process will benefit present treatment.
Major finding: Fifty-five percent of respondents demonstrated comprehensive understanding of the purpose of genomic tumor profiling research.
Study details: A survey of 45 cancer patients and parents of cancer patients conducted at four academic treatment centers.
Disclosures: The study was funded by Hyundai Hope on Wheels, the Friends for Life Foundation, the Gillmore Fund, National Institutes of Health, and others. The investigators reported financial affiliations with Merck, Millennium, Novartis, Roche, Amgen, and others.
Source: Marron JM et al. JCO Precis Oncol. 2019 Jan 22. doi: 10.1200/PO.18.00176.
Self-reporting extends lung cancer survival
Patients with nonprogressive, metastatic lung cancer who report symptoms through a weekly, web-based monitoring system may survive longer than those who undergo standard imaging surveillance, according to a recent French study.
Self-reporting may notify care providers about adverse effects or recurrence earlier than imaging, suggested lead author Fabrice Denis, MD, PhD, of Institut Inter-régional de Cancérologie Jean Bernard in Le Mans, France, and his colleagues. Findings were published in a letter in JAMA.
In 2017, a similar, single-center study showed that web-based symptom reporting could improve survival in patients undergoing chemotherapy. The lead investigator on that trial was Ethan Basch, MD, who coauthored the present publication.
The current prospective study involved 121 patients treated at five centers in France between June 2014 and December 2017. Eligibility required a diagnosis of nonprogressive, metastatic lung cancer, including stage III or IV non–small cell or small cell disease. Patients were treated with antiangiogenic therapy, chemotherapy, immunotherapy, or tyrosine kinase inhibitors.
Patients in the control group had standard follow-up with imaging every 3-6 months. In contrast, the patient-reported outcomes (PRO) group completed a weekly online survey of 13 common symptoms between follow-up visits. If patients reported symptoms that matched with predefined criteria for severity or worsening, then the treating oncologist was notified.
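The triggering logic can be pictured as a simple threshold rule over the weekly responses. The sketch below is a schematic reconstruction; the symptom names, grading scale, and thresholds are placeholders, not the trial's actual criteria.

```python
# Schematic PRO alert rule: flag the oncologist when any symptom is
# severe or has worsened sharply since last week. Thresholds are
# placeholders, not the trial's predefined criteria.
SEVERITY_THRESHOLD = 3   # hypothetical 0-4 symptom grade
WORSENING_DELTA = 2

def needs_alert(this_week: dict, last_week: dict) -> bool:
    for symptom, grade in this_week.items():
        if grade >= SEVERITY_THRESHOLD:
            return True
        if grade - last_week.get(symptom, 0) >= WORSENING_DELTA:
            return True
    return False

print(needs_alert({"cough": 1, "fatigue": 3}, {"cough": 1, "fatigue": 1}))  # True
```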
When an 18-month interim analysis showed significant survival advantage in the PRO group, recruitment was stopped, and control patients were moved to the PRO group. After 2 years of follow-up, 40 patients (66.7%) in the control group had died, compared with 29 patients (47.5%) in the PRO group. Before censoring for crossover, median overall survival (OS) was 22.5 months in the PRO group, compared with 14.9 months in the control group (P = .03). Censoring for crossover widened the gap between groups by more than a month (22.5 vs. 13.5 months; P = .005).
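Median overall survival in each arm is typically read off a Kaplan-Meier curve. A minimal sketch with the lifelines library, run on simulated durations rather than the trial data:

```python
# Sketch: Kaplan-Meier median overall survival by arm (lifelines).
# Durations and event indicators are simulated, NOT the trial data.
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(2)
arms = {
    "PRO":     (rng.exponential(32, 61), rng.random(61) < 0.5),
    "control": (rng.exponential(20, 60), rng.random(60) < 0.7),
}

for label, (months, died) in arms.items():
    kmf = KaplanMeierFitter()
    kmf.fit(months, event_observed=died, label=label)
    print(f"{label}: median OS = {kmf.median_survival_time_:.1f} months")
```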
“A potential mechanism of action is that symptoms suggesting adverse events or recurrence were detected earlier,” the investigators concluded.
The study was funded by SIVAN Innovation. Investigators reported financial affiliations with AstraZeneca, SIVAN Innovation, Ipsen, Roche, the National Cancer Institute, Lilly, and others.
SOURCE: Denis F et al. JAMA. 2019 Jan 22;321(3):306-7.
FROM JAMA
Key clinical point: Patients with nonprogressive, metastatic lung cancer who report symptoms through a weekly, web-based monitoring system may survive longer than those who undergo standard imaging surveillance.
Major finding: Median overall survival (OS) of patients in the web-based monitoring group was 22.5 months versus 13.5 months for patients in the standard imaging group (P = .005).
Study details: A prospective study of 121 nonprogressive, metastatic lung cancer patients being treated with antiangiogenic therapy, chemotherapy, immunotherapy, or tyrosine kinase inhibitors.
Disclosures: The study was funded by SIVAN Innovation. Investigators reported financial affiliations with AstraZeneca, SIVAN Innovation, Ipsen, Roche, the National Cancer Institute, Lilly, and others.
Source: Denis F et al. JAMA. 2019 Jan 22;321(3):306-7.
High postpartum breast cancer metastasis risk may persist for a decade
Increased risk of metastasis associated with postpartum breast cancer (PPBC) in women 45 years or younger may persist for 10 years after childbirth, a finding that may give reason to extend the 5-year window currently defining PPBC.
Analysis of more than 700 patients showed that risk of metastasis was approximately twofold higher for a decade after childbirth, with risks about 3.5- to fivefold higher in women diagnosed with stage I or II disease, reported lead author Erica Goddard, PhD, of the Fred Hutchinson Cancer Research Center in Seattle, and her colleagues. Regardless of parity status, patients diagnosed with stage III disease had poor outcomes.
“The high risk for metastasis is independent of poor prognostic indicators, including biological subtype, stage, age, or year of diagnosis,” the investigators wrote in JAMA Network Open. “Yet, PPBC is an underrecognized subset of breast cancer, and few studies address the associated high risk for metastasis.”
The cohort study involved 701 women 45 years or younger who were diagnosed with breast cancer between 1981 and 2014. Cases before 2004 were identified retrospectively; from 2004 onward, patients were enrolled prospectively. The investigators analyzed rates of distant metastasis and looked for associations with tumor cell proliferation, lymphovascular invasion, lymph node involvement, and other clinical attributes. Distant metastasis was defined as spread beyond the ipsilateral breast or local draining lymph nodes, detected by physical exam, imaging, and/or pathological testing. The investigators also stained available tumor samples for Ki67, a proliferation marker used for prognosis and to distinguish ER-positive luminal A from ER-positive luminal B disease.
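The Ki67-based distinction mentioned above is typically operationalized as a simple threshold on the Ki67 index. The sketch below uses a 14% cutoff, one commonly cited value in the literature; the cutoff actually applied in this study is an assumption here, not something reported in the article.

```python
# Illustrative only: classify an ER-positive tumor as luminal A vs. luminal B
# by Ki67 index. The 14% cutoff is one commonly cited literature threshold,
# not necessarily the one used in this study.

def luminal_subtype(er_positive: bool, ki67_percent: float,
                    cutoff: float = 14.0) -> str:
    if not er_positive:
        return "not luminal"
    return "luminal B" if ki67_percent >= cutoff else "luminal A"

print(luminal_subtype(True, 8.0))   # luminal A
print(luminal_subtype(True, 30.0))  # luminal B
```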
Compared with nulliparous patients, women 45 years or younger who were diagnosed with PPBC within 5 years of childbirth were 2.13 times as likely to develop metastasis (P = .009). This elevated risk persisted for another 5 years: women diagnosed within 5-10 years of childbirth showed a similar hazard ratio of 2.23 (P = .006). Beyond 10 years, the hazard ratio dropped to 1.6, which was not statistically significant (P = .13). Patients diagnosed with stage I or II disease had more dramatic risk profiles, with hazard ratios of 3.5 for diagnoses up to 5 years postpartum and 5.2 for diagnoses 5-10 years postpartum. These findings suggest that, for some patients, the 5- to 10-year window may be the riskiest time for metastasis, and one that has historically been excluded from the definition of PPBC.
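Hazard ratios of this kind typically come from a Cox proportional hazards model with the postpartum window entered as a categorical covariate and nulliparous patients as the reference group. The sketch below shows the general shape of such an analysis with the lifelines library; the data are synthetic and the column and category names are illustrative, not the authors'.

```python
# Sketch of the kind of Cox proportional hazards model behind the reported
# hazard ratios. The data frame is synthetic; exp(coef) in the summary is
# the hazard ratio for each postpartum window versus nulliparous patients.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter  # pip install lifelines

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "years_to_event": rng.exponential(8.0, n),  # follow-up time
    "metastasis": rng.integers(0, 2, n),        # 1 = distant metastasis
    "window": rng.choice(
        ["nulliparous", "pp_0_5yr", "pp_5_10yr", "pp_over_10yr"], n),
})

# One-hot encode the window and drop the nulliparous column so it serves
# as the reference level for the fitted hazard ratios.
X = pd.get_dummies(df, columns=["window"], dtype=float)
X = X.drop(columns=["window_nulliparous"])

cph = CoxPHFitter()
cph.fit(X, duration_col="years_to_event", event_col="metastasis")
cph.print_summary()
```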
In addition, patients diagnosed with estrogen receptor–positive breast cancer within 10 years of childbirth had outcomes similar to those of nulliparous women with estrogen receptor–negative breast cancer, and postpartum women with estrogen receptor–negative breast cancer had worse outcomes than did nulliparous women with the same subtype. Furthermore, PPBC was associated with higher rates of lymph node involvement and lymphovascular invasion. Collectively, these findings suggest that PPBC is generally more aggressive than breast cancer in nulliparous women. In contrast, Ki67 positivity, which identifies the luminal B subtype, was associated with worse outcomes regardless of parity status, though this association did not reach statistical significance.
“[T]hese data suggest that stages I and II breast cancer in patients with PPBC diagnosed within 10 years of parturition may be underestimated in their risk for metastasis, as parity status is not currently factored into clinical decision-making algorithms, such as the National Comprehensive Cancer Network guidelines,” the investigators concluded. “In sum, we suggest that poor-prognostic PPBC is an increasing problem that merits more dedicated research.”
The study was funded by the National Cancer Institute, the National Institutes of Health, the U.S. Department of Defense, and other organizations. Dr. Goddard reported funding from the NCI and NIH. Dr. Mori reported financial support from the Department of Defense.
SOURCE: Goddard E et al. JAMA Netw Open. 2019 Jan 11. doi: 10.1001/jamanetworkopen.2018.6997.
FROM JAMA NETWORK OPEN
Key clinical point: Increased risk of metastasis associated with postpartum breast cancer in women 45 years or younger may persist for 10 years after childbirth, instead of 5 years, as previously reported.
Major finding: Compared with nulliparous breast cancer patients, women 45 years or younger diagnosed with breast cancer within 5-10 years of childbirth were 2.23 times as likely to develop metastasis.
Study details: A retrospective and prospective cohort study involving 701 women with stage I, II, or III breast cancer who were 45 years or younger at time of diagnosis.
Disclosures: The study was funded by the National Cancer Institute, the National Institutes of Health, the U.S. Department of Defense, and other organizations. Dr. Goddard reported funding from the NCI and NIH. Dr. Mori reported financial support from the Department of Defense.
Source: Goddard E et al. JAMA Netw Open. 2019 Jan 11. doi: 10.1001/jamanetworkopen.2018.6997.