After prior TNFi in axSpA, secukinumab and another TNFi appear equivalent
MADRID – In axial spondyloarthritis patients who discontinue a tumor necrosis factor inhibitor (TNFi), there does not appear to be any advantage for using the anti–interleukin-17 biologic secukinumab over a different tumor necrosis factor inhibitor for next therapy, according to an analysis presented at the European Congress of Rheumatology.
“Switching to secukinumab [Cosentyx] might even be inferior in many patients,” according to Adrian Ciurea, MD, of the clinic for rheumatology at University Hospital Zürich.
This conclusion was reached in a retrospective analysis of axial spondyloarthritis (axSpA) patients enrolled in the Swiss Clinical Quality Management Cohort. Although Dr. Ciurea said that a prospective trial is needed to confirm the findings, this study was conducted because there have been, up until now, “no data to choose between options” to guide this choice.
In this study of 382 axSpA patients who were candidates for a new biologic after discontinuing a previous TNFi, 275 were started on a different TNFi and 107 were started on secukinumab. Although about 60% of patients in both groups were HLA-B27 positive, many other characteristics, including those related to disease severity, differed between the groups, Dr. Ciurea acknowledged.
Specifically, the proportion of patients starting secukinumab who had previously been treated with two or more TNF inhibitors was greater than that among patients switching to another TNFi (77.6% vs. 37.8%; P less than .001). In addition, patients in the secukinumab group had higher baseline disease activity, more enthesitis, and greater axial impairment.
These were reflected in higher average Bath Ankylosing Spondylitis Disease Activity Index scores (6.1 vs. 4.8; P less than .001) as well as other baseline clinical scoring methods, such as the Bath Ankylosing Spondylitis Functional Index and the Maastricht Ankylosing Spondylitis Enthesitis Score.
However, baseline high-sensitivity C-reactive protein levels, swollen joint counts, and symptom duration did not differ significantly between the groups, although all were numerically higher in the secukinumab group. The proportion of patients with uveitis was higher in the TNFi group. About 70% of patients in both groups had discontinued their prior TNFi for inadequate response.
For the primary assessment of drug survival on the new therapy, the unadjusted median time was 1.1 years in the secukinumab group and 2.0 years in the group switched to a new TNFi. After adjustment for baseline characteristics and disease severity, this difference was not statistically significant.
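For readers unfamiliar with how a median drug survival figure is read off a retention analysis, the following is a minimal sketch of a Kaplan-Meier estimate on hypothetical data; the column names and values are illustrative assumptions, not the Swiss cohort data.

```python
# Minimal sketch: median drug survival from a Kaplan-Meier fit on
# hypothetical data. Each row is one patient: time on the new drug in
# years and whether the drug was discontinued (1) or follow-up was
# censored while the patient was still on the drug (0).
import pandas as pd
from lifelines import KaplanMeierFitter

df = pd.DataFrame({
    "group": ["secukinumab"] * 4 + ["TNFi"] * 4,
    "years_on_drug": [0.5, 1.0, 1.3, 2.5, 1.2, 2.0, 2.4, 3.1],
    "discontinued": [1, 1, 1, 0, 1, 1, 0, 0],
})

kmf = KaplanMeierFitter()
for name, sub in df.groupby("group"):
    kmf.fit(sub["years_on_drug"], event_observed=sub["discontinued"], label=name)
    # Median drug survival: the time at which half of patients remain on the drug.
    print(name, "median drug survival (years):", kmf.median_survival_time_)
```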
“There was an interaction with gender, indicating a significantly higher risk of discontinuing secukinumab than a new TNFi in men,” according to Dr. Ciurea. This was not seen in women.
Previous studies have shown the response rate to a second TNFi is typically lower than for an initial TNFi therapy. Previous studies have also shown that response to secukinumab is lower in patients with previous TNFi experience than in those who are naive to biologics, Dr. Ciurea said. This analysis suggests that the likelihood of sustained disease control is not greater in TNFi-experienced patients who start secukinumab relative to a different TNFi.
When asked if the data had been analyzed to compare response in patients exposed to only one prior TNFi, Dr. Ciurea replied that this could not be done because the sample size was too small.
Although Dr. Ciurea acknowledged the limitations of retrospective studies with risk adjustments, he concluded that there does not appear to be an advantage for initiating secukinumab over starting a different TNFi in axSpA patients who require a switch from their current TNFi.
Although he noted that this is the first study to address the question objectively, Dr. Ciurea said, “A sufficiently powered, prospective, head-to-head trial is needed.”
Dr. Ciurea reported multiple financial relationships with pharmaceutical companies but received no funding for this study.
SOURCE: Tellenbach C et al. Ann Rheum Dis. 2019;78(Suppl 2):197. Abstract OPO237, doi: 10.1136/annrheumdis-2019-eular.2427
REPORTING FROM EULAR 2019 CONGRESS
CDC Advisory: Acute Flaccid Myelitis
Late summer is the season to be especially alert for possible cases of acute flaccid myelitis (AFM), the CDC says.
Since 2014, when the CDC began tracking AFM, 570 cases, mostly in children, have been reported. Outbreaks have followed a pattern: every 2 years, spiking between August and October. Nearly all states and DC have reported cases. The largest outbreak, 233 cases, was in 2018. Theoretically, 2019 would be an off year, but too little is known about AFM to say outbreaks are unlikely.
AFM starts with symptoms similar to those of a viral infection but can progress rapidly to limb weakness, then respiratory failure. Most patients are previously healthy children, average age 5 years old, who had respiratory symptoms or fever consistent with a viral infection less than a week before they experienced sudden weakness in their arms or legs. On average, the CDC receives reports of suspected AFM cases 18 days after the patient develops limb weakness.
The CDC believes viruses play a role, but which ones is still unclear. Symptoms have been found to develop after poliovirus, West Nile virus, and adenovirus infections. In an analysis of confirmed cases from 2018, CDC researchers detected enteroviruses and rhinoviruses in nearly half of stool and respiratory specimens. However, of 74 cases with a cerebrospinal fluid specimen, only 2 were positive for enteroviruses. All specimens tested negative for poliovirus.
But even when AFM is associated with a viral infection, it is not known how the infection triggers the condition, or why it does so in some people and not others. AFM is rare, affecting ≤ 2 children per million in the US every year, whereas viral infections from enteroviruses are common, especially in children and especially in the late summer and early autumn months. Yet only a small number of those infected develop AFM, while most recover.
AFM can be difficult to diagnose because its symptoms are similar to those of other neurologic diseases, such as Guillain-Barré syndrome. As yet, no laboratory test is available; diagnosis relies on physical examination and magnetic resonance imaging (MRI) of the spinal cord.
There also are no proven ways to treat or prevent AFM, which is why timing is key. The CDC says that as soon as AFM is suspected, clinicians should collect cerebrospinal fluid, serum, stool, and nasopharyngeal swabs. If an MRI shows a spinal lesion with some gray matter involvement, alert the health department and send specimens and medical records. Refer to specialists, monitor the patient for worsening symptoms, hospitalize if indicated, and begin treatment and rehabilitation.
In short: no specific etiology, no specific way to diagnose, and no specific treatment exist for AFM. Treatments, including immunoglobulin, corticosteroids, and antivirals, have been tried, but no clear evidence exists that any have affected recovery. Otherwise, treatment is supportive, with physical and occupational therapy.
The length of recovery varies. Some people make a full recovery; most have continued muscle weakness even after a year.
The CDC is researching possible risk factors, conducting advanced laboratory testing and research to determine how viral infections may lead to AFM, and tracking long-term patient outcomes.
Clinicians can contact neurologists who specialize in AFM through the AFM Physician Consult and Support Portal: https://myelitis.org/living-with-myelitis/resources/afm-physician-support-portal/.
Guidelines update donor selection criteria for HSCT
Newly updated guidelines can inform the selection of adult donors and cord blood units for allogeneic hematopoietic stem cell transplant.
The evidence-based guidelines suggest high-resolution human leukocyte antigen (HLA) matching and donor age are important when selecting adult donors, while HLA matching, cell dose, and banking practices should be considered when selecting cord blood units.
The guidelines were developed by the National Marrow Donor Program (NMDP) and Center for International Blood and Marrow Transplant Research (CIBMTR) and were recently published in Blood.
Adult donors
The guidelines recommend high-resolution HLA typing for adult donors and patients. This means typing for HLA-A, -B, -C, and -DRB1, at minimum. Typing at other loci – DPB1, DQB1, DRB3/4/5, DQA1, and DPA1 – is “optional but often helpful.”
An 8/8 HLA-matched donor is considered optimal. If only 7/8-matched donors are available, select a donor with a single allele mismatched at the patient’s homozygous locus if possible, and select an HLA-C*03:03 mismatch over an HLA-C*03:04 mismatch where applicable.
For both 8/8- and 7/8-matched donors, try to avoid mismatches at DQB1 and DRB3/4/5, and select DPB1 mismatches based on the DPB1 T-cell epitope algorithm. Mismatches of allotypes targeted by donor-specific HLA antibodies (DSA), including DQA1 and DPA1, should be avoided.
The guidelines recommend pursuing multiple donors because not all potential donors will be available. Younger donors should be prioritized over older donors. Other factors – such as sex or cytomegalovirus serostatus – should not affect donor selection.
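As a rough illustration of how these adult-donor criteria might be combined in practice, the sketch below filters and orders hypothetical donor records; the field names, eligibility rule, and sort order are illustrative assumptions, not specifications from the NMDP/CIBMTR guidelines.

```python
# Illustrative sketch: apply the adult-donor criteria summarized above to
# hypothetical donor records. Keep 8/8 or 7/8 HLA-matched donors without
# DSA-targeted mismatches, prefer the higher match grade, then the younger
# donor. Sex and CMV serostatus are deliberately ignored, per the guidance.
from dataclasses import dataclass

@dataclass
class Donor:
    donor_id: str
    hla_match: int      # matched alleles out of 8 (HLA-A, -B, -C, -DRB1)
    dsa_targeted: bool  # any mismatch targeted by donor-specific HLA antibodies
    age_years: int

def rank_donors(donors):
    eligible = [d for d in donors if d.hla_match >= 7 and not d.dsa_targeted]
    return sorted(eligible, key=lambda d: (-d.hla_match, d.age_years))

candidates = [
    Donor("D1", hla_match=8, dsa_targeted=False, age_years=42),
    Donor("D2", hla_match=8, dsa_targeted=False, age_years=24),
    Donor("D3", hla_match=7, dsa_targeted=True, age_years=22),
]
for d in rank_donors(candidates):
    print(d.donor_id, f"{d.hla_match}/8", d.age_years)
# D2 (8/8, age 24) is listed before D1 (8/8, age 42); D3 is excluded for DSA.
```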
Cord blood
For cord blood donations, testing attached segment identity is mandatory, red blood cell–replete units are not recommended, and both unit cryovolume and year of cryopreservation should be taken into consideration. The guidelines note that “some expert centers” favor red blood cell–depleted units with a postcryopreservation volume of about 25 ml/bag, and units banked more recently “may be linked to optimal banking practices.”
The guidelines recommend high-resolution HLA typing of a minimum of eight alleles (HLA-A, -B, -C, and -DRB1) for cord blood units and patients. A 4/6 match (HLA-A, -B, -DRB1) is acceptable, as is a 4/8 match (HLA-A, -B, -C, and -DRB1) or greater. In the case of a double-unit transplant, there is no need to match the units to each other.
“DSA must be considered on a case-by-case basis,” according to the guidelines. The patient’s diagnosis, prior immunosuppressive therapy, planned conditioning regimen, and DSA number/titer/specificity/complement fixation should be taken into consideration. DSA-targeted units should be avoided in patients with nonmalignant conditions and used with caution in patients with hematologic malignancies.
For single–cord blood units, the total nucleated cell dose should be at least 2.5 × 10⁷/kg, and the number of CD34+ cells should be at least 1.5 × 10⁵/kg. For double-unit transplants, the total nucleated cell dose should be at least 1.5 × 10⁷/kg for each unit, and the number of CD34+ cells should be at least 1.0 × 10⁵/kg for each unit.
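As a worked example of that dose arithmetic, here is a minimal sketch that checks a hypothetical unit against the minimums quoted above; the function name and figures are illustrative assumptions, not part of the guidelines.

```python
# Sketch: check a cord blood unit's cell doses against the minimums quoted
# above. TNC = total nucleated cells; doses are expressed per kg of recipient
# body weight. Thresholds: single unit >= 2.5e7 TNC/kg and >= 1.5e5 CD34+
# cells/kg; each unit of a double-unit graft >= 1.5e7 TNC/kg and >= 1.0e5
# CD34+ cells/kg.

def meets_dose_minimums(tnc_total, cd34_total, recipient_kg, double_unit=False):
    tnc_per_kg = tnc_total / recipient_kg
    cd34_per_kg = cd34_total / recipient_kg
    tnc_min, cd34_min = (1.5e7, 1.0e5) if double_unit else (2.5e7, 1.5e5)
    return tnc_per_kg >= tnc_min and cd34_per_kg >= cd34_min

# Hypothetical unit for a 25-kg child: 90 x 10^7 TNC and 5 x 10^6 CD34+ cells
# give 3.6 x 10^7 TNC/kg and 2.0 x 10^5 CD34+ cells/kg, so the check passes.
print(meets_dose_minimums(tnc_total=90e7, cd34_total=5e6, recipient_kg=25))  # True
```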
The guidelines note that additional research is needed to inform how to balance cell dose against HLA match. However, cell dose should often take priority over HLA match for adults and larger pediatric patients, and HLA match can take priority in children, smaller adults, or patients with common HLA typing who have multiple units with a high cell dose.
The guidelines’ authors reported relationships with MolMed, NexImmune, AbbVie, Bellicum, Incyte, Medigene, Merck, Nektar, Novartis, Servier, Miltenyi, and the U.S. government/military.
SOURCE: Dehn J et al. Blood. 2019 Jul 10. doi: 10.1182/blood.2019001212.
FROM BLOOD
Large genetic cohort supports NfL as Alzheimer’s biomarker
LOS ANGELES – Neurofilament light, or NfL, is an increasingly studied biomarker of axonal damage across a range of neurodegenerative diseases, including Alzheimer’s disease. Because it can be measured in blood, it offers a less invasive measure of disease progression in Alzheimer’s than cerebrospinal fluid markers.
At the Alzheimer’s Association International Conference, scientists studying the world’s largest cohort of early-onset Alzheimer’s families presented results from a cross-sectional and longitudinal study of more than 2,000 carriers and noncarriers of a single Alzheimer’s-causing mutation (Presenilin 1 E280A) that occurs in an extended Colombian family.
While previous studies have also looked at NfL in cohorts of autosomal dominant mutation carriers, this study strengthens evidence for NfL as an Alzheimer’s biomarker in the largest single-mutation cohort to date.
Yakeel Quiroz, PhD, of Harvard University in Boston and colleagues reported that, in a cross-sectional study of 1,070 mutation carriers and 1,074 noncarriers aged 8-75 years (mean age, 29-30 years; 46% male), mean plasma NfL levels were elevated in cognitively unimpaired carriers (18.08 pg/mL), compared with noncarriers (9.09 pg/mL; P less than .0001).
Longitudinal data from 504 of those carriers and noncarriers showed that NfL levels begin to diverge significantly between the groups at age 22, more than 2 decades before the mean onset of mild cognitive impairment for this cohort (44 years). The between-group differences in NfL continued to widen with advancing age.
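As a purely illustrative sketch of how an age of divergence can be probed (the investigators’ actual statistical model is not described here), one could compare carriers and noncarriers within successive age bins on synthetic data:

```python
# Illustrative sketch only: scan age bins for the earliest significant
# carrier-vs-noncarrier difference in plasma NfL. The synthetic data and the
# per-bin Welch t-test are assumptions, not the study's analysis.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
ages = np.arange(10, 50, 4)  # age-bin midpoints, years
# Toy model: carriers' NfL starts rising after age 22; noncarriers stay flat.
carriers = {a: rng.normal(9 + 0.6 * max(0, a - 22), 2.0, size=60) for a in ages}
noncarriers = {a: rng.normal(9, 2.0, size=60) for a in ages}

for a in ages:
    _, p = ttest_ind(carriers[a], noncarriers[a], equal_var=False)
    if p < 0.05:
        print(f"Earliest age bin with a significant NfL difference: {a} (p = {p:.3g})")
        break
```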
“At approximately age 22, the axons, the neurons are already changing, and this measure serves as an early sign of degeneration,” Dr. Quiroz said in an interview. “This is really telling us about neurodegeneration related to Alzheimer’s disease because these are people destined to develop Alzheimer’s dementia later in life and have no age-related comorbidities that could cause elevation in NfL.”
A study published early this year in a different cohort of about 400 autosomal dominant Alzheimer’s disease mutation carriers and noncarriers found that the longitudinal rate of change of serum NfL could discriminate carriers from noncarriers almost a decade earlier than cross-sectional absolute NfL levels – at 16 years and 7 years, respectively, before expected onset of symptoms (Nat Med. 2019 Feb;25[2]:277–83).
In Dr. Quiroz and colleagues’ study, both cross-sectional and longitudinal findings showed carriers to significantly differ from noncarriers by age 22 years. “We’re seeing differences between groups that reach statistical significance earlier” – decades, in this case, before onset of symptoms, Dr. Quiroz said. The current study is distinguished by its exceptional power, she said: “No one has done this with such a large number of carriers with a single genetic mutation.”
Eric Reiman, MD, of Banner Alzheimer’s Institute in Phoenix, a coauthor of the study who presented the findings at the conference, commented in an interview that they illustrate “the opportunity for fluid biomarkers to be used in trials.”
Dr. Reiman cautioned, however, that the NfL measurements are likely a more useful measure of preclinical neurodegeneration in genetic early-onset Alzheimer’s than in late-onset or sporadic disease, which represents the lion’s share of Alzheimer’s cases.
“In autosomal dominant Alzheimer’s disease, these [NfL] changes really go up – probably more so than in late onset,” he said.
Dr. Reiman said these cohort findings add to growing interest in less-invasive biomarkers for Alzheimer’s, both in research and clinical practice. “If NfL is already elevated as a marker of active neurodegeneration, in early phase trials you might think about looking to it as a proof of concept – so where in 6-12 months you can see reductions [in NfL].”
Dr. Reiman added that “there will be other fluid biomarkers coming down the pike that will be exciting as well, and which people will learn a lot more about in the next few months.”
Dr. Quiroz had no disclosures related to her findings. Other authors on the study, including Dr. Reiman, have received research support and/or consulting fees from pharmaceutical manufacturers.
REPORTING FROM AAIC 2019
Exercise counters astronauts’ dizziness after space flight
Months of spaceflight can leave astronauts feeling dizzy when they get back to terra firma. A new study published online in Circulation shows that up to 2 hours of daily resistance and endurance training during the mission, combined with IV fluid replacement upon return to Earth, completely eliminated dizziness and fainting during normal activity. The exercise regimen counters cardiovascular, bone, and muscular deconditioning.
The dizziness is related to low blood pressure upon standing up from a sitting or lying position, which occurs when blood rushes to the feet and away from the brain. The phenomenon, known as orthostatic hypotension, has plagued the space program since before the Apollo 11 mission, which celebrates its 50th anniversary this week.
The finding is good news for astronauts, but it extends to full-time Earthlings as well. The same fitness program is helping patients with postural orthostatic tachycardia syndrome (POTS), which most often affects women aged 13-50 years, said senior author Benjamin Levine, MD. About 450,000 people in the United States have the condition.
The program held up well against a difficult challenge. “What surprised me the most was how well the astronauts did after spending 6 months in space. I thought there would be frequent episodes of fainting when they returned to Earth, but they didn’t have any. It’s compelling evidence of the effectiveness of the countermeasures – the exercise regimen and fluid replenishment,” Dr. Levine, professor of Exercise Sciences at UT Southwestern Medical Center, said in a press release.
The researchers studied 12 astronauts (8 men, 4 women) who were aboard the International Space Station for roughly 6 months. To monitor for orthostatic hypotension, the researchers used ambulatory beat-to-beat blood pressure monitoring during activities of daily living. The device was worn for 24-hour periods before flight, during flight (on days 15, 30, and 75, as well as 15 days before return to Earth), and immediately after space flight.
The research showed that 24-hour systolic blood pressure decreased during flight, compared with preflight levels (106 vs. 120 mm Hg; P less than .01), but the values returned to normal after return to Earth (122 mm Hg). There was no change in diastolic BP during or after flight, and systolic and diastolic BP variability did not differ across the preflight, in-flight, and postflight periods.
The study was funded by NASA. Dr. Levine has no disclosures.
SOURCE: Qi Fu et al. Circulation. 2019 July 19. doi: 10.1161/CIRCULATIONAHA.119.041050.
REPORTING FROM CIRCULATION
IL-6, CRP are prognostic for checkpoint inhibition in melanoma
CHICAGO – Serum levels of interleukin-6 (IL-6) and C-reactive protein (CRP) appear to be prognostic for survival in melanoma patients treated with immune checkpoint inhibitors, according to post hoc analyses of data from three randomized CheckMate studies.
In 70 treatment-naive patients from the randomized phase 2 CheckMate 064 study who received sequential treatment with the programmed death-1 (PD-1) checkpoint inhibitor nivolumab (NIVO) followed by the cytotoxic T-lymphocyte-associated antigen 4 (CTLA-4) checkpoint inhibitor ipilimumab (IPI), best overall response was modestly associated with lower baseline serum IL-6 (P = .087) and significantly associated with on-treatment IL-6 (P = .006). In 70 patients who received IPI then NIVO, best overall response was associated only with on-treatment IL-6 (P = .043), Jeffrey S. Weber, MD, PhD, reported at the annual meeting of the American Society of Clinical Oncology.
“This stimulated us to look at associations with survival ... and there apparently was a significant association with high IL-6 levels in the serum both pretreatment and on treatment in both arms, whether they got NIVO then IPI followed by NIVO maintenance, or IPI then NIVO, also followed by NIVO maintenance,” he said.
After adjusting for covariates, the hazard ratios for survival for baseline IL-6 below versus above the median were 7.81 and 1.07 in the two groups, respectively. No deaths occurred in the NIVO-IPI group (thus, no HR could be calculated), but the HR for survival based on on-treatment IL-6 below versus above the median in the IPI-NIVO group was 1.92.
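As a rough sketch of what such a covariate-adjusted, median-split survival analysis looks like in code (on synthetic data, with made-up variable names and a single example covariate rather than the CheckMate analysis itself):

```python
# Minimal sketch of a median-split, covariate-adjusted Cox model on
# synthetic data. Variable names, the covariate, and the data-generating
# process are illustrative assumptions.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 200
il6 = rng.lognormal(mean=1.0, sigma=0.8, size=n)      # baseline serum IL-6
high_il6 = (il6 >= np.median(il6)).astype(int)        # dichotomize at the median
ldh = rng.normal(250, 60, size=n)                     # example covariate (LDH)
# In this toy model, higher IL-6 shortens survival time.
months = rng.exponential(scale=24 / (1 + 1.5 * high_il6), size=n)
event = rng.binomial(1, 0.7, size=n)                  # 1 = death observed

df = pd.DataFrame({"months": months, "event": event,
                   "high_il6": high_il6, "ldh": ldh})
cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="event")
print(cph.hazard_ratios_)  # an HR > 1 for high_il6 indicates worse survival
```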
“So initial conclusions: High baseline and on-treatment IL-6 levels in the serum were associated with poor survival,” said Dr. Weber, deputy director of the Perlmutter Cancer Center, New York University Langone Medical Center.
This finding prompted evaluation of additional samples from the randomized CheckMate 066 study, which compared dacarbazine chemotherapy (the standard of care at the time) and NIVO in 400 treatment-naive patients with BRAF wild-type disease.
Again, nondetectable baseline IL-6 (vs. detectable) was associated with better overall survival (OS) in both groups (adjusted HRs, 1.79 and 1.54).
“So this is not a predictive marker, this is a baseline prognostic marker,” he said.
In the international, three-arm, randomized phase 3 CheckMate 067 study, which compared IPI, NIVO, and IPI+NIVO in 945 treatment-naive patients with either BRAF wild-type or BRAF-mutated disease, nondetectable (vs. detectable) baseline IL-6 levels again were associated with better OS in all three arms (adjusted HRs, 3.13 for NIVO, 2.67 for NIVO+IPI, and 4.06 for IPI alone).
A multivariate analysis of data from the CheckMate 066 and 067 studies, controlling for lactate dehydrogenase, performance status, and disease stage, provided additional “impressive evidence” of IL-6 as a potent prognostic factor, Dr. Weber said.
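For readers less familiar with how an adjusted hazard ratio is obtained, the snippet below is a minimal sketch of a covariate-adjusted Cox proportional hazards model of the kind such analyses typically use; it runs on synthetic data with assumed variable names and is not the CheckMate data or the investigators' actual code.

```python
# Illustrative only: synthetic data and assumed variable names, not the
# CheckMate datasets or the investigators' analysis code.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "os_months": rng.exponential(24, n),      # follow-up time to death or censoring
    "death": rng.integers(0, 2, n),           # 1 = death observed, 0 = censored
    "il6_detectable": rng.integers(0, 2, n),  # biomarker of interest
    "ldh_elevated": rng.integers(0, 2, n),    # covariates being controlled for
    "poor_ps": rng.integers(0, 2, n),
    "stage_m1c": rng.integers(0, 2, n),
})

# Fit overall survival against the biomarker plus the covariates; the
# exponentiated coefficient for il6_detectable is the adjusted hazard ratio.
cph = CoxPHFitter()
cph.fit(df, duration_col="os_months", event_col="death")
print(cph.summary.loc["il6_detectable", "exp(coef)"])
```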
“We then looked at CRP. I’ve always been interested in CRP because in a recent publication CRP was found to be associated with outcomes in patients who got PD-1, and the higher the CRP, the worse they did,” he said.
In CheckMate 064 there was a modest association between lower baseline CRP and best overall response in both the NIVO-IPI and IPI-NIVO groups (P = .069 and .009, respectively); on treatment, the association was seen only in the IPI-NIVO group (P = .210 for NIVO-IPI and .015 for IPI-NIVO), in which higher CRP levels were associated with progression or stability.
For survival, however, both baseline and on-treatment CRP levels were associated with OS; baseline serum CRP above the median was associated with shorter OS (HRs, 7.25 for NIVO-IPI and 1.53 for IPI-NIVO), and a similar trend was seen for on-treatment CRP (HRs, 1.60 and 2.0, respectively).
In CheckMate 066, the association between CRP and OS was also apparent, but it was not as impressive for NIVO alone (HR, 0.996) as it was for dacarbazine (HR, 1.90); as in CheckMate 064, higher baseline CRP levels were associated with shorter survival and were prognostic, he said.
In CheckMate 067, similar trends were seen across the treatment arms, and they were similar to those seen for IL-6, with higher baseline CRP levels (at or above median versus below) associated with shorter OS (HRs, 1.46 for NIVO, 1.26 for NIVO+IPI, and 1.48 for IPI alone).
To better understand how CRP might inhibit the effects of PD-1 and how it could have an immune effect – as also indicated by some prior data – Dr. Weber and colleagues conducted additional in vitro studies to examine the impact of exogenous CRP on T-cell function; they found that CRP affected the earliest steps in T-cell signaling and activation, thereby dampening antitumor immune responses.
Acute phase reactants such as CRP and chronic inflammatory proteins including IL-6 (which induces production of CRP from the liver) have been associated with poor prognosis in a variety of cancers, as well as with poor outcomes after anti–PD-1 or programmed death-ligand 1 (PD-L1) therapy in melanoma and other cancers, Dr. Weber said.
“In murine models of melanoma and pancreatic cancer, combined treatment with anti-IL-6 blockade and anti–PD-1/PD-L1 antibodies enhances antitumor immune responses and efficacy,” he explained, noting that the current analyses were undertaken based on those findings and on “a significant body of data” from other groups and from his own lab.
The current findings suggest that IL-6 and CRP may be prognostic for immune checkpoint inhibitor therapies in patients with melanoma, he said, adding that “blockade of IL-6 and CRP synthesis and/or activity in combination with immune checkpoint therapies may enhance responses and survival rates in patients with different cancers, including melanomas.”
To that end, an investigator-sponsored trial looking at IPI-NIVO with the IL-6–blocking antibody tocilizumab has been approved and will start accruing patients in the next few months, he said.
During a discussion of the findings at the meeting, Charles G. Drake, MD, PhD, associate director for clinical research at the Herbert Irving Comprehensive Cancer Center at Columbia University, New York, said that “Dr. Weber and his colleagues should be commended for really trying to show what CRP does to T-cell activation, and in the studies he showed us, it’s clearly negative.”
“But IL-6 is a pleiotropic cytokine. It will be very interesting to see what happens in the prospective clinical trial that he mentioned, in terms of all the other effects on CD-4 cells, neutrophils, and macrophages,” said Dr. Drake, who also is codirector of Columbia’s Cancer Immunotherapy Program. “Nevertheless, I think the data were clear that IL-6 and CRP are negative prognostic biomarkers in melanoma.”
Of note, the development of a biomarker identified in a trial typically takes many steps, but in the case of IL-6 – and perhaps even more so for CRP – the pathway is relatively short, Dr. Drake said.
“That’s because CRP is a validated and [Food and Drug Administration]–approved test; you can order it to assess cardiovascular risk in almost any hospital in the United States, and so the analyte – this part of the qualification – is done,” he explained. “I think if this was validated prospectively we could have CRP as a negative prognostic – not predictive – biomarker in melanoma, actually.”
Dr. Weber and Dr. Drake each reported relationships with numerous companies, including stock and other ownership interests and patents, consulting or advisory roles and/or receipt of honoraria, research funding to their respective institutions, and payment for travel, accommodations, and expenses.
SOURCE: Weber J et al. ASCO 2019, Abstract 100.
REPORTING FROM ASCO 2019
Potential improvements in convenience, tolerability of hematologic treatment
In this edition of “How I will treat my next patient,” I highlight two recent presentations regarding potential improvements in the convenience and tolerability of treatment for two hematologic malignancies: multiple myeloma and chronic lymphocytic leukemia/small lymphocytic lymphoma (CLL/SLL).
SC-Dara in myeloma
At the 2019 annual meeting of the American Society of Clinical Oncology, Maria-Victoria Mateos, MD, PhD, and colleagues reported the results of COLUMBA, a phase 3 evaluation in 522 patients with multiple myeloma who were randomized to subcutaneous daratumumab (SC-Dara) or standard intravenous infusions of daratumumab (IV-Dara). A previous phase 1b study (Blood. 2017;130:838) had suggested comparable efficacy from the more convenient SC regimen. Whereas conventional infusions of IV-Dara (16 mg/kg) take several hours, the SC formulation (1,800-mg flat dose) is delivered in minutes. In COLUMBA, patients were randomized between SC- and IV-Dara given weekly (cycles 1-2), then every 2 weeks (cycles 3-6), then every 4 weeks until disease progression.
Among the IV-Dara patients, the median duration of the first infusion was 421 minutes in cycle 1, 255 minutes in cycle 2, and 205 minutes in subsequent cycles – compatible with standard practice in the United States. As reported, at a median follow-up of 7.46 months, the efficacy (overall response rate, complete response rate, stringent-complete response rate, very good-partial response rate, progression-free survival, and 6-month overall survival) and safety profile were non-inferior for SC-Dara. SC-Dara patients also reported higher satisfaction with therapy.
What this means in practice
It is always a good idea to await publication of the manuscript because there may be study details and statistical nuances that make SC-Dara appear better than it will prove to be. For example, patient characteristics were slightly different between the two arms. Peer review of the final manuscript could be important in placing these results in context.
However, for treatments that demand frequent office visits over many months, reducing treatment burden for patients has value. Based on COLUMBA, it appears likely that SC-Dara will be a major convenience for patients, without obvious drawbacks in efficacy or toxicity. Meanwhile, flat dosing will be a time-saver for physicians, nursing, and pharmacy staff. If the price of the SC formulation is not exorbitant, I would expect a “win-win” that will support converting from IV- to SC-Dara as standard practice.
Acalabrutinib in CLL/SLL
Preclinical studies have shown acalabrutinib (Acala) to be more selective for Bruton’s tyrosine kinase (BTK) than the first-in-class agent ibrutinib, with less off-target kinase inhibition. As reported at the 2019 annual congress of the European Hematology Association by Paolo Ghia, MD, PhD, and colleagues in the phase 3 ASCEND trial, 310 patients with previously treated CLL were randomized between oral Acala twice daily and treatment of physician’s choice (TPC) – either idelalisib plus rituximab (maximum of seven infusions) or bendamustine plus rituximab (maximum of six cycles).
Progression-free survival was the primary endpoint. At a median follow-up of 16.1 months, median progression-free survival had not been reached with Acala, compared with 16.5 months for TPC. A significant benefit of Acala was observed in all prognostic subsets.
Although there was no difference in overall survival at a median follow-up of about 16 months, 85% of Acala patients had a response lasting at least 12 months, compared with 60% of TPC patients. Adverse events of any grade occurred in 94% of patients treated with Acala, with grade 3-4 toxicities in 45% and six treatment-related deaths.
What this means in practice
The vast majority of CLL/SLL patients will relapse after primary therapy and will require further treatment, so the progression-free survival improvement associated with Acala in ASCEND is eye-catching. However, there are important considerations that demand closer scrutiny.
With oral agents administered until progression or unacceptable toxicity, low-grade toxicities can influence patient adherence, quality of life, and potentially the need for dose reduction or treatment interruptions. Regimens of finite duration and easy adherence monitoring may be, on balance, preferred by patients and providers – especially if the oral agent can be given in later-line with comparable overall survival.
With ibrutinib (Blood. 2017;129:2612-5), Paul M. Barr, MD, and colleagues demonstrated that higher dose intensity was associated with improved progression-free survival and that treatment holds were associated with worsened progression-free survival. Acala’s promise of high efficacy and lower off-target toxicity will be solidified if the large (more than 500 patients) phase 3 ACE-CL-006 study (Acala vs. ibrutinib) demonstrates its relative benefit from efficacy, toxicity, and adherence perspectives, in comparison with a standard therapy that similarly demands adherence until disease progression or unacceptable toxicity.
Dr. Lyss has been a community-based medical oncologist and clinical researcher for more than 35 years, practicing in St. Louis. His clinical and research interests are in the prevention, diagnosis, and treatment of breast and lung cancers and in expanding access to clinical trials to medically underserved populations.
Use of antipsychotics to treat delirium in the ICU
Background: Delirium is commonly seen in the ICU and has been associated with increased morbidity and mortality. While haloperidol, as well as atypical antipsychotics, often are used to manage ICU delirium, evidence has been mixed as to whether these medications shorten the duration of either hyperactive or hypoactive delirium.
Study design: Randomized, controlled trial.
Setting: 16 medical centers in the United States.
Synopsis: 566 adult patients with respiratory failure or shock who experienced delirium in medical or surgical ICUs in participating hospitals were randomly assigned to receive IV haloperidol, ziprasidone, or placebo. The median exposure to the trial medication or placebo was 4 days. The median number of days without delirium was not significantly different among the three groups (P = .26): 8.5 days in the placebo group, compared with 7.9 days in the haloperidol group and 8.7 days in the ziprasidone group. The study was powered to detect a 2-day difference.
Only 11% of patients experienced hyperactive delirium, which makes these results less generalizable to patients whose delirium presents as agitation.
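As a rough illustration of what “powered to detect a 2-day difference” implies for sample size, the sketch below runs a conventional two-sample power calculation; the 7-day standard deviation, 80% power, and two-sided alpha of .05 are assumptions chosen for illustration, not parameters taken from the trial’s statistical plan.

```python
# Illustrative only: the standard deviation, power, and alpha below are
# assumptions, not values reported by the trial.
from statsmodels.stats.power import TTestIndPower

detectable_diff = 2.0   # difference in delirium-free days to detect
assumed_sd = 7.0        # assumed standard deviation (hypothetical)
effect_size = detectable_diff / assumed_sd  # Cohen's d

n_per_arm = TTestIndPower().solve_power(
    effect_size=effect_size,
    alpha=0.05,
    power=0.80,
    ratio=1.0,
    alternative="two-sided",
)
print(f"Roughly {n_per_arm:.0f} patients per arm under these assumptions")
```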
Bottom line: The use of antipsychotics in ICU delirium does not affect the duration of delirium in patients with respiratory failure or shock.
Citation: Girard TD et al. Haloperidol and ziprasidone for treatment of delirium in critical illness. N Engl J Med. 2018 Dec 27;379(26):2506-16.
Dr. Defoe is an instructor of medicine at Northwestern University Feinberg School of Medicine and a hospitalist at Northwestern Memorial Hospital, both in Chicago.
When’s the right time to use dementia as a diagnosis?
Is dementia a diagnosis?
I use it myself, although I find that some neurologists consider this blasphemy.
The problem is that there aren’t many terms to cover cognitive disorders beyond mild cognitive impairment (MCI). Phrases like “cortical degeneration” and “frontotemporal disorder” are difficult for families and patients. They aren’t medically trained and want something easy to write down.
“Alzheimer’s,” or – as one patient’s family member says, “the A-word” – is often more accurate, but has stigma attached to it that many don’t want, especially at a first visit. It also immediately conjures up feared images of nursing homes, wheelchairs, and bed-bound people.
So I use a diagnosis of dementia with many families, at least initially. Since, with occasional exceptions, we tend to perform a work-up of all cognitive disorders the same way, I don’t have a problem with using a more generic blanket term. As I sometimes try to simplify things, I’ll say, “It’s like squares and rectangles. Alzheimer’s disease is a dementia, but not all dementias are Alzheimer’s disease.”
I don’t do this to avoid confrontation, be dishonest, mislead patients and families, or avoid telling the truth. I still make it very clear that this is a progressive neurologic illness that will cause worsening cognitive problems over time. But many times families aren’t ready for “the A-word” early on, or there’s a concern the patient will harm themselves while they still have that capacity. Sometimes, it’s better to use a different phrase.
It may all be semantics, but on a personal level, a word can make a huge difference.
So I say dementia. In spite of some editorials I’ve seen saying we should retire the phrase, I argue that in many circumstances it’s still valid and useful.
It may not be a final, or even specific, diagnosis, but it is often the best and most socially acceptable one at the beginning of the doctor-patient-family relationship. Building that rapport is equally critical when you know what’s coming down the road.
Dr. Block has a solo neurology practice in Scottsdale, Ariz.
Surprise medical billing legislation advances to the House floor
Legislation to end surprise medical billing cleared the House Energy and Commerce Committee and is headed to the House floor, but it contains a somewhat controversial arbitration mechanism that allows physicians and hospitals to seek higher payments within 30 days.
The bill, which was tacked on to H.R. 2328, passed the committee by voice vote on July 17. The amendment on arbitration also passed the committee via voice vote.
“Our most important task today is to protect patients from the unreasonable and unacceptable practice of surprise billing,” Rep. Frank Pallone (D-N.J.), chairman of the Energy and Commerce Committee, said just prior to the votes being taken by the committee. “Under the [legislation], providers would no longer be able to balance bill patients for out-of-network emergency services or for scheduled services from providers the patient was not aware would be in their treatment.”
Out-of-network providers would receive a benchmark payment for the services they provided under the legislation.
Rep. Pallone described the legislation as taking patients out of the middle of disputes between payers and providers.
He went on to describe the arbitration amendment, introduced by Rep. Raul Ruiz, MD, (D.-Calif.) and Rep. Larry Bucshon, MD, (R-Ind.), as creating an independent dispute resolution process for physicians and hospitals to file a claim in the event that they don’t think they were adequately paid for their services.
“The amendment would allow providers to have 30 days within which to file an appeal of the benchmark payment with the insurer,” Rep. Pallone said. “The insurer would then have 30 days to adjudicate the appeal, after which the provider could initiate independent dispute resolution.”
The amendment limits appeals to extenuating circumstances so that only complex cases would qualify, and it limits the variables that can be considered during arbitration to the quality of care that was provided to the patient, according to Rep. Pallone.
“Most importantly to me, it bars arbitrators from considering billed charges, which are unilaterally set by providers,” Rep. Pallone said. “Provider charges are often double or triple Medicare rates, and in some cases for some large physician staffing companies, it is around 500% of Medicare rates. If Congress sends this signal to arbiters that provider charges are to be considered, we would be creating a significantly higher standard for payment, decreasing incentives for providers to be in network and putting upward pressure on health care premiums.”
Rep. Michael Burgess, MD, (R-Texas) praised the inclusion of the arbitration amendment and said the surprise billing legislation would not be able to be passed without its inclusion.
Rep. Janice Schakowsky (D-Ill.) offered a dissenting voice to the amendment. “Arbitration, in my view, which is used as the backstop, will not lower the health care costs,” she said. “Arbitration actually comes with additional administrative costs and complexities, which could then be passed on to consumers in the form of higher premiums. Even as a backstop, I think that binding arbitration leaves a public interest, public health decision, up to an unaccountable private decision maker, and I don’t think that is a very progressive way to be dealing with the issue of pricing.”
The American Medical Association praised the inclusion of an appeals process for resolving out-of-network payment disputes. “This addition represents progress,” Patrice Harris, MD, president of the AMA, said in a statement. “While we continue to have concerns with elements of the legislation, we remain committed to working with all committee members to secure further improvements to protect patients, preserve access, and foster fair payments for out-of-network services.”
America’s Health Insurance Plans, which represents health insurers, voiced its opposition to the arbitration provision. “We strongly oppose the inclusion of arbitration because it does not solve the problem of surprise medical bills,” AHIP President and CEO Matt Eyles said in a statement. “It increases the financial burden on everyone with coverage, increasing patient premiums and driving up the cost of health care. The arbitration proposal allows private-equity firms and certain providers to price gouge patients and then shifts the final decision to a ‘third party.’ This process introduces new bureaucracy and red tape into the system, with costs to hardworking taxpayers exceeding $1 billion.”
Benedic Ippolito, research fellow in economic policy studies at the American Enterprise Institute, further criticized the use of arbitration in this process.
“This concept that arbitration is a backstop doesn’t really make a lot of sense,” he said during a July 17 panel discussion hosted by the Bipartisan Policy Center on surprise billing. “The arbiter has to do the same thing any sort of rate setter has to do.” He noted that if they are doing “baseball-style” arbitration, they have to choose between two offers placed in front of the arbiter, based on what the arbiter believes to be closer to whatever the reasonable rate is.
This process will eventually lead to a system that favors either the payer or the provider, or one that simply converges on the benchmark that has been set. “If it is better for one or the other, somebody is going to have an incentive to just trigger this thing the whole time,” Mr. Ippolito said.
Loren Adler, associate director of the USC-Brookings Schaeffer Initiative for Health Policy, said during the panel discussion that “there is no policy reason to have arbitration. There is nothing it adds for policy value.”
Mr. Adler called the arbitration amendment “a provider giveaway bill,” adding that if there has to be an arbitration option, it should have a very high threshold to be triggered.
Legislation to end surprise medical billing cleared the House Energy and Commerce Committee and is headed to the House floor, but it contains a somewhat controversial arbitration mechanism that allows physicians and hospitals to seek higher payments within 30 days.
The bill, which was tacked on to H.R. 2328, passed the committee by voice vote on July 17. The amendment on arbitration also passed the committee via voice vote.
“Our most important task today is to protect patients from the unreasonable and unacceptable practice of surprise billing,” Rep. Frank Pallone (D-N.J.), chairman of the Energy and Commerce Committee, said just prior to the votes being taken by the committee. “Under the [legislation], providers would no longer be able to balance bill patients for out-of-network emergency services or for scheduled services from providers the patient was not aware would be in their treatment.”
Out-of-network providers would receive a benchmark payment for the services they provided under the legislation.
Rep. Pallone described the legislation as taking patients out of the middle of disputes between payers and providers.
He went on to describe the arbitration amendment, introduced by Rep. Raul Ruiz, MD, (D.-Calif.) and Rep. Larry Bucshon, MD, (R-Ind.), as creating an independent dispute resolution process for physicians and hospitals to file a claim in the event that they don’t think they were adequately paid for their services.
“The amendment would allow providers to have 30 days within which to file an appeal of the benchmark payment with the insurer,” Rep. Pallone said. “The insurer would then have 30 days to adjudicate the appeal, after which the provider could initiate independent dispute resolution.”
The amendment limits appeals to extenuating circumstances so that only complex cases would qualify, and it limits the variables that can be considered during arbitration to the quality of care that was provided to the patient, according to Rep. Pallone.
“Most importantly to me, it bars arbitrators from considering billed charges, which are unilaterally set by providers,” Rep. Pallone said. “Provider charges are often double or triple Medicare rates, and in some cases for some large physician staffing companies, it is around 500% of Medicare rates. If Congress sends this signal to arbiters that provider charges are to be considered, we would be creating a significantly higher standard for payment, decreasing incentives for providers to be in network and putting upward pressure on health care premiums.”
Rep. Michael Burgess, MD, (R-Texas) praised the inclusion of the arbitration amendment and said the surprise billing legislation would not be able to be passed without its inclusion.
Rep. Janice Schakowsky (D-Ill.) offered a dissenting voice to the amendment. “Arbitration, in my view, which is used as the backstop, will not lower the health care costs,” she said. “Arbitration actually comes with additional administrative costs and complexities, which could then be passed on to consumers in the form of higher premiums. Even as a backstop, I think that binding arbitration leaves a public interest, public health decision, up to an unaccountable private decision maker, and I don’t think that is a very progressive way to be dealing with the issue of pricing.”
The American Medical Association praised the inclusion of an appeals process for resolving out-of-network payment disputes. “This addition represents progress,” Patrice Harris, MD, president of the AMA, said in a statement. “While we continue to have concerns with elements of the legislation, we remain committed to working with all committee members to secure further improvements to protect patients, preserve access, and foster fair payments for out-of-network services.”
America’s Health Insurance Plans, which represents health insurers, voiced its opposition to the arbitration provision. “We strongly oppose the inclusion of arbitration because it does not solve the problem of surprise medical bills,” AHIP President and CEO Matt Eyles said in a statement. “It increases the financial burden on everyone with coverage, increasing patient premiums and driving up the cost of health care. The arbitration proposal allows private-equity firms and certain providers to price gouge patients and then shifts the final decision to a ‘third party.’ This process introduces new bureaucracy and red tape into the system, with costs to hardworking taxpayers exceeding $1 billion.”
Benedic Ippolito, research fellow in economic policy studies at the American Enterprise Institute, further criticized the use of arbitration in this process.
“This concept that arbitration is a backstop doesn’t really make a lot of sense,” he said during a July 17 panel discussion on surprise billing hosted by the Bipartisan Policy Center. “The arbiter has to do the same thing any sort of rate setter has to do.” He noted that, in “baseball-style” arbitration, the arbiter must choose between the two offers placed before them, based on which the arbiter believes is closer to a reasonable rate.
This process, he said, will eventually lead to a system that favors the payer, favors the provider, or simply settles at whatever benchmark has been set. “If it is better for one or the other, somebody is going to have an incentive to just trigger this thing the whole time,” Mr. Ippolito said.
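To make that decision rule concrete, the following is a minimal sketch, not drawn from the legislation or from Mr. Ippolito's remarks themselves, of how “baseball-style” final-offer arbitration works: the arbiter simply picks whichever of the two submitted offers sits closer to what the arbiter regards as a reasonable rate. The function name, the tie-breaking choice, and the dollar amounts are hypothetical, chosen only for illustration.

def final_offer_arbitration(provider_offer, insurer_offer, reasonable_rate):
    # Pick whichever offer is closer to the arbiter's notion of a reasonable rate.
    # Ties go to the provider here purely as an illustrative assumption.
    if abs(provider_offer - reasonable_rate) <= abs(insurer_offer - reasonable_rate):
        return provider_offer
    return insurer_offer

# Hypothetical numbers: the provider submits $900, the insurer submits $400,
# and the arbiter considers $500 reasonable, so the insurer's $400 offer is chosen.
print(final_offer_arbitration(900, 400, 500))  # -> 400

The sketch also illustrates Mr. Ippolito's incentive point: whichever side the arbiter's effective reference rate tends to favor has a reason to invoke the process repeatedly.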
Loren Adler, associate director of the USC-Brookings Schaeffer Initiative for Health Policy, said during the panel discussion that “there is no policy reason to have arbitration. There is nothing it adds for policy value.”
Mr. Adler called the arbitration amendment “a provider giveaway bill,” adding that if there has to be an arbitration option, it should have a very high threshold to be triggered.