Amphetamine tied to higher risk of new-onset psychosis than methylphenidate
Greater risk applies only to adolescents, young adults with ADHD treated in primary care
Adolescents and young adults with ADHD who start on amphetamine might have twice the risk of developing new-onset psychosis compared with those who start on methylphenidate, a cohort study of more than 220,000 patients suggests.
“The percentage of patients who had a psychotic episode was 0.10% among patients who received methylphenidate and 0.21% among patients who received amphetamine,” reported Lauren V. Moran, MD, of the division of pharmacoepidemiology and pharmacoeconomics at Brigham and Women’s Hospital in Boston, and her colleagues. The study was published in the New England Journal of Medicine.
The researchers analyzed two insurance claims databases to identify patients (aged 13-25 years) with ADHD who were prescribed methylphenidate or amphetamine between January 2004 and September 2015 (110,923 patients in each group; 143,286 total person-years of follow-up). They looked for an ICD-9 or ICD-10 code for new-onset psychosis followed by a prescription for an antipsychotic medication on the same day as, or within 60 days of, the psychosis diagnosis. Hazard ratios were calculated by matching patients taking methylphenidate with patients taking amphetamine across both databases and comparing the incidence rate of psychosis in each group.
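The study's outcome rule, an antipsychotic prescription on the same day as the psychosis diagnosis or within the following 60 days, amounts to a simple date-window check. The function name and dates below are illustrative only, not the study's actual code or data:

```python
from datetime import date, timedelta

def qualifies_as_new_onset(dx_date: date, rx_date: date) -> bool:
    """True if the antipsychotic fill came on the same day as the psychosis
    diagnosis or within the following 60 days (the study's outcome window)."""
    return timedelta(0) <= (rx_date - dx_date) <= timedelta(days=60)

# Illustrative dates, not study data.
print(qualifies_as_new_onset(date(2010, 3, 1), date(2010, 3, 1)))   # same day -> True
print(qualifies_as_new_onset(date(2010, 3, 1), date(2010, 4, 30)))  # day 60 -> True
print(qualifies_as_new_onset(date(2010, 3, 1), date(2010, 5, 1)))   # day 61 -> False
```

A fill dated before the diagnosis also fails the check, which matches the requirement that the prescription follow the diagnosis code.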
The researchers found 343 new cases of psychosis overall, with an incidence of 2.4 cases per 1,000 person-years. There were 106 episodes of psychosis among patients receiving methylphenidate (0.10%) and 237 new cases among patients receiving amphetamine (0.21%). There was an incidence rate of 1.78 cases per 1,000 person-years for methylphenidate patients and 2.83 cases per 1,000 person-years for amphetamine patients. Across both databases, the pooled hazard ratio for amphetamine use and new-onset psychosis, compared with matched patients, was 1.65 (95% confidence interval, 1.31-2.09).
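As a quick arithmetic check, the reported percentages and overall incidence follow from the case counts and denominators above (assuming 110,923 patients in each matched group):

```python
# Cross-check the figures reported from the Moran et al. cohort.
methylphenidate_cases, amphetamine_cases = 106, 237
n_per_group = 110_923          # assumed patients per matched group
total_person_years = 143_286

total_cases = methylphenidate_cases + amphetamine_cases
print(total_cases)                                          # 343 new cases overall
print(round(total_cases / total_person_years * 1000, 1))    # ~2.4 per 1,000 person-years
print(round(methylphenidate_cases / n_per_group * 100, 2))  # ~0.10%
print(round(amphetamine_cases / n_per_group * 100, 2))      # ~0.21%
```

The crude rate ratio (2.83/1.78 ≈ 1.59) is close to, but not identical with, the pooled hazard ratio of 1.65, which was estimated from the matched survival analysis rather than raw rates.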
“The attribution of the higher risk of psychosis to amphetamine use was supported by negative control outcome analyses, which showed that there was no difference in the risk of other psychiatric events between the two stimulant groups,” Dr. Moran and her colleagues reported. “The different biologic mechanisms of methylphenidate and amphetamine activity on neurotransmitters could explain our findings.”
Patients who were prescribed amphetamine by family medicine physicians, internists, and pediatricians were at a higher risk of developing psychosis. That risk, however, did not extend to patients prescribed amphetamine by psychiatrists, the researchers said.
“Psychosis may develop in these patients regardless of stimulant treatment. Alternatively, psychiatrists may prescribe amphetamine more cautiously than other providers and may screen for risk factors for psychosis,” Dr. Moran and her colleagues wrote.
The researchers said the study was limited by unmeasured confounders, such as substance or stimulant misuse; the rate of diversion for amphetamine; and lack of information on race, gender, or socioeconomic status. In addition, they noted, the results could not be generalized to patients with public insurance or no insurance, “which disproportionately applies to patients who are black or Hispanic.”
Dr. Moran reported receiving grants from the National Institute of Mental Health (NIMH). The other authors reported grants, personal fees, and other relationships with several entities, including Boehringer Ingelheim, the Food and Drug Administration, the NIMH, and Takeda.
SOURCE: Moran LV et al. N Engl J Med. 2019. doi: 10.1056/NEJMoa1813751.
The findings by Moran et al. are consistent with randomized controlled trials suggesting a better safety profile for methylphenidate than for amphetamine. But the data cannot determine causality in this patient population, Samuele Cortese, MD, PhD, wrote in a related editorial.
“The findings of the current study should not be considered definitive. Observational studies such as this one can provide information on uncommon adverse events in real-world clinical practice that are challenging to assess in randomized trials performed over brief periods,” he said. “However, even sophisticated approaches, such as the ones used in this study to address possible biases, do not have the advantages of randomized trials in excluding confounding factors.”
It is still unclear why some patients developed psychosis after stimulant exposure, for example, whether affected patients had a “low” or “high” underlying vulnerability to psychosis. The lack of association between psychosis and amphetamine prescribing by psychiatrists also might indicate that those clinicians identified risk factors that predicted the development of psychosis and avoided prescribing amphetamines to those patients, he said.
“Currently, it is not possible to predict which patients will have psychotic episodes after stimulant treatment,” Dr. Cortese concluded. “Perhaps techniques such as machine learning applied to large data sets from randomized trials, combined with observational data, will provide predictors at the individual patient level.”
Dr. Cortese is affiliated with the Center for Innovation in Mental Health at the University of Southampton (England). These comments summarize his accompanying editorial (N Engl J Med. 2019. doi: 10.1056/NEJMe1900887). He reported nonfinancial relationships with the Association for Child and Adolescent Mental Health and the Healthcare Convention & Exhibitors Association.
Cirrhosis model predicts decompensation across diverse populations
A prognostic model that uses serum albumin-bilirubin (ALBI) and Fibrosis-4 (FIB-4) scores can identify patients with cirrhosis who are at high risk of liver decompensation, according to investigators.
During validation testing, the scoring system performed well among European and Middle Eastern patients, which supports prognostic value across diverse populations, reported lead author Neil Guha, MRCP, PhD, of the University of Nottingham (U.K.) and his colleagues, who suggested that the scoring system could fill an important practice gap.
“Identification of patients [with chronic liver disease] that need intensive monitoring and timely intervention is challenging,” the investigators wrote in Clinical Gastroenterology and Hepatology. “Robust prognostic tools using simple laboratory variables, with potential for implementation in nonspecialist settings and across different health care systems, have significant appeal.”
Although existing scoring systems have been used for decades, they have clear limitations, the investigators noted, referring to predictive ability that may be too little, too late.
“[T]hese scoring systems provide value after synthetic liver function has become significantly deranged and provide only short-term prognostic value,” the investigators wrote. “Presently, there are no scores, performed in routine clinical practice, that provide robust prognostic stratification within early, compensated cirrhosis over the medium/long term.”
To fill this need, the investigators developed and validated a prognostic model that combines the ALBI and FIB-4 scores, because together these tests capture both liver function and fibrosis.

The development phase involved 145 patients with compensated cirrhosis from Nottingham. Almost half of the cohort had alcohol-related liver disease (44.8%), while about one in three patients had nonalcoholic fatty liver disease (29.7%). After investigators collected baseline clinical features and scores, patients were followed for a median of 4.59 years, during which time decompensation events (ascites, variceal bleeding, and encephalopathy) were recorded. Decompensation occurred in about one in five patients (19.3%) in the U.K. group, with ascites being the most common event (71.4%).

Using these findings, the investigators created the prognostic model, which classified patients as being at either low or high risk of decompensation. In the development cohort, patients with high-risk scores had a hazard ratio for decompensation of 7.10.
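Both component scores are computable from routine laboratory values. The sketch below uses the standard published ALBI and FIB-4 formulas; these are the widely used coefficients from the original score publications, not the weighting of this study's combined ALBI-FIB-4 model, which is not given here:

```python
import math

def albi(bilirubin_umol_l: float, albumin_g_l: float) -> float:
    """Albumin-bilirubin (ALBI) score, per the standard published formula:
    0.66 * log10(bilirubin, umol/L) - 0.085 * albumin (g/L)."""
    return 0.66 * math.log10(bilirubin_umol_l) - 0.085 * albumin_g_l

def fib4(age_years: float, ast_iu_l: float, alt_iu_l: float,
         platelets_10e9_l: float) -> float:
    """Fibrosis-4 (FIB-4) index, per the standard published formula:
    (age * AST) / (platelets [10^9/L] * sqrt(ALT))."""
    return (age_years * ast_iu_l) / (platelets_10e9_l * math.sqrt(alt_iu_l))

# Illustrative values, not patient data from the study.
print(round(albi(bilirubin_umol_l=17.0, albumin_g_l=40.0), 2))            # -2.59
print(round(fib4(age_years=55, ast_iu_l=40, alt_iu_l=35,
                 platelets_10e9_l=150), 2))                               # 2.48
```

Because both formulas need only age, bilirubin, albumin, AST, ALT, and platelet count, they lend themselves to the nonspecialist and low-resource settings the authors highlight.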
In the second part of the study, the investigators validated their model with two clinically distinct groups in Dublin, Ireland (prospective; n = 141), and Menoufia, Egypt (retrospective; n = 93).
In the Dublin cohort, the most common etiologies were alcohol (39.7%) and hepatitis C (29.8%). Over a maximum observational period of 6.4 years, the decompensation rate was lower than in the development group, at 12.1%. Types of decompensation also differed, with variceal bleeding being the most common (47.1%). Patients with high-risk scores had a higher hazard ratio for decompensation than in the U.K. cohort, at 12.54.
In the Egypt group, the most common causes of liver disease were nonalcoholic fatty liver disease (47.3%) and hepatitis C (34.4%). The maximum follow-up period was 10.6 years, during which time 38.7% of patients experienced decompensation, with ascites being the most common form (57.1%). The HR of 5.10 was the lowest of all cohorts.
The investigators noted that the cohorts represented unique patient populations with different etiological patterns. “This provides reassurance that the model has generalizability for stratifying liver disease at an international level,” the investigators wrote, suggesting that ALBI and FIB-4 can be used in low-resource and community settings.
“A frequently leveled criticism of algorithms such as ALBI-FIB-4 is that they are too complicated to be applied routinely in the clinical setting,” the investigators wrote. “To overcome this problem we developed a simple online calculator which can be accessed using the following link: https://jscalc.io/calc/gdEJj89Wz5PirkSL.”
“We have shown that routinely available laboratory variables, combined in a novel algorithm, ALBI-FIB-4, can stratify patients with cirrhosis for future risk of liver decompensation,” the investigators concluded. “The ability to do this in the context of early, compensated cirrhosis with preserved liver synthetic function whilst also predicting long-term clinical outcomes has clinical utility for international health care systems.”
The study was funded by the National Institute for Health Research (NIHR) Nottingham Digestive Diseases Biomedical Research Centre, based at Nottingham University Hospitals NHS Trust and the University of Nottingham. The investigators declared no conflicts of interest.
SOURCE: Guha N et al. Clin Gastroenterol Hepatol. 2019 Feb 1. doi: 10.1016/j.cgh.2019.01.042.
FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY
FDA committee advises status quo for blood supply Zika testing
Most members of a Food and Drug Administration advisory committee agreed that available data support maintaining current testing protocols for Zika virus in the blood donor pool. However, committee discussion entertained the idea of revisiting testing strategies after another year or two of Zika virus epidemiological data become available.
In its last guidance regarding Zika virus testing, issued in July 2018, the FDA recommended that either minipool nucleic acid testing (MP NAT) or individual donor (ID) NAT be used to screen for Zika virus. Current guidance still requires conversion to all-ID NAT “when certain threshold conditions are met that indicate an increased risk of suspected mosquito-borne transmission in a defined geographic collection area.”
In the first of three separate votes, 11 of 15 voting members of the FDA’s Blood Products Advisory Committee (BPAC) answered in the affirmative to the question of whether available data support continuing the status quo for Zika testing. Committee members then were asked to weigh whether current data support scaling back to a regional testing strategy targeting at-risk areas. Here, six committee members answered in the affirmative, and nine in the negative.
Just one committee member, F. Blaine Hollinger, MD, voted in favor of the third option, elimination of all Zika virus testing without reintroducing donor screening for risk factors in risk-free areas pending another outbreak in the United States. Dr. Hollinger is a professor of virology and microbiology at Baylor College of Medicine, Houston.
The committee as a whole wasn’t swayed by a line of questioning put forward by chairman Richard Kaufman, MD. “I will be the devil’s advocate a little bit: We learned that there have been zero confirmed positives from blood donors for the past year. Would anyone be comfortable with just stopping screening of donors?” asked Dr. Kaufman, medical director of the adult transfusion service at Brigham and Women’s Hospital, Boston.
A wide-ranging morning of presentations put data regarding historical trends and current global Zika hot spots in front of the committee. Current upticks in infection rates in northwest Mexico and in some states in India were areas of concern, given North American travel patterns, noted speaker Marc Fisher, MD, of the Centers for Disease Control and Prevention’s Arboviral Disease Branch (Fort Collins, Colo.). “We’re going to see sporadic outbreaks; it’s hard to predict the future,” he said. “The new outbreak in India raises concerns.”
Briefing information from the FDA explained that Zika virus local transmission peaked in the United States in late summer of 2016. More than 5,000 cases were reported in the United States and over 36,000 in Puerto Rico. This has plummeted to 220 in 2018, with about two-thirds of these cases occurring in the territories, mostly (97%) from Puerto Rico across all 3 years.
Zika viremic blood donors dropped by an order of magnitude yearly, totaling 363 in 2016, 38 in 2017, and just 3 in 2018. Of the 363 detected in 2016, 96% came from Puerto Rico or Florida, noted Dr. Fisher.
The number of suspected and confirmed cases in the Americas overall has also dropped from over 650,000 in 2016 to under 30,000 in 2018, with most cases in 2018 being suspected rather than laboratory confirmed. In contrast to testing conducted in North America, few cases in much of Central and South America were laboratory confirmed.
Asymptomatic infections have occurred in blood donors, said the FDA, with 1.8% of blood donations in Puerto Rico testing positive for Zika virus during the peak of the outbreak. Transmission by transfusion is thought to have occurred in Brazil.
Although Zika virus infections have plummeted in the United States and worldwide, prevalence and rates of local transmission are unpredictable, said the FDA, which pointed to sporadic increases in autochthonous transmission of viruses such as dengue and chikungunya that are carried by the same mosquito vector as Zika.
Some of the committee’s discussion centered around finding a way to carve out protection for those most harmed by Zika virus – pregnant women and their fetuses. Martin Schreiber, MD, professor of surgery at Oregon Health & Science University, Portland, proposed a point-of-care testing strategy in which only blood destined for pregnant women would be tested for Zika virus. Dr. Schreiber, a trauma surgeon, put forward the rationale that Zika virus causes harm almost exclusively to fetuses, except for rare cases of Guillain-Barré syndrome.
In response, Dr. Kaufman pointed out that with rare exceptions for some bacterial testing, all testing is done from samples taken at the point of donation. The supply chain for donor blood is not set up to accommodate point-of-care testing, he said.
Answering questions about another targeted strategy – maintaining a separate, Zika-tested supply of blood for pregnant women – Susan Stramer, PhD, vice president of scientific affairs for the American Red Cross, said, “Most hospitals do not want, and are very adamant against, carrying a dual inventory.”
Ultimately, the committee’s discussion swung toward the realization that it may be too soon after the recent spike in U.S. Zika cases to plot the best course for ongoing testing strategies. “We are at the tail end of a waning epidemic. ... I think it would probably be a pretty easy question for the committee and for the agency if we actually had some way of having a crystal ball and knowing that the current trend was likely to continue,” said Roger Lewis, MD, PhD, professor at the University of California, Los Angeles, and chair of the department of emergency medicine at Harbor-UCLA Medical Center.
“I think that is not the question,” he went on. “I think the question is, What is the optimal strategy if we have no idea if that tail is going to continue in this current trend. ... And that maybe the committee ought to be thinking about what is the right strategy for the next 2 years – with an underlying assumption that this is a question that can be brought back as we learn more about how this disease behaves.”
The FDA usually follows the recommendations of its advisory committees.
MedPAC puts Part B reference pricing, binding arbitration on the table
Much of the presentation, offered during the commission’s March meeting, consisted of general ideas, with more work to come in fleshing out the details. The commission set an ambitious goal of having something ready for its June 2019 report to Congress.
The policy recommendations for reference pricing, to be used when multiple similar drugs are available, and binding arbitration, to be used on new entrants to the market with limited or no competition, are being designed to work with the previously recommended drug value program, but could be implemented on their own.
In general, the reference pricing policy would set a maximum payment rate for a group of drugs with similar health effects based on the minimum, median, or other point along the range of prices for all drugs in that group. Providers would be incentivized to choose a lower-cost alternative when clinically appropriate.
Beneficiaries who still want access to a higher-cost drug would be on the hook for the difference through cost-sharing mechanisms.
MedPAC staff presented two options for setting the reference price. One would be to establish the price based on internal Medicare data. The other would take international pricing into consideration.
Binding arbitration, which is already a component of the drug value program, would be expanded. In the program described by staff, Medicare and the manufacturer would each come to the table with a price and the arbitrator (either an individual or a panel) would set one price.
Potential cost savings from one or both programs were not addressed.
“It seems like an important thing for us to understand in order to know the potential impact ... through these two levers that work on different parts of the spend problem,” said Commissioner Dana Safran, head of measurement for the health care venture formed by Amazon, Berkshire Hathaway, and JPMorgan Chase.
Staff said it would work on making that determination.
Commissioners raised additional questions on operational details.
Marjorie Ginsburg, founding executive director of the Center for Healthcare Decisions Inc. in Sacramento, Calif., questioned what would happen if a manufacturer declined to participate in the arbitration process and whether that would mean Medicare would not cover a drug in that circumstance.
Jay Crosson, MD, noted that “Congress would have to … figure out how to deal with that circumstance. ... We would not want to end up with a system that would deny coverage” of effective medications for Medicare beneficiaries.
Another area affecting both issues was the potential for cross subsidization of drugs.
Jonathan Perlin, MD, president of clinical services and chief medical officer of HCA Healthcare of Nashville, Tenn., questioned whether this could open a door for a provider buying at a cheaper government price and using the drugs across patients not from Medicare or whether it could lead to higher prices being charged to commercial payers.
MedPAC staff member Kim Neuman said that “there would need to be some back end reconciliations that would happen to ensure that the stock that was then administered to Medicare patients was provided at a price that was no higher than that ceiling. ... We haven’t scoped out implications for other payers.”
Commissioner Kathy Buto, independent consultant and former vice president of global health policy at Johnson & Johnson, inquired about whether a drug would be made available upon launch while reference pricing or arbitration processes were in progress.
Commissioners also asked how the reference pricing approach would be operationalized in conversations between doctor and patient.
Ms. Buto also cautioned that a reference pricing scheme could alter the dynamics of price competition, leaving companies competing against the reference price rather than doing what they can to lower prices below it.
REPORTING FROM A MEDPAC MEETING
Home oxygen therapy for children: New guidelines combine limited evidence, expert experience
Based on the very limited evidence available, an expert panel convened by the American Thoracic Society has issued a clinical practice guideline on home oxygen therapy for children. The guideline authors not only addressed specific indications for chronic lung and pulmonary vascular diseases, but also defined hypoxemia in children – noting that Medicare and Medicaid coverage determinations for home oxygen therapy in children are based on decades-old studies that lacked pediatric patients – and offered expert advice on how to wean and discontinue oxygen, when warranted.
The disease-specific recommendations on whether or not to prescribe home oxygen therapy are characterized either as strong, meaning that it’s the right course of action for at least 95% of patients, or conditional, meaning it might not be right for a “sizable minority” of patients, the authors explained in the guideline.
Home oxygen therapy gets a strong recommendation, for example, in patients with cystic fibrosis complicated by severe chronic hypoxemia, but gets a conditional recommendation for sickle cell disease with severe chronic hypoxemia, according to the guideline, published in the American Journal of Respiratory and Critical Care Medicine.
Regardless of whether they were strong or conditional, the recommendations were largely based on “very low-quality evidence,” according to the ad hoc subcommittee of the ATS Assembly on Pediatrics, cochaired by Don Hayes Jr., MD, of Nationwide Children’s Hospital, Columbus, Ohio, and Robin R. Deterding, MD, of Children’s Hospital Colorado, Denver.
“Despite widespread use of home oxygen therapy for various lung and pulmonary vascular diseases, there is a striking paucity of data regarding its implementation, efficacy, monitoring, and discontinuation,” Dr. Hayes, Dr. Deterding, and 20 additional committee members wrote in their report.
Accordingly, the panel sought to add expert opinion and experience to the limited evidence, in the hope that it would aid clinicians in the management of complex pediatric patients, they said.
One new tool they provide, toward that end, is a definition of hypoxemia in children based on oxygen saturation as quantified by pulse oximetry (SpO2).
Based on a review of 31 selected studies measuring oxygenation in healthy children, the expert panel defined hypoxemia (at or near sea level) as SpO2 of 90% or lower for 5% of the recording time in children under 1 year old, and an SpO2 of 93% or lower in older children; or alternately, as three independent measurements of SpO2 less than or equal to 90% in the younger children and 93% in the older children.
By contrast, an SpO2 of less than 88% is one of the indications for funding home oxygen therapy as determined by the Centers for Medicare & Medicaid Services for both pediatric and adult patients, according to the committee.
The CMS indications derived from “seminal studies” showing that continuous oxygen therapy reduced mortality in adults with chronic obstructive pulmonary disease, they said in the guideline document.
“Despite the lack of pediatric patients in these historic studies performed over 35 years ago, the CMS coverage determination for [home oxygen therapy] is the same for pediatric patients of all ages compared with adult patients,” they wrote in the report.
The committee unanimously agreed that 2 weeks of low SpO2 was “sufficient evidence” to indicate chronic hypoxemia, their report says.
Dr. Hayes reported no relationships with relevant commercial interests, while Dr. Deterding provided disclosures related to Boehringer Ingelheim, Novartis, and Elsevier Publishing, among others. Fellow committee members provided disclosures related to Shire Pharmaceuticals, United Therapeutics, and others as listed in the clinical practice guideline document.
SOURCE: Hayes D Jr. et al. J Respir Crit Care Med. 2019 Feb 1;199(3):e5-e23. doi: 10.1164/rccm.201812-2276ST.
It is unfortunate that over the course of a decade, the evidence base supporting home oxygen therapy in children has not substantially changed, according to Ian Balfour-Lynn, MD, a member of the American Thoracic Society (ATS) committee that developed the clinical practice guideline.
The ATS clinical practice guideline on home oxygen therapy for children echoes conclusions reached in a 2009 guideline published by the British Thoracic Society (BTS), he wrote in The Lancet Respiratory Medicine.
Dr. Balfour-Lynn, who chaired the BTS guideline committee, said new research is sorely needed, particularly in the prevention of preterm births, which he said constitute the commonest cause of home oxygen need among children, according to the Lancet report.
In addition, a large prospective trial is needed to evaluate strategies for weaning or discontinuing oxygen, he said, noting that the ATS recommendations on weaning were almost entirely based on the expert panel’s combined clinical experience.
Dr. Balfour-Lynn is a consultant in pediatric respiratory medicine at Royal Brompton Hospital, London. This summary of his opinions is based on his comments in a report that appeared March 8 in The Lancet Respiratory Medicine. He reported no relationships with commercial interests relevant to his work on the ATS clinical practice guideline.
Based on the very limited evidence available, an expert panel convened by the American Thoracic Society has issued a clinical practice guideline on home oxygen therapy for children.
The guideline authors not only addressed specific indications for chronic lung and pulmonary vascular diseases, but also defined hypoxemia in children – noting that Medicare and Medicaid coverage determinations for home oxygen therapy in children are based on decades-old studies that lacked pediatric patients – and offered expert advice on how to wean and discontinue oxygen, when warranted.
The disease-specific recommendations on whether or not to prescribe home oxygen therapy are characterized either as strong, meaning that it’s the right course of action for at least 95% of patients; or conditional, meaning it might not be right for a “sizable minority” of patients, authors explained in the guideline.
Home oxygen therapy gets a strong recommendation, for example, in patients with cystic fibrosis complicated by severe chronic hypoxemia, but gets a conditional recommendation for sickle cell disease with severe chronic hypoxemia, according to the guideline, published in the American Journal of Respiratory and Critical Care Medicine.
Regardless of whether they are strong or conditional, the recommendations were largely based on “very low-quality evidence,” according to the ad hoc subcommittee of the ATS Assembly on Pediatrics, cochaired by Don Hayes Jr., MD, of Nationwide Children’s Hospital, Columbus, Ohio, and Robin R. Deterding, MD, of Children’s Hospital Colorado, Denver.
“Despite widespread use of home oxygen therapy for various lung and pulmonary vascular diseases, there is a striking paucity of data regarding its implementation, efficacy, monitoring, and discontinuation,” Dr. Hayes, Dr. Deterding, and 20 additional committee members wrote in their report.
Accordingly, the panel sought to add expert opinion and experience to the limited evidence, in the hope that it would aid clinicians in the management of complex pediatric patients, they said.
One new tool they provide, toward that end, is a definition of hypoxemia in children based on oxygen saturation as quantified by pulse oximetry (SpO2).
Based on a review of 31 selected studies measuring oxygenation in healthy children, the expert panel defined hypoxemia (at or near sea level) as an SpO2 of 90% or lower for at least 5% of the recording time in children under 1 year old, or of 93% or lower in older children; or, alternatively, as three independent measurements of SpO2 of 90% or lower in the younger children and 93% or lower in the older children.
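The two-pronged definition above maps to a simple check. The sketch below is illustrative only; the function name and input format are our own, not taken from the guideline.

```python
def is_hypoxemic(age_years, spo2_recording=None, spot_checks=None):
    """Apply the ATS pediatric hypoxemia definition (at or near sea level).

    age_years      -- patient age in years
    spo2_recording -- list of SpO2 samples (%) from a continuous recording
    spot_checks    -- list of independent SpO2 measurements (%)
    """
    # Age-specific threshold: 90% under 1 year of age, 93% in older children
    threshold = 90 if age_years < 1 else 93

    # Criterion 1: SpO2 at or below threshold for >= 5% of the recording time
    if spo2_recording:
        low_fraction = sum(s <= threshold for s in spo2_recording) / len(spo2_recording)
        if low_fraction >= 0.05:
            return True

    # Criterion 2: three independent measurements at or below the threshold
    if spot_checks and sum(s <= threshold for s in spot_checks) >= 3:
        return True

    return False
```

For example, an infant whose recording spends 5% of the time at or below 90% saturation meets criterion 1, while an older child with three spot checks at or below 93% meets criterion 2.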
By contrast, an SpO2 of less than 88% is one of the indications for funding home oxygen therapy as determined by the Centers for Medicare & Medicaid Services for both pediatric and adult patients, according to the committee.
The CMS indications derived from “seminal studies” showing that continuous oxygen therapy reduced mortality in adults with chronic obstructive pulmonary disease, they said in the guideline document.
“Despite the lack of pediatric patients in these historic studies performed over 35 years ago, the CMS coverage determination for [home oxygen therapy] is the same for pediatric patients of all ages compared with adult patients,” they wrote in the report.
The committee unanimously agreed that 2 weeks of low SpO2 was “sufficient evidence” to indicate chronic hypoxemia, their report says.
Dr. Hayes reported no relationships with relevant commercial interests, while Dr. Deterding provided disclosures related to Boehringer Ingelheim, Novartis, and Elsevier Publishing, among others. Fellow committee members provided disclosures related to Shire Pharmaceuticals, United Therapeutics, and others as listed in the clinical practice guideline document.
SOURCE: Hayes D Jr. et al. J Respir Crit Care Med. 2019 Feb 1;199(3):e5-e23. doi: 10.1164/rccm.201812-2276ST.
FROM THE AMERICAN JOURNAL OF RESPIRATORY AND CRITICAL CARE MEDICINE
Telerehabilitation is noninferior to in-clinic rehabilitation for poststroke arm function
HONOLULU – Telerehabilitation is noninferior to in-clinic rehabilitation for improving arm motor function after stroke, according to research presented at the International Stroke Conference sponsored by the American Heart Association. Telerehabilitation also provides patient education as effectively as in-clinic rehabilitation, said Steven C. Cramer, MD, professor of neurology at the University of California, Irvine.
Stroke is a leading cause of disability, and more than 80% of patients with stroke have motor deficits when they present to the ED. Research indicates that high doses of rehabilitation therapy improve brain and motor function. However, many patients get low amounts of rehabilitation because of obstacles such as travel difficulties and shortages of therapy providers. “We reasoned that telerehabilitation is ideally suited to efficiently provide a large dose of useful, high-quality rehab therapy after stroke,” Dr. Cramer said.
Participants received supervised and unsupervised therapy
He and his colleagues enrolled patients who had experienced a stroke during the previous 4-36 weeks and who had arm motor deficits into their study. Eligible participants were adults, had experienced ischemic stroke or intracerebral hemorrhage, and had an arm Fugl-Meyer score between 22 and 56 out of 66.
Dr. Cramer’s group randomized 124 participants at 11 National Institutes of Health StrokeNet sites to 6 weeks of intensive arm rehabilitation therapy, plus stroke education, delivered in clinic or at home by a telehealth system. For both groups, treatment included 36 sessions that each lasted for 70 minutes. Half of the sessions were supervised and half were not. All sessions included at least 15 minutes of arm exercises and at least 15 minutes of functional training. Unsupervised sessions also included at least 5 minutes of stroke education on topics such as prevention, risk factors, recognition, and treatment. Participants in the in-clinic group worked with therapists in the clinic on supervised days and at home with a personalized booklet on unsupervised days. Participants in the telerehabilitation group played specially designed and individually tailored computer games at home on all days and had video conferences with therapists on supervised days. The treatment groups included approximately equal numbers of patients; treatment duration, intensity, and frequency were matched between groups.
The investigators hypothesized that telerehabilitation was not inferior to in-clinic rehabilitation. The study’s primary end point was change in Fugl-Meyer score from baseline to 30 days after the end of therapy. Secondary end points included the Box and Blocks score (a measure of arm function), the Stroke Impact Scale–hand score, and gains in stroke knowledge. The researchers defined the noninferiority margin as 30% of the gains of the in-clinic group. End points were evaluated by blinded assessors.
Patients had clinically meaningful gains
Participants’ average age was 61 years; the mean baseline arm Fugl-Meyer score was 42. Stroke onset had occurred at a mean of 4.5 months previously, and most strokes were ischemic. In all, 10 participants dropped out of the study. The rate of compliance was 98.3% in the telerehabilitation group and 93.0% in the in-clinic group.
The change in Fugl-Meyer score from baseline to 30 days post therapy was 8.36 points in the in-clinic group and 7.86 points in the telerehabilitation group. The changes in both groups exceeded the minimal clinically important difference. The difference between groups, adjusted for covariates, was approximately 0. In addition, the 95% confidence interval for the change in score in the telerehabilitation group was within the noninferiority margin. “We can say that telerehabilitation is not inferior” to in-clinic therapy, said Dr. Cramer.
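The noninferiority criterion can be sketched as follows. The Fugl-Meyer gain of 8.36 points comes from the study; the confidence-interval bounds used in the examples are illustrative placeholders, since the article reports only that the 95% CI fell within the margin, not its exact limits.

```python
def noninferiority_margin(in_clinic_gain, margin_fraction=0.30):
    """Margin defined in the study as 30% of the in-clinic group's gain."""
    return margin_fraction * in_clinic_gain

def is_noninferior(in_clinic_gain, diff_ci_lower):
    """diff_ci_lower: lower 95% CI bound for (telerehab gain - in-clinic gain).

    Telerehabilitation is declared noninferior if that lower bound does not
    cross below the negative of the margin.
    """
    return diff_ci_lower > -noninferiority_margin(in_clinic_gain)

# With an in-clinic gain of 8.36 Fugl-Meyer points, the margin is about
# 2.5 points; a between-group difference whose CI stays above -2.5
# satisfies the criterion.
margin = noninferiority_margin(8.36)
```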
Telerehabilitation also was noninferior to in-clinic rehabilitation on the Box and Blocks score, and gains in stroke knowledge were significant and comparable in both groups. “Interestingly, the arm motor gains did not differ whether the subjects had aphasia or not,” said Dr. Cramer.
The investigators measured activity-inherent motivation (that is, how much a patient likes rehabilitation) using the Physical Activity Enjoyment Scale. Scores were higher in the in-clinic group, compared with the telerehabilitation group. “People like going to sit with a live human, and they like the longer time with the live human. This is something for us to study further and understand,” said Dr. Cramer.
Dr. Cramer and colleagues observed six serious adverse events in the in-clinic group and one in the telerehabilitation group, such as pneumonia or palpitations, all of which were deemed unrelated to therapy. Adverse events related to therapy (for example, shoulder pain and fatigue) were equally distributed between the two groups.
“What we were trying to do with home-based telehealth does not compete with or replace traditional rehab medicine. It is expanding tools for occupational and physical therapists, for nurses and physicians,” said Dr. Cramer.
Future studies could examine the efficacy of telerehabilitation in the treatment of language deficits, leg weakness, micturition, and dysphagia. “We might also study telehealth such as this to see how we can improve access and lower the cost of poststroke rehab care,” he concluded.
The study was funded by the Eunice Kennedy Shriver National Institute Of Child Health & Human Development and several grants from the National Institute of Neurological Disorders and Stroke. Dr. Cramer has an ownership interest in TRCare, a company that plans to market a telerehabilitation system and was not involved in the study. In addition, he is a consultant or advisor for MicroTransponder, Dart Neuroscience, Neurolutions, Regenera, Abbvie, SanBio, and TRCare.
SOURCE: Cramer SC et al. ISC 2019, Abstract LB23.
REPORTING FROM ISC 2019
Quick Byte: Trauma care
Innovating quickly
The U.S. military has completely transformed trauma care over the past 17 years, and that success offers lessons for civilian medicine.
In the civilian world, it takes an average of 17 years for a new discovery to change medical practice, but the military has developed or significantly expanded more than 27 major innovations, such as redesigned tourniquets and new transport procedures, in about a decade. As a result, the death rate from battlefield wounds has decreased by half.
Reference
Kellermann A et al. How the US military reinvented trauma care and what this means for US medicine. Health Aff. 2018 Jul 3. doi: 10.1377/hblog20180628.431867.
Postcesarean pain relief better on nonopioid regimen
LAS VEGAS – Women who had cesarean delivery and received a nonopioid pain control regimen at hospital discharge had lower pain scores by 4 weeks post partum than those who also received opioids, according to study results shared during a fellows session at the meeting presented by the Society for Maternal-Fetal Medicine.
At 2-4 weeks post partum, the mean pain score on a visual analog scale (VAS) was 12/100 mm for women on the nonopioid regimen, compared with 16/100 mm for women who received opioids, using an intention-to-treat analysis. The median pain score for those in the nonopioid arm was 0, compared with 6 for those in the opioid arm.
The findings surprised Jenifer Dinis, MD, a maternal-fetal medicine fellow at the University of Texas, Houston, and her collaborators, because they had hypothesized merely that the two groups would have similar pain scores 2-4 weeks after delivery.
Although women in the nonopioid arm were able to obtain a rescue hydrocodone prescription through the study, and some women obtained opioids from their private physician, they still used less than half as much opioid medication as women in the opioid arm (21 versus 43 morphine milligram equivalents, P less than .01).
However, women in the nonopioid arm did not use significantly more ibuprofen or acetaminophen, and there was no difference in patient satisfaction with the outpatient postpartum analgesic regimen between study arms. Somnolence was more common in the opioid arm (P = .03); no other medication side effects were significantly more common in one group than the other.
Overall, 22 of 76 (29%) women in the nonopioid arm took any opioids after discharge, compared with 59/81 (73%) in the opioid arm (P less than .01).
After cesarean delivery, the 170 participating women had an inpatient pain control regimen determined by their primary ob.gyn., Dr. Dinis said in her presentation. Patients were randomized 1:1 to their outpatient analgesia regimens on postoperative day 2 or 3, with appropriate prescriptions placed in patient charts. Participants received either a nonopioid regimen with prescriptions for 60 ibuprofen tablets (600 mg) and 60 acetaminophen tablets (325 mg), or an opioid regimen that included ibuprofen plus hydrocodone/acetaminophen 5/325 mg, 1-2 tablets every 4 hours.
Pain scores were assessed between 2 and 4 weeks after delivery, either at an in-person appointment or by means of a phone call and a provided email link.
The single-site study was designed as a parallel-group equivalence trial, intended to show that neither pain control regimen was inferior to the other. Women between the ages of 18 and 50 years were included if they had a cesarean delivery; both English- and Spanish-speaking women were enrolled.
Allowing for attrition and crossover, Dr. Dinis and her colleagues enrolled 85 patients per study arm to achieve sufficient statistical power to detect the difference needed. The investigators planned both an intention-to-treat and a per-protocol analysis in their registered clinical trial.
Postpartum pain assessments were not obtained for 12 patients in the nonopioid group and 9 in the opioid group, leaving 73 and 76 patients, respectively, for the per-protocol analysis.
At baseline, patients were a mean 28 years old, and a little over a quarter (28%) were nulliparous. Participants were overall about half African American and 34%-40% Hispanic. Over half (62%-72%) received Medicaid; most women (62%-75%) had body mass indices of 30 kg/m² or more.
The mean gestational age at delivery was a little more than 36 weeks, with about half of deliveries being the participant’s first cesarean delivery. About 90% of women had a Pfannenstiel skin incision, with a low transverse uterine incision.
Patients were aware of their allocation, and the study results aren’t applicable to women with opioid or benzodiazepine use disorder, she noted. However, the study was pragmatic, included all types of cesarean deliveries, and was adequately powered to detect “the smallest clinically significant difference.”
Dr. Dinis reported no outside sources of funding and no conflicts of interest.
SOURCE: Dinis J et al. Am J Obstet Gynecol. 2019 Jan;220(1):S34, Abstract 42.
REPORTING FROM THE PREGNANCY MEETING
MCL survival rates improve with novel agents
Survival outcomes for patients with mantle cell lymphoma (MCL) substantially improved from 1995 to 2013, particularly for those with advanced-stage tumors, according to a retrospective analysis.
The median overall survival for the study period was 52 months and 57 months in two cancer databases.
“Over the past 20 years, many novel agents and treatment regimens have been developed to treat MCL,” Shuangshuang Fu, PhD, of the University of Texas, Houston, and her colleagues wrote in Cancer Epidemiology.
The researchers retrospectively studied population-based data from two separate databases: the national Surveillance, Epidemiology and End Results (SEER) database and the Texas Cancer Registry (TCR). They identified all adult patients who received a new diagnosis of MCL between Jan. 1, 1995, and Dec. 31, 2013.
A total of 9,610 patients were included in the study: 7,555 patients from SEER and 2,055 from the TCR. The team collected data related to MCL diagnosis, mortality, and other variables, including age at diagnosis, marital status, sex, and tumor stage.
In total, 76.2% and 61.6% of patients from the SEER and TCR databases, respectively, had an advanced-stage tumor.
Dr. Fu and her colleagues found that all-cause mortality rates in both groups were significantly reduced from 1995 to 2013 (SEER, P less than .001; TCR, P = .03).
In addition, the team reported that the median overall survival time for all patients in the SEER database was 52 months, and it was 57 months for the TCR database.
“MCL patients with [an] advanced stage tumor benefitted most from the introduction of newly developed regimens,” they added.
The researchers acknowledged that a key limitation of the study was the inability to assess treatment regimen–specific survival, which could only be estimated with these data.
“The findings of our study further confirmed the impact of novel agents on improved survival over time that was shown in other studies,” they wrote.
The study was supported by grant funding from the Cancer Prevention Research Institute of Texas and the National Institutes of Health. The researchers reported having no conflicts of interest.
SOURCE: Fu S et al. Cancer Epidemiol. 2019 Feb;58:89-97.
FROM CANCER EPIDEMIOLOGY
Novel transplant protocol improves engraftment in severe hemoglobinopathies
Doubling total body irradiation improved rates of engraftment without altering safety in patients with severe hemoglobinopathies undergoing haploidentical hematopoietic cell transplantation, new findings suggest.
“[A]lthough our previous study showed cures in most patients and low toxicity, the graft failure rate – albeit all with full host recovery – was 50%,” Francisco Javier Bolaños-Meade, MD, of Johns Hopkins University, Baltimore, and his colleagues wrote in the Lancet Haematology. The present study set out to decrease graft failure in these patients.
The researchers conducted a single-center study of 17 consecutive patients who underwent haploidentical hematopoietic cell transplantation for a severe hemoglobinopathy. A total of 12 patients had sickle cell disease and 5 had beta-thalassemia major.
Study participants received a nonmyeloablative conditioning regimen, grafts from haploidentical related donors, and postprocedure cyclophosphamide.
“The primary endpoint of the study was modified to evaluate engraftment by measurement of blood chimerism,” they wrote.
After analysis, Dr. Bolaños-Meade and his colleagues found that increasing the total body irradiation dose from 200 cGy to 400 cGy lowered graft failure without raising toxicity. Only one participant had primary graft failure, and that patient experienced recovery of host hematopoiesis.
Of the 17 patients, 13 patients (76%) achieved full donor chimerism and 3 patients (18%) had mixed donor-host chimerism. Three patients remained on immunosuppression, the researchers reported.
With respect to safety, five patients developed acute graft-versus-host disease (GVHD), which varied from grade 2 to 4; chronic GVHD was seen in three patients.
“The results of our study warrant further investigation to determine whether the curative potential of allogeneic bone marrow transplantation can extend beyond the traditionally small fraction of patients with severe hemoglobinopathies who have matched donors and are healthy enough to receive myeloablative conditioning,” they wrote.
The study was funded by the National Institutes of Health and the Maryland Stem Cell Research Fund. The researchers reported financial disclosures related to Aduro Biotech, Amgen, Alexion Pharmaceuticals, Celgene, Takeda, and others.
SOURCE: Bolaños-Meade FJ et al. Lancet Haematol. 2019 Mar 13. doi: 10.1016/S2352-3026(19)30031-6.
FROM LANCET HAEMATOLOGY