Higher prenatal exposure to daylight tied to lower depression risk
Prenatal programming of the circadian and limbic systems might play a role in the odds of developing lifetime depression, a longitudinal study of almost 161,000 women shows.
“Our results could add support to an emerging hypothesis that perinatal photoperiod may influence depression risk,” wrote Elizabeth E. Devore of Brigham and Women’s Hospital, Boston, and her associates. “If replicated, ... these results could translate into safe and inexpensive light-related interventions for mothers and babies.”
In the study, which was published in the Journal of Psychiatric Research, Ms. Devore, who also is affiliated with Harvard Medical School, Boston, and her associates examined the influence of daylight exposure during pregnancy on lifetime depression risk in the resulting offspring. They found that increased exposure to daylight during gestation correlated with a reduced lifetime risk of depression.
The effects of daylight exposure were modest within the study population, but the authors emphasized that the finding would have much “larger effects at the population level,” given how common depression is in the general population. They added that their findings reinforce a growing consensus that perinatal exposure to daylight could influence the risk of developing a mood disorder.
The investigators accessed the Nurses’ Health Study (NHS) and the NHS II, established in 1976 and 1989, respectively, to assess risk factors for chronic conditions in female nurses. Both studies biennially collected data on health, lifestyle, and medication use through mailed questionnaires. The first cohort comprised 121,701 women aged 30-55 years; the second included 116,430 women aged 25-42 years. Altogether, 160,737 women born full term were included in the study; 20,912 women were excluded from the original survey group for not reporting depression status, and an additional 43,325 for not reporting their state of birth.
From data collected regarding participants’ day and state of birth, the researchers were able to estimate total length of daylight exposure during pregnancy using mathematical equations published by the National Oceanic and Atmospheric Administration.
Latitude and longitude coordinates pinpointing the center of population density for a participant’s birth state were used to approximate each participant’s location during gestation. Using those assumptions, the authors established the two key exposure measures evaluated in the study: total daylight exposure during gestation, calculated by adding the day lengths of all 280 days of the pregnancy, and the extreme difference in daylight exposure across the pregnancy, measured by subtracting the shortest day length during gestation from the longest.
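The two exposure measures lend themselves to a brief sketch. The snippet below is illustrative only: it substitutes a textbook solar-declination approximation (Cooper’s equation) for NOAA’s full published equations, and the function names and the fixed 280-day gestation window are assumptions for demonstration, not the authors’ code.

```python
import math

def day_length_hours(day_of_year: int, latitude_deg: float) -> float:
    """Approximate daylight duration for one calendar day.

    Uses Cooper's solar-declination formula, a simplification of the
    NOAA sunrise/sunset equations the authors cite.
    """
    # Solar declination (degrees) for this day of the year
    decl = 23.44 * math.sin(math.radians(360.0 * (284 + day_of_year) / 365.0))
    # Hour angle at sunrise/sunset; clamping handles polar day and night
    x = -math.tan(math.radians(latitude_deg)) * math.tan(math.radians(decl))
    x = max(-1.0, min(1.0, x))
    return 2.0 * math.degrees(math.acos(x)) / 15.0

def pregnancy_daylight_metrics(birth_day_of_year: int, latitude_deg: float):
    """Return (total daylight hours, longest minus shortest day length)
    over an assumed 280-day gestation ending on the birth date."""
    days = [((birth_day_of_year - 1 - k) % 365) + 1 for k in range(280)]
    lengths = [day_length_hours(d, latitude_deg) for d in days]
    return sum(lengths), max(lengths) - min(lengths)
```

In this sketch, a participant’s birth date (as day of year) and the population-center latitude of her birth state would feed these functions to yield the study’s two exposure variables.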
The investigators paid particular attention to reported levels of depression, evidence of suicide, and personal characteristics and lifestyle factors, such as race, hair color, and early-life socioeconomic factors, including parents’ homeownership at the time of offspring birth; birth weight; history of having been breastfed; and parental occupation throughout the participant’s childhood.
Participants did not begin reporting antidepressant use until 1996; reporting of clinician-diagnosed depression began in 2000, Ms. Devore and her associates noted.
Total daylight exposure during pregnancy was found to have “a borderline significant association with odds of lifetime depression,” but the trend was not convincing qualitatively, “and individual estimates across quintiles of exposure” were not considered statistically significant. However, a larger difference between minimum and maximum daylight exposure throughout pregnancy was significantly associated with a lower lifetime risk of depression. Women with the largest differences between minimum and maximum daylight exposure during gestation had a 12% lower risk of depression in the NHS cohort and a 15% lower risk in the NHS II cohort; when the two cohorts were combined, the risk reduction was 13%.
When evaluating daylight exposure by trimester of pregnancy, the authors noted an association for the first trimester, but the association was much stronger for the second trimester; no association was found for the third trimester.
In terms of the effects of daylight exposure on incidence of suicide, no significant associations were found.
Because birth latitude and birth season were of key interest in this study, their relative contributions to total daylight exposure and extreme differences in exposure were considered. Consistent with observations from the National Health and Nutrition Examination Survey (NHANES), women born in northern latitudes had a 7% lower risk of lifetime depression, compared with women born in middle latitudes. Conversely, women born in southern latitudes had a 15% higher risk of depression. No association was found between birth season and incidence of depression, regardless of how season was defined.
The investigators cited several limitations. One is that they did not collect data on behavioral factors, such as the time participants’ mothers spent outdoors. “Our method of exposure calculation relied on the assumption that participants’ mothers were exposed to sunlight from sunrise to sunset,” Ms. Devore and her associates wrote, noting that this way of assessing exposure might have biased the results.
Nevertheless, they said, more studies are needed to examine the role that birth latitude and birth season might play with regard to depression.
The research for this study was supported by the National Institute of Mental Health and the Centers for Disease Control and Prevention/National Institute for Occupational Safety and Health. Additional infrastructure support for the Nurses’ Health Studies was provided by the National Cancer Institute.
Ms. Devore reported receiving consulting fees from Epi Excellence and Bohn Epidemiology; the other authors declared no conflicts of interest.
SOURCE: Devore EE et al. J Psychiatric Res. 2018. 104(08):e20180225.
FROM THE JOURNAL OF PSYCHIATRIC RESEARCH
Key clinical point: If replicated, these findings could translate into safe, inexpensive light-based interventions for both mothers and babies.
Major finding: Benefits of daytime light exposure are highest with second-trimester exposure.
Study details: Longitudinal cohort study of almost 161,000 women who were born full term.
Disclosures: Ms. Devore reported receiving consulting fees from Epi Excellence and Bohn Epidemiology; the other authors declared no conflicts of interest.
Source: Devore EE et al. J Psychiatric Res. 2018. 104(08):e20180225.
Facial exercises hastened the effects of botulinum toxin in small study
Performing facial muscle exercises after botulinum toxin injections hastened the visible effects of the injections by 1 day, a small, randomized study has shown.
The study addressed the lack of data regarding whether exercising treated muscles for several hours after injections helped “to enhance uptake” of botulinum toxin, said Murad Alam, MD, chief of cutaneous and aesthetic surgery and a professor of dermatology at Northwestern University, Chicago, and his coauthors.
“The results of this study suggest that a postinjection facial exercise regimen is a safe and effective method for achieving an earlier onset of clinical effect of botulinum toxin injections,” he and his coauthors concluded. The results were reported in the Journal of the American Academy of Dermatology.
The study enrolled 22 women aged 27-66 years (mean age, 47 years) who received botulinum toxin injections for forehead and glabella dynamic rhytids. Following the injections, half of the women performed facial exercises for 4 hours and the other half avoided facial contractions for 4 hours. Exercises included raising the forehead and scowling – such as knitting the brows – in three sets of 40 repetitions separated by 10 minutes, according to a Northwestern University press release. At 7 months, the women returned for retreatment and switched groups.
Two blinded dermatologists rated photos of forehead and glabella dynamic creases at baseline and on days 1, 2, 3, 4, 7, and 14 with the 5-point Carruthers’ Forehead Lines Grading Scale and the 4-point Gladys study group rating scale for glabellar frown lines. The women also assessed their own dynamic creases using a 7-point Subject Self-Evaluation Improvement Scale.
By day 3, ratings by dermatologists and patients of glabellar and forehead wrinkles were statistically significantly better for patients who had performed the exercises after the injections. When facial exercises followed the injections, women said they saw noticeable glabellar improvement by day 2 or 3, compared with day 3 or 4 among those who did not do facial exercises after the injections (P = .02).
“A significant advantage in the exercise group was detectable as early as day 3, at which point patients’ self-evaluation wrinkle scores increased by approximately twice as much in exercisers compared to non-exercisers,” the study authors noted.
But by 2 weeks, the effects of treatment were similar in both groups, and the effects of treatment lasted for a similar period of time with or without exercises.
“Expediting the time to noticeable benefit, even by one day, may be clinically significant for some patients,” the authors wrote. Exercises could be recommended only to those patients who need faster results “to avoid needless inconvenience,” they added.
The study was supported by research funds from the department of dermatology at Northwestern University. The authors had no relevant disclosures.
SOURCE: Alam M et al. J Am Acad Dermatol. 2018. doi: 10.1016/j.jaad.2018.10.013.
FROM THE JOURNAL OF THE AMERICAN ACADEMY OF DERMATOLOGY
Key clinical point: Recommending facial muscle exercises after botulinum toxin injections to the forehead is now evidence based.
Major finding: Posttreatment facial exercise after botulinum toxin injections reduced the appearance of forehead wrinkles one day earlier.
Study details: A randomized, crossover clinical trial of 22 women treated with botulinum toxin for dynamic rhytids of the forehead and glabella.
Disclosures: The study was supported by research funds from the department of dermatology at Northwestern University. The authors had no relevant disclosures.
Source: Alam M et al. J Am Acad Dermatol. 2018. doi: 10.1016/j.jaad.2018.10.013.
Good news, bad news about HCV in kidney disease
SAN DIEGO – There’s good news and bad news about hepatitis C virus (HCV) in patients with chronic kidney disease (CKD): The new generation of drugs that cures HCV is effective in this population, but outbreaks of infection are still plaguing the nation’s dialysis clinics.
These perspectives came in a presentation about infections in CKD at Kidney Week 2018, sponsored by the American Society of Nephrology.
First, the good news about HCV. “Treatment is now feasible for all stages of chronic kidney disease,” said gastroenterologist Paul Martin, MD, of the University of Miami. “It was possible to achieve biological cure in 99% of patients, which is truly remarkable considering what a problem kidney patients were for hepatitis C until very recently.”
The key is to treat HCV with drug combinations that lower the risk of viral resistance. “These drugs are extremely well tolerated. They’re not like interferon or ribavirin,” he said, referring to a drug combo that was formerly used to treat HCV. “We can anticipate curing hepatitis C with a finite amount of therapy in virtually every patient we see, including those with kidney disease.”
In patients with CKD, all the new drugs are approved for glomerular filtration rates greater than 30 mL/min. Sofosbuvir (Sovaldi) is not approved for patients with a filtration rate under 30 mL/min, he said, but other options are available.
Ribavirin, he added, is no longer needed with current regimens.
Dr. Martin pointed to two studies that reveal the power of the new regimens against HCV in patients with CKD. One of the studies, a 2015 industry-funded report in the Lancet, found that “once-daily grazoprevir and elbasvir for 12 weeks had a low rate of adverse events and was effective in patients infected with HCV genotype 1 and stage 4-5 chronic kidney disease.” The other study, also funded by industry and published in 2017 in the New England Journal of Medicine, found that “treatment with glecaprevir and pibrentasvir for 12 weeks resulted in a high rate of sustained virologic response in patients with stage 4 or 5 chronic kidney disease and HCV infection.”
Meanwhile, there are signs that HCV treatment may boost survival in CKD patients on dialysis, Dr. Martin said.
In terms of bad news, Priti R. Patel, MD, MPH, a medical officer with the Centers for Disease Control and Prevention, warned that dialysis clinics are still seeing HCV outbreaks. “It’s a continuing problem,” she said. “What we hear about at the CDC is the tip of the iceberg.”
The CDC says it received word of 21 HCV outbreaks of two or more cases in dialysis clinics during 2008-2017. These affected 102 patients, and more than 3,000 patients were notified that they were at risk and should be screened.
One dialysis clinic in Philadelphia had 18 cases of HCV during 2008-2013; they were blamed on “multiple lapses in infection control ... including hand hygiene and glove use, vascular access care, medication preparation, cleaning, and disinfection.”
“There should be no more than one case that has to happen for a facility to detect that it has a problem and identify a solution,” Dr. Patel said.
Because acute HCV infection can be asymptomatic, every dialysis patient should be tested for HCV antibodies, she added. “If it’s positive, confirm it. If confirmed, they should be informed of their infection status and have an evaluation for treatment.”
Dr. Martin reported consulting for Bristol-Myers Squibb and AbbVie and receiving research funding from Gilead, Bristol-Myers Squibb, AbbVie, and Merck. Dr. Patel reported no disclosures.
EXPERT ANALYSIS FROM KIDNEY WEEK 2018
FDA approves Yupelri for COPD maintenance therapy
The Food and Drug Administration has approved Yupelri (revefenacin) for maintenance therapy of patients with chronic obstructive pulmonary disease (COPD).
Revefenacin is a long-acting muscarinic antagonist aimed at improving the lung function of patients with COPD. Yupelri is an inhalation solution administered once daily through a standard jet nebulizer.
The most common adverse events associated with Yupelri are cough, nasopharyngitis, upper respiratory tract infection, headache, and back pain. Patients receiving other anticholinergic-containing drugs or OATP1B1 and OATP1B3 inhibitors should not receive Yupelri.
“Patients should also be alert for signs and symptoms of acute narrow-angle glaucoma [e.g., eye pain or discomfort, blurred vision, visual changes]. Patients should consult a healthcare professional immediately if any of these signs or symptoms develop,” the FDA said in the press release.
The expanded label for Yupelri can be found on the FDA website.
Lower glucose targets show improved mortality in cardiac patients
Tighter glucose control, achieved while minimizing the risk of severe hypoglycemia, is associated with lower mortality among critically ill cardiac patients, new research suggests.
Researchers reported in CHEST on the outcomes of a multicenter retrospective cohort study in 1,809 adults in cardiac ICUs. Patients were treated either to a blood glucose target of 80-110 mg/dL or 90-140 mg/dL, based on the clinician’s preference, but using a computerized ICU insulin infusion protocol that the authors said had resulted in low rates of severe hypoglycemia.
The study found that patients treated to the 80-110 mg/dL blood glucose target had a significantly lower unadjusted 30-day mortality, compared with patients treated to the 90-140 mg/dL target (4.3% vs. 9.2%; P less than .001). The lower mortality in the lower target group was evident among both diabetic (4.7% vs. 12.9%; P less than .001) and nondiabetic patients (4.1% vs. 7.4%; P = .02).
Researchers also saw that unadjusted 30-day mortality increased with increasing median glucose levels: 5.5% in patients with a median blood glucose of 70-110 mg/dL, 8.3% in those with levels of 141-180 mg/dL, and 25% in those with levels higher than 180 mg/dL.
Patients treated to the 80-110 mg/dL blood glucose target were more likely to experience an episode of moderate hypoglycemia, compared with those in the higher target group (18.6% vs. 8.3%; P less than .001). However, the rates of severe hypoglycemia were low in both groups, and the difference between the low and high target groups did not reach statistical significance (1.16% vs. 0.35%; P = .051).
The authors did note that patients whose blood glucose dropped below 60 mg/dL showed increased mortality, regardless of which target was set for them. The 30-day unadjusted mortality in these patients was 15%, compared with 5.2% for patients in either group who did not experience a blood glucose level below 60 mg/dL.
“Our results further the discussion about the appropriate BG [blood glucose] target in the critically ill because they suggest that the BG target and severe hypoglycemia effects can be separated,” wrote Andrew M. Hersh, MD, of the division of pulmonary and critical care at San Antonio Military Medical Center, and his coauthors.
But they said the large differences in mortality seen between the two treatment targets should be interpreted with caution, as it was difficult to attribute that difference solely to an 18 mg/dL difference in blood glucose treatment targets.
“While we attempted to capture factors that influenced clinician choice, and while our model successfully achieved balance, suggesting that residual confounding was minimized, we suspect that some of the mortality signal may be attributable to residual confounding,” they wrote.
Another explanation could be that hypoglycemia was an “epiphenomenon” of multiorgan failure, as some studies have found that both spontaneous and iatrogenic hypoglycemia were independently associated with mortality. “However, given the very-low rates of severe hypoglycemia found in both groups it is unlikely that this was a main driver of the mortality difference found,” the investigators wrote.
The majority of patients in the study had been admitted to the hospital for chest pain or acute coronary syndrome (43.3%), while 31.9% were admitted for cardiothoracic surgery, 6.8% for heart failure including cardiogenic shock, and 6% for vascular surgery.
The authors commented that a safe and reliable protocol for intensive insulin therapy, with high clinician compliance, could be the key to realizing its benefits, and could be aided by recent advances such as closed-loop insulin delivery systems.
They also stressed that their results did not support a rejection of current guidelines and instead called for large randomized, clinical trials to find a balance between benefits and harms of intensive insulin therapy.
“Instead our analysis suggests that trials such as NICE-SUGAR, and the conclusion they drew, may have been accurate only in the setting of technologies, which led to high rates of severe hypoglycemia.”
No conflicts of interest were declared.
After the multicenter NICE-SUGAR trial showed higher 90-day mortality in patients treated with intensive insulin therapy to lower blood glucose targets, compared with more moderate targets, enthusiasm has waned for tighter blood glucose control, James S. Krinsley, MD, argued in an editorial accompanying the study (CHEST 2018; 154[5]:1004-5). But the assumption of a “one-size-fits-all” approach to glucose control in the critically ill is a potential flaw of randomized clinical trials, he noted, and some patients may be better suited to tighter control than others. This study has shown that standardized protocols, including frequent measurement of blood glucose, can safely achieve tight blood glucose control in the ICU with low rates of hypoglycemia. If these findings are confirmed in larger multicenter clinical trials, it should prompt a rethink of blood glucose targets in the critically ill, he concluded.
Dr. Krinsley is director of critical care at Stamford (Conn.) Hospital and clinical professor of medicine at the Columbia University College of Physicians and Surgeons, New York. He declared consultancies or advisory board positions with Edwards Life Sciences, Medtronic, OptiScan Biomedical, and Roche Diagnostics.
FROM CHEST
Key clinical point: Tighter blood glucose control may reduce 30-day mortality in critically ill cardiac patients.
Major finding: Unadjusted 30-day mortality increased with increasing median glucose levels: 5.5% in patients with a median blood glucose between 70 and 110 mg/dL, and 25% in those above 180 mg/dL.
Study details: A retrospective cohort study in 1,809 adults in cardiac intensive care units.
Disclosures: No conflicts of interest were declared.
Source: Hersh AM et al. Chest. 2018 Nov;154(5):1044-51.
P-BCMA-101 gains FDA regenerative medicine designation
P-BCMA-101, a chimeric antigen receptor (CAR) T-cell therapy in development for multiple myeloma (MM), has received the regenerative medicine advanced therapy (RMAT) designation from the Food and Drug Administration.
P-BCMA-101 modifies patients’ T cells using a nonviral DNA modification system known as piggyBac. The modified T cells target cells expressing B-cell maturation antigen (BCMA), which is expressed on essentially all MM cells.
Early results from the phase 1 clinical trial of P-BCMA-101 were recently reported at the 2018 CAR-TCR Summit by Eric Ostertag, MD, PhD, chief executive officer of Poseida Therapeutics, the company developing P-BCMA-101.
Initial results of the trial (NCT03288493) included data on 11 patients with heavily pretreated MM. Patients had a median age of 60 years, 73% were high risk, and they had received a median of six prior therapies.
Patients received conditioning treatment with fludarabine and cyclophosphamide for 3 days prior to receiving P-BCMA-101. They then received one of three doses of CAR T cells – 51×10⁶ (n=3), 152×10⁶ (n=7), or 430×10⁶ (n=1).
The investigators observed no dose-limiting toxicities. Adverse events included neutropenia in eight patients and thrombocytopenia in five.
One patient may have had cytokine release syndrome, but the condition resolved without drug intervention. And investigators observed no neurotoxicity.
Seven of the 10 patients evaluable for response by International Myeloma Working Group criteria achieved at least a partial response, including very good partial responses and a stringent complete response.
The eleventh patient has oligosecretory disease and was only evaluable by PET, which indicated a near-complete response.
Poseida expects to have additional data to report by the end of the year, according to Dr. Ostertag. The study is funded by the California Institute for Regenerative Medicine and Poseida Therapeutics.
The RMAT designation is intended to expedite development and review of regenerative medicines that are intended to treat, modify, reverse, or cure a serious or life-threatening disease or condition.
Preliminary evidence must indicate that the therapy has the potential to address unmet medical needs for the disease or condition. RMAT designation includes all the benefits of fast track and breakthrough therapy designations, including early interactions with the FDA.
(MM), has received the regenerative medicine advanced therapy (RMAT) designation from the Food and Drug Administration.
P-BCMA-101 modifies patients’ T cells using a nonviral DNA modification system known as piggyBac. The modified T cells target cells expressing B-cell maturation antigen (BCMA), which is expressed on essentially all MM cells.
Early results from the phase 1 clinical trial of P-BCMA-101 were recently reported at the 2018 CAR-TCR Summit by Eric Ostertag, MD, PhD, chief executive officer of Poseida Therapeutics, the company developing P-BCMA-101.
Initial results of the trial (NCT03288493) included data on 11 patients with heavily pretreated MM. Patients were a median age of 60, and 73% were high risk. They had a median of six prior therapies.
Patients received conditioning treatment with fludarabine and cyclophosphamide for 3 days prior to receiving P-BCMA-101. They then received one of three doses of CAR T cells – 51×106 (n=3), 152×106 (n=7), or 430×106 (n=1).
The investigators observed no dose-limiting toxicities. Adverse events included neutropenia in eight patients and thrombocytopenia in five.
One patient may have had cytokine release syndrome, but the condition resolved without drug intervention. And investigators observed no neurotoxicity.
Seven of ten patients evaluable for response by International Myeloma Working Group criteria achieved at least a partial response, including very good partial responses and stringent complete response.
The eleventh patient has oligosecretory disease and was only evaluable by PET, which indicated a near-complete response.
Poseida expects to have additional data to report by the end of the year, according to Dr. Ostertag. The study is funded by the California Institute for Regenerative Medicine and Poseida Therapeutics.RMAT designation is intended to expedite development and review of regenerative medicines that are intended to treat, modify, reverse, or cure a serious or life-threatening disease or condition.
Preliminary evidence must indicate that the therapy has the potential to address unmet medical needs for the disease or condition. RMAT designation includes all the benefits of fast track and breakthrough therapy designations, including early interactions with the FDA.
(MM), has received the regenerative medicine advanced therapy (RMAT) designation from the Food and Drug Administration.
P-BCMA-101 modifies patients’ T cells using a nonviral DNA modification system known as piggyBac. The modified T cells target cells expressing B-cell maturation antigen (BCMA), which is expressed on essentially all MM cells.
Early results from the phase 1 clinical trial of P-BCMA-101 were recently reported at the 2018 CAR-TCR Summit by Eric Ostertag, MD, PhD, chief executive officer of Poseida Therapeutics, the company developing P-BCMA-101.
Initial results of the trial (NCT03288493) included data on 11 patients with heavily pretreated MM. Patients had a median age of 60 years, 73% were high risk, and they had received a median of six prior therapies.
Patients received conditioning treatment with fludarabine and cyclophosphamide for 3 days prior to receiving P-BCMA-101. They then received one of three doses of CAR T cells – 51 × 10⁶ (n = 3), 152 × 10⁶ (n = 7), or 430 × 10⁶ (n = 1).
The investigators observed no dose-limiting toxicities. Adverse events included neutropenia in eight patients and thrombocytopenia in five.
One patient may have had cytokine release syndrome, but the condition resolved without drug intervention, and the investigators observed no neurotoxicity.
Seven of the ten patients evaluable for response by International Myeloma Working Group criteria achieved at least a partial response, including very good partial responses and a stringent complete response.
The eleventh patient had oligosecretory disease and was evaluable only by PET, which indicated a near-complete response.
Poseida expects to have additional data to report by the end of the year, according to Dr. Ostertag. The study is funded by the California Institute for Regenerative Medicine and Poseida Therapeutics.
RMAT designation is intended to expedite development and review of regenerative medicines that are intended to treat, modify, reverse, or cure a serious or life-threatening disease or condition.
Preliminary evidence must indicate that the therapy has the potential to address unmet medical needs for the disease or condition. RMAT designation includes all the benefits of fast track and breakthrough therapy designations, including early interactions with the FDA.
SRS beats surgery in early control of brain mets, advantage fades with time
Stereotactic radiosurgery (SRS) provides better early local control of brain metastases than complete surgical resection, but this advantage fades with time, according to investigators.
By 6 months, lower risks associated with SRS shifted in favor of those who had surgical resection, reported lead author Thomas Churilla, MD, of Fox Chase Cancer Center in Philadelphia and his colleagues.
“Outside recognized indications for surgery such as establishing diagnosis or relieving mass effect, little evidence is available to guide the therapeutic choice of SRS vs. surgical resection in the treatment of patients with limited brain metastases,” the investigators wrote in JAMA Oncology.
The investigators performed an exploratory analysis of data from the European Organization for the Research and Treatment of Cancer (EORTC) 22952-26001 phase 3 trial, which was designed to evaluate whole-brain radiotherapy for patients with one to three brain metastases who had undergone SRS or complete surgical resection. The present analysis involved 268 patients, of whom 154 had SRS and 114 had complete surgical resection.
Primary tumors included lung, breast, colorectum, kidney, and melanoma. Initial analysis showed that patients undergoing surgical resection, compared with those who had SRS, typically had larger brain metastases (median, 28 mm vs. 20 mm) and more often had a single brain metastasis (98.2% vs. 74.0%). The location of metastases also differed between groups; compared with patients receiving SRS, surgical patients more often had metastases in the posterior fossa (26.3% vs. 7.8%) and less often in the parietal lobe (18.4% vs. 39.6%).
After a median follow-up of 39.9 months, risks of local recurrence were similar between the surgical and SRS groups (hazard ratio, 1.15). Stratifying by interval, however, showed that surgical patients were at much higher risk of local recurrence in the first 3 months following treatment (HR for 0-3 months, 5.94). This risk faded with time (HR for 3-6 months, 1.37; HR for 6-9 months, 0.75; HR for 9 months or longer, 0.36); from the 6- to 9-month interval onward, surgical patients had a lower risk of recurrence than SRS patients, an advantage that grew with longer follow-up.
“Prospective controlled trials are warranted to direct the optimal local approach for patients with brain metastases and to define whether any population may benefit from escalation in local therapy,” the investigators concluded.
The study was funded by the National Cancer Institute, National Institutes of Health, and Fonds Cancer in Belgium. One author reported receiving financial compensation from Pfizer via her institution.
SOURCE: Churilla T et al. JAMA Oncol. 2018. doi: 10.1001/jamaoncol.2018.4610.
FROM JAMA ONCOLOGY
Key clinical point: Stereotactic radiosurgery (SRS) provides better early local control of brain metastases than surgical resection, but this advantage fades with time.
Major finding: Patients treated with surgery were more likely to have local recurrence in the first 3 months following treatment, compared with patients treated with SRS (hazard ratio, 5.94).
Study details: An exploratory analysis of data from the European Organization for the Research and Treatment of Cancer (EORTC) 22952-26001 phase 3 trial. Analysis involved 268 patients with one to three brain metastases who underwent whole-brain radiotherapy or observation after SRS (n = 154) or complete surgical resection (n = 114).
Disclosures: The study was funded by the National Cancer Institute, National Institutes of Health, and Fonds Cancer in Belgium. Dr. Handorf reported financial compensation from Pfizer, via her institution.
Source: Churilla T et al. JAMA Oncol. 2018. doi: 10.1001/jamaoncol.2018.4610.
Capmatinib active against NSCLC with MET exon 14 mutations
MUNICH – The experimental agent capmatinib was associated with a high response rate when used in the first line for patients with advanced non–small cell lung cancers bearing MET exon 14–skipping mutations, said investigators in the Geometry MONO-1 trial.
Among a cohort of 25 patients with treatment-naive, MET exon 14–mutated non–small cell lung cancer (NSCLC), the primary endpoint of overall response rate (ORR) as determined by blinded, independent reviewers was 72%.
In contrast, the ORR among 69 patients who had received one or more prior lines of therapy was 39.1%, reported Juergen Wolf, MD, of University Hospital Cologne (Germany).
“The differential benefit observed between patients treated in the first line and relapsed [settings] highlights the need of early diagnosis of this aberration, and prompt targeted treatment of this challenging patient population,” he said at the European Society for Medical Oncology Congress.
MET exon 14–skipping mutations occur in approximately 3%-4% of NSCLC cases. The mutation is thought to be an oncogenic driver and has been shown to be a poor prognostic factor for patients with advanced NSCLC. Patients with this mutation have poor responses to conventional therapy and immune checkpoint inhibitors, even when their tumors have high levels of programmed death–ligand 1 (PD-L1) and a high mutational burden, Dr. Wolf said.
Capmatinib (INC280) is an oral, reversible inhibitor of the MET receptor tyrosine kinase and is highly selective for MET, with particular affinity for MET exon 14 mutations. It is also capable of crossing the blood-brain barrier and has shown activity in the brain in preliminary studies.
The Geometry MONO-1 trial is a phase 2 study of capmatinib in patients with stage IIIB/IV NSCLC with tumors that demonstrate MET amplification and/or carry the MET exon 14 mutation. Three study cohorts of patients with MET amplification were closed for futility. Dr. Wolf reported results from two cohorts of patients with MET exon 14–skipping mutations regardless of gene copy number: one with treatment-naive patients and the other with patients being treated in the second or third line.
As noted, the ORR in 25 patients in the treatment-naive cohort after a median follow-up of 5.6 months was 72%, including 18 partial responses and no complete responses. In addition, six patients (24%) had stable disease, for a disease control rate of 96%.
In the pretreated cohort, however, there were no complete responses among 69 patients, and 27 patients (39.1%) had partial responses. In this cohort, an additional 26 patients (37.7%) had stable disease, for an ORR of 39.1% and disease-control rate of 78.3%.
Dr. Wolf also highlighted preliminary evidence of capmatinib activity in the brain. He noted that one patient, an 80-year-old woman with multiple untreated brain metastases as well as lesions in dermal lymph nodes, liver, and pleura, had complete resolution of brain metastases at the first postbaseline CT scan, 42 days after starting capmatinib. The duration of response was 11.3 months, at which point the patient discontinued the drug because of extracranial progressive disease.
Among all patients in all study cohorts (N = 302), the most common grade 3 or 4 adverse events were peripheral edema, dyspnea, fatigue, nausea, vomiting, and decreased appetite. Drug-related grade 3 or 4 adverse events included peripheral edema, nausea, vomiting, fatigue, and decreased appetite. In all, 10.3% of patients discontinued because of adverse events suspected to be related to capmatinib.
Invited discussant James Chih-Hsin Yang, MD, PhD, from the National Taiwan University Hospital in Taipei, said that the study shows that the MET exon 14–skipping mutation is an oncogenic driver and that capmatinib is an effective tyrosine kinase inhibitor (TKI) for patients with NSCLC harboring this mutation.
Questions that still need to be answered, he said, include whether patients with the mutation are heterogeneous and may have differing response to TKIs, how long the duration of response is, how long it will take for resistance to capmatinib to occur, how it compares with other MET inhibitors, and if there are additional biomarkers that could help select patients for treatment with the novel agent.
The study was funded by Novartis. Dr. Wolf reported advisory board participation, institutional research support, and lecture fees from Novartis and others. Dr. Yang reported honoraria from advisory board participation and/or speaking from Novartis and others. His institution participated in the Geometry MONO-1 study, but he was not personally involved.
REPORTING FROM ESMO 2018
Key clinical point: Patients with non–small cell lung cancer bearing a MET exon 14–skipping mutation had high overall response rates to the MET inhibitor capmatinib.
Major finding: The overall response rate in treatment-naive patients was 72%.
Study details: A phase 2 trial with previously treated and untreated patients with advanced non–small cell lung cancers bearing MET exon 14–skipping mutations.
Disclosures: The study was funded by Novartis. Dr. Wolf reported advisory board participation and lecture fees from Novartis and others and institutional research support from Novartis and others. Dr. Yang reported honoraria from advisory board participation and/or speaking from Novartis and others. His institution participated in the Geometry MONO-1 study, but he was not personally involved.
FDA authorizes emergency use of rapid fingerstick test for Ebola
The Food and Drug Administration has issued an emergency use authorization (EUA) for the DPP Ebola Antigen System, a rapid, single-use test for the detection of Ebola virus.
The DPP Ebola Antigen System can provide rapid results in locations where health care providers lack access to authorized Ebola virus nucleic acid tests, which are highly sensitive but require an adequately equipped laboratory setting. The new system is authorized to use blood specimens from capillary whole blood, ethylenediaminetetraacetic acid (EDTA) venous whole blood, and EDTA plasma. It is to be used in individuals with signs and symptoms of Ebola virus disease, in addition to other risk factors, such as living in an area with high Ebola virus prevalence or having had contact with people showing signs or symptoms of the disease.
The system is the second Ebola rapid antigen fingerstick test made available through the EUA, but it is the first to use a portable, battery-operated reader, allowing for easier use in remote areas where patients are likely to be treated.
The FDA noted that a negative result from the DPP Ebola Antigen System does not necessarily rule out Ebola virus infection and should not be treated as definitive, especially in individuals displaying signs and symptoms of Ebola virus disease.
“This EUA is part of the agency’s ongoing efforts to help mitigate potential, future threats by making medical products that have the potential to prevent, diagnose, or treat available as quickly as possible. We’re committed to helping the people of the DRC [Democratic Republic of the Congo] effectively confront and end the current Ebola outbreak. By authorizing the first fingerstick test with a portable reader, we hope to better arm health care providers in the field to more quickly detect the virus in patients and improve patient outcomes,” FDA Commissioner Scott Gottlieb, MD, said in the press release.
Find the full press release on the FDA website.
Think research is just for MD-PhDs? Think again
ATLANTA – You don’t have to hold an advanced research degree or secure National Institutes of Health funding in order to contribute to neurology research in a meaningful way.
That’s a key finding from an analysis of 244 neurology residency program graduates.
“Science as a whole is trying to get better,” lead study author Wyatt P. Bensken said in an interview at the annual meeting of the American Neurological Association. “If your goal is to be a clinician, that doesn’t mean you can’t contribute to research. If your goal is to see patients for 80% of your time, that doesn’t mean that other 20% – which is research – disqualifies you from being a physician-scientist.”
In an effort to better understand the current status of the physician-scientist workforce in the neurology field, Mr. Bensken and his colleagues used program websites to identify neurology residency graduates from the top National Institute of Neurological Disorders and Stroke–funded institutions for 2003, 2004, and 2005. Data points collected for each individual included complete NIH and other government funding history, number of post-residency publications by year, and the Hirsch index, or h-index, which measures an individual’s research publication impact. The researchers analyzed the data via visualization and ANOVA testing.
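The h-index mentioned above has a simple definition: it is the largest number h such that the individual has h publications with at least h citations each. A minimal sketch of that calculation (illustrative only; not the study's analysis code):

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    # Rank citation counts from highest to lowest, then keep advancing
    # while each paper's citations still meet or exceed its rank.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Example: five papers cited 10, 8, 5, 4, and 3 times -> h-index of 4
print(h_index([10, 8, 5, 4, 3]))
```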
Mr. Bensken, a research collaborator with the NINDS who is also a PhD student at Case Western Reserve University in Cleveland, reported that 186 of the 244 neurology residency program graduates had demonstrated interest in research based on their publication activity findings. Specifically, 26 had obtained an R01 grant, 31 were non–R01-funded, and 129 were nonfunded. Of the 26 individuals who had obtained an R01, 15 (58%) were MD-PhDs, from a total of 50 MD‐PhDs in the cohort. In addition, 43 individuals had a K‐series award, with 18 going on to receive R01 funding.
Of those with non‐R01 funding or no funding, a number of individuals performed as well as R01‐funded individuals with respect to post‐residency publication rate and impact factor. However, the publication rate and impact factor were highest in the R01-funded group (6.4 and 28.6, respectively), followed by those in the non‐R01 group (3.0 and 15.9), and those in the nonfunded group (1.2 and 8.0). Further, publications per research hour varied across the three groups. Specifically, those in the R01-funded group with 80% protected research time produced 3.2 publications per 1,000 research hours, while those in the non–R01-funded group with 40% protected research time produced 3.0 publications per 1,000 research hours. Meanwhile, those without R01 funding overall (those with non-R01 funding and those without funding) performed at a higher per-hour rate when estimating 10% or 15% protected time (4.9 and 3.3 publications per 1,000 research hours, respectively).
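The per-hour figures above can be reproduced approximately from the annual publication rates, if one assumes a total of roughly 2,500 working hours per year; that total is an assumption for illustration, as the abstract does not state the exact basis of the calculation:

```python
def pubs_per_1000_research_hours(pubs_per_year, protected_fraction,
                                 total_hours_per_year=2500):
    """Publications per 1,000 research hours, given annual output and the
    fraction of working time protected for research.

    The 2,500 total-hours figure is an illustrative assumption, not a
    number taken from the study.
    """
    research_hours = protected_fraction * total_hours_per_year
    return pubs_per_year / research_hours * 1000

# R01-funded: 6.4 pubs/year at 80% protected time -> 3.2 per 1,000 hours
print(round(pubs_per_1000_research_hours(6.4, 0.80), 1))
# Non-R01-funded: 3.0 pubs/year at 40% protected time -> 3.0 per 1,000 hours
print(round(pubs_per_1000_research_hours(3.0, 0.40), 1))
```

Under the same assumption, the nonfunded group's 1.2 publications per year at 10% or 15% protected time works out to roughly 4.8 or 3.2 per 1,000 hours, close to the reported 4.9 and 3.3.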
“I think this reinforces the notion that there are far more neurologists out there who aren’t trained as MD-PhDs, who aren’t receiving R01s, but who are making meaningful contributions,” Mr. Bensken said. “Our ultimate goal is to maximize the potential of everybody in this environment to contribute. If everyone was able to contribute what they could, I think research would be far more successful and far more impactful than it is now.”
The study was funded by the NINDS. Mr. Bensken reported having no financial disclosures.
SOURCE: Bensken WP et al. Ann Neurol. 2018;84[S22]:S72-3, Abstract S176.
REPORTING FROM ANA 2018
Key clinical point: Neurologists without advanced research degrees or NIH funding can still contribute meaningfully to research.
Major finding: Those in the R01-funded group with 80% protected research time produced 3.2 publications per 1,000 research hours, while those in the non–R01-funded group with 40% protected research time produced 3.0 publications per 1,000 research hours.
Study details: An analysis of 244 neurology residency program graduates.
Disclosures: The study was funded by the NINDS. Mr. Bensken reported having no financial disclosures.
Source: Bensken WP et al. Ann Neurol. 2018;84[S22]:S72-3, Abstract S176.