Dealing with complications associated with central venous access catheters
On Thursday morning, John T. Loree, a medical student at SUNY Upstate Medical School, Syracuse, will present a study that he and his colleagues performed to assess the risks and complications associated with the use of central venous access (CVA) catheters over the long term. They attempted to identify high-risk subgroups based upon patient characteristics and line type. The research is warranted so that modified follow-up regimens can be implemented to reduce risk and improve patient outcomes. In his presentation, Mr. Loree will discuss selected therapies for specific complications.
The researchers performed a PubMed database search, which located 21 papers published between 2012 and 2018. In this sample, 6,781 catheters were placed in 6,183 patients, with a total dwell time of 2,538,323 days. Patient characteristics varied from children to adults. Various line types were used (peripherally inserted central catheter [PICC], central line, mediport, tunneled central venous catheter). Indications for catheterization included chemotherapy, dialysis, total parenteral nutrition (TPN), and other medication infusion.
Mr. Loree will discuss the primary outcomes – overall complication rate and the infectious and mechanical complication rates per 1,000 catheter-days.
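For readers unfamiliar with the unit, a rate per 1,000 catheter-days is simply the event count divided by total dwell time, scaled by 1,000. A minimal sketch follows; the event count of 500 is hypothetical for illustration, and only the pooled dwell time of 2,538,323 days comes from the review described above:

```python
def rate_per_1000_catheter_days(events: int, total_dwell_days: int) -> float:
    """Standardized complication rate: events per 1,000 catheter-days."""
    return events / total_dwell_days * 1000

# Hypothetical example: 500 complications over the pooled dwell time
print(round(rate_per_1000_catheter_days(500, 2_538_323), 3))  # 0.197
```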
He and his colleagues found that port purpose was significantly predictive of infection rate, while port type was selectively predictive of overall and mechanical complication rate. Subgroup analysis demonstrated significantly increased overall complication rates in peripherally inserted catheters and patients receiving medications, and increased mechanical complication rates with central lines.
Mr. Loree will discuss how the complication rates associated with long-term use of CVA catheters were associated with factors easily identifiable at the initial patient visit.
Their data will show how, overall, PICC lines used for TPN/medication administration were associated with the highest complication rate, while mediports used for chemotherapy were associated with the lowest complication rate. Based on these patient characteristics, stricter follow-up to monitor for complications can be used in select patients to improve patient outcomes, according to Mr. Loree.
Satisfaction high among psoriasis patients on apremilast
MADRID – Psoriasis patients started on apremilast were significantly less likely to switch to a different treatment within the next 12 months than were patients on their first tumor necrosis factor (TNF) inhibitor, in a large retrospective national propensity score-matched study.
“This was surprising to us,” David L. Kaplan, MD, admitted in presenting the study findings at the annual congress of the European Academy of Dermatology and Venereology.
The surprise came because apremilast, a phosphodiesterase 4 (PDE4)-inhibitor, is less potent than the injectable biologics at driving down Psoriasis Area and Severity Index (PASI) scores.
“This is real-world data. And this is what patients are saying at 1 year: that they’re actually happier [with apremilast] and they’re not interested in changing,” said Dr. Kaplan, a dermatologist at the University of Kansas and in private practice in Overland Park, Kan.
He and his coinvestigators tapped the IBM Watson MarketScan health insurance claims database for 2015-2016 and identified 1,645 biologic-naive adults with psoriasis who started on apremilast therapy and an equal number of biologic-naive psoriasis patients who initiated treatment with a biologic, of whom 1,207 started on a TNF inhibitor and 438 began on an interleukin inhibitor, which was ustekinumab in 81% of cases. The TNF inhibitor cohort was split 80/20 between adalimumab and etanercept. The three groups – new users of apremilast, a TNF inhibitor, or an interleukin inhibitor – were propensity-matched based upon age, prior usage of systemic psoriasis therapies, Charlson Comorbidity Index scores, and other potential confounders.
The primary endpoint was the switch rate to a different psoriasis treatment within 12 months. The switch rate was significantly lower in patients who had started on apremilast than in those on a TNF inhibitor (14% vs. 25%), while the 11% switch rate among patients on an interleukin inhibitor was not significantly different from the rate in the apremilast group.
“I think this data kind of gives us pause,” the dermatologist said. “As a clinician myself, when patients come back in the first question I always ask is, ‘How’re you doing? Are you happy?’ And at the end of the day, the data in terms of switch rates shows where patients are at. And that doesn’t really follow what we see with PASI scores.”
A secondary endpoint was the switch rate through 24 months. The same pattern held true: 24.9% in the apremilast starters, which was similar to the 22.9% in patients initiated on an interleukin inhibitor, and significantly less than the 39.1% rate in the TNF inhibitor group.
Among patients who switched medications within the first 12 months, the mean number of days to the switch was similar across all three groups.
The study had several limitations. Propensity score–matching is not a cure-all that can eradicate all potential biases. And the claims database didn’t include information on why patients switched, nor what their PASI scores were. “This is real-world data, and clinicians don’t do PASI scores in the real world,” he noted.
Audience member Andrew Blauvelt, MD, a dermatologist and president of the Oregon Medical Research Center, Portland, rose to challenge Dr. Kaplan’s conclusion that patients on apremilast were happier with their care.
“How can you rule out that it’s just practices that don’t use biologics, and they’re keeping patients on apremilast regardless of whether they’re better or happy because they’re not using biologics?” inquired Dr. Blauvelt.
Dr. Kaplan conceded that might well be a partial explanation for the results.
“Reluctance to use biologics is out there,” he agreed.
Dr. Kaplan reported serving as a consultant and paid speaker for Celgene, the study sponsor, as well as several other pharmaceutical companies.
SOURCE: Kaplan DL. EADV Abstract FC04.04.
REPORTING FROM EADV 2019
Early lenalidomide may delay progression of smoldering myeloma
Early treatment with lenalidomide may delay disease progression and prevent end-organ damage in patients with high-risk smoldering multiple myeloma (SMM), according to findings from a phase 3 trial.
While observation is the current standard of care in SMM, early therapy may represent a new standard for patients with high-risk disease, explained Sagar Lonial, MD, of Winship Cancer Institute, Emory University, Atlanta, and colleagues. Their findings were published in the Journal of Clinical Oncology.
The randomized, open-label, phase 3 study included 182 patients with intermediate- or high-risk SMM. Study patients were randomly allocated to receive either oral lenalidomide at 25 mg daily on days 1-21 of a 28-day cycle or observation.
Study subjects were stratified based on time since SMM diagnosis (1 year or less vs. more than 1 year), and all patients in the lenalidomide arm received aspirin at 325 mg on days 1-28. Both interventions were maintained until unacceptable toxicity, disease progression, or withdrawal for other reasons.
The primary outcome was progression-free survival (PFS), measured from baseline to the development of symptomatic multiple myeloma (MM). The criteria for progression included evidence of end-organ damage in relation to MM and biochemical disease progression.
The researchers found that at 1 year PFS was 98% in the lenalidomide group and 89% in the observation group. At 2 years, PFS was 93% in the lenalidomide group and 76% in the observation group. PFS was 91% in the lenalidomide group and 66% in the observation group at 3 years (hazard ratio, 0.28; P = .002).
Among lenalidomide-treated patients, grade 3 or 4 adverse events, hematologic and nonhematologic combined, occurred in 36 patients (41%); nonhematologic adverse events occurred in 25 patients (28%).
Frequent AEs among lenalidomide-treated patients included grade 4 decreased neutrophil count (4.5%), as well as grade 3 infections (20.5%), hypertension (9.1%), fatigue (6.8%), skin problems (5.7%), dyspnea (5.7%), and hypokalemia (3.4%). “In most cases, [adverse events] could be managed with dose modifications,” they wrote.
To reduce long-term toxicity, the researchers recommended a 2-year duration of therapy for patients at highest risk.
“Our results support the use of early intervention in patients with high-risk SMM – as defined by the 20/2/20 criteria where our magnitude of benefit was the greatest – rather than continued observation,” the researchers wrote.
The trial was funded by the National Cancer Institute. The authors reported financial affiliations with AbbVie, Aduro Biotech, Amgen, Bristol-Myers Squibb, Celgene, Juno Therapeutics, Kite Pharma, Sanofi, Takeda, and several other companies.
SOURCE: Lonial S et al. J Clin Oncol. 2019 Oct 25. doi: 10.1200/JCO.19.01740.
FROM JOURNAL OF CLINICAL ONCOLOGY
Umbilical cord management matters less for mothers than for infants
Immediate umbilical cord milking or delayed clamping of the umbilical cord had no significant impact on maternal outcomes, but infants were significantly more likely to experience severe intraventricular hemorrhage with umbilical cord milking, according to results of two studies published in JAMA.
“While the evidence for neonatal benefit with delayed cord clamping at term is strong, data related to maternal outcomes, particularly after cesarean delivery, are largely lacking,” wrote Stephanie E. Purisch, MD, of Columbia University Irving Medical Center, New York, and colleagues.
In a randomized trial of 113 women who underwent cesarean deliveries of singleton infants, the researchers hypothesized that maternal blood loss would be greater with delayed cord clamping (JAMA. 2019 Nov 19. doi: 10.1001/jama.2019.15995).
However, maternal blood loss, based on mean hemoglobin levels 1 day after delivery, was not significantly different between the delayed group (10.1 g/dL) and the immediate group (9.8 g/dL). The median time to cord clamping was 63 seconds in the delayed group and 6 seconds in the immediate group.
In addition, no significant differences occurred in 15 of 19 prespecified secondary outcomes. However, neonatal hemoglobin levels were significantly higher with delayed clamping among infants for whom data were available (18.1 g/dL vs. 16.4 g/dL; P less than .001).
The results were limited by factors including lack of generalizability to other situations such as emergency or preterm deliveries and by the lack of a definition of a “clinically important postoperative hemoglobin change,” the researchers noted. However, the results show no significant impact of umbilical cord management on maternal hemoglobin in the study population.
In another study published in JAMA, Anup Katheria, MD, of Sharp Mary Birch Hospital for Women & Newborns, San Diego, and colleagues found no significant difference in rates of a composite outcome of death or severe intraventricular hemorrhage among infants randomized to umbilical cord milking (12%) vs. delayed umbilical cord clamping (8%). However, immediate umbilical cord milking was significantly associated with a higher rate of intraventricular hemorrhage alone, compared with delayed clamping (8% vs. 3%), and this signal of risk prompted the researchers to terminate the study earlier than intended.
The researchers randomized 474 infants born at less than 32 weeks’ gestation to umbilical cord milking or delayed umbilical cord clamping (JAMA. 2019 Nov 19. doi: 10.1001/jama.2019.16004). The study was conducted at six sites in the United States and one site each in Ireland, Germany, and Canada between June 2017 and September 2018. “Because of the importance of long-term neurodevelopment, all surviving infants will be followed up to determine developmental outcomes at 22 to 26 months’ corrected gestational age,” they said.
The study was terminated early, which prevents definitive conclusions, the researchers noted, but a new study has been approved to compare umbilical cord milking with delayed umbilical cord clamping in infants of 30-32 weeks’ gestational age, they said.
Immediate umbilical cord milking or delayed clamping of the umbilical cord had no significant impact on maternal outcomes, but infants were significantly more likely to experience severe intraventricular hemorrhage with umbilical cord milking, according to results of two studies published in JAMA.
“While the evidence for neonatal benefit with delayed cord clamping at term is strong, data related to maternal outcomes, particularly after cesarean delivery, are largely lacking,” wrote Stephanie E. Purisch, MD, of Columbia University Irving Medical Center, New York, and colleagues.
In a randomized trial of 113 women who underwent cesarean deliveries of singleton infants, the researchers hypothesized that maternal blood loss would be greater with delayed cord clamping (JAMA. 2019 Nov 19. doi: 10.1001/jama.2019.15995).
However, maternal blood loss, based on mean hemoglobin levels 1 day after delivery, was not significantly different between the delayed group (10.1 g/dL) and the immediate group (9.8 g/dL). The median time to cord clamping was 63 seconds in the delayed group and 6 seconds in the immediate group.
In addition, no significant differences occurred in 15 of 19 prespecified secondary outcomes. However, neonatal hemoglobin was significantly higher in the delayed-clamping group among infants for whom data were available (18.1 g/dL vs. 16.4 g/dL; P less than .001).
The results were limited by factors including lack of generalizability to other situations such as emergency or preterm deliveries and by the lack of a definition of a “clinically important postoperative hemoglobin change,” the researchers noted. However, the results show no significant impact of umbilical cord management on maternal hemoglobin in the study population.
In another study published in JAMA, Anup Katheria, MD, of Sharp Mary Birch Hospital for Women & Newborns, San Diego, and colleagues found no significant difference in rates of a composite outcome of death or severe intraventricular hemorrhage among infants randomized to umbilical cord milking (12%) vs. delayed umbilical cord clamping (8%). However, immediate umbilical cord milking was significantly associated with a higher rate of intraventricular hemorrhage alone, compared with delayed clamping (8% vs. 3%), and this signal of risk prompted the researchers to terminate the study earlier than intended.
The researchers randomized 474 infants born at less than 32 weeks’ gestation to umbilical cord milking or delayed umbilical cord clamping (JAMA. 2019 Nov 19. doi: 10.1001/jama.2019.16004). The study was conducted at six sites in the United States and one site each in Ireland, Germany, and Canada between June 2017 and September 2018. “Because of the importance of long-term neurodevelopment, all surviving infants will be followed up to determine developmental outcomes at 22 to 26 months’ corrected gestational age,” they said.
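As a back-of-the-envelope check on the 8% vs. 3% difference in severe intraventricular hemorrhage, the comparison can be approximated with a two-proportion z-test. The counts below are reconstructed from the reported percentages under an assumed even split of the 474 infants, so this is an illustrative sketch, not the trial's actual analysis.

```python
from math import erf, sqrt

def two_proportion_z_test(x1, n1, x2, n2):
    """Two-sided two-proportion z-test; returns (z, p_value)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)          # pooled proportion under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF (built from erf)
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Approximate counts: 474 infants split evenly, 8% vs. 3% event rates
n_milking = n_delayed = 237
ivh_milking = round(0.08 * n_milking)   # ~19 events
ivh_delayed = round(0.03 * n_delayed)   # ~7 events

z, p = two_proportion_z_test(ivh_milking, n_milking, ivh_delayed, n_delayed)
print(f"z = {z:.2f}, p = {p:.4f}")
```

Even with these rough counts, the p-value falls below .05, consistent with the signal of risk the researchers describe.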
The study was terminated early, which prevents definitive conclusions, the researchers noted, but a new study has been approved to compare umbilical cord milking with delayed umbilical cord clamping in infants of 30-32 weeks’ gestational age, they said.
“Although the safety of placental transfusion for the mother seems well established, it remains unclear which method of providing placental transfusion is best for the infant: delayed clamping and cutting the cord or milking the intact cord. The latter provides a transfusion more rapidly, which may facilitate initiation of resuscitation when needed,” Heike Rabe, MD, of the University of Sussex, Brighton, and Ola Andersson, PhD, of Lund (Sweden) University, wrote in an editorial accompanying the two studies (JAMA. 2019 Nov 19;322:1864-5. doi: 10.1001/jama.2019.16003).
The 8% incidence of severe intraventricular hemorrhage in the umbilical milking group in the study by Katheria and colleagues was higher than the 5.2% in a recent Cochrane review, but the 3% incidence of severe intraventricular hemorrhage in the delayed group was lower than the 4.5% in the Cochrane review, they said.
“Umbilical cord milking has been used in many hospitals without an increase in intraventricular hemorrhage being observed,” they noted.
“The study by Purisch et al. demonstrated the safety of delayed cord clamping for mothers delivering by cesarean at term,” the editorialists wrote. Studies are underway to identify the best techniques for cord clamping, they said.
“In the meantime, clinicians should follow the World Health Organization recommendation to delay cord clamping and cutting for 1 to 3 minutes for term infants and for at least 60 seconds for preterm infants to prevent iron deficiency and potentially enable more premature infants to survive,” they concluded.
Dr. Purisch received funding from the Maternal-Fetal Medicine Fellow Research Fund for the first study. Coauthor Cynthia Gyamfi-Bannerman, MD, reported receiving grants from the Eunice Kennedy Shriver National Institute of Child Health and Human Development and the Society for Maternal-Fetal Medicine/AMAG Pharmaceuticals, and personal fees from Sera Prognostics outside the submitted work. The second study was supported by NICHD in a grant to Dr. Katheria, who had no financial conflicts to disclose. Coauthor Gary Cutter, PhD, had numerous ties to pharmaceutical companies. The editorialists had no financial conflicts to disclose.
SOURCES: Purisch SE et al. JAMA. 2019 Nov 19. doi: 10.1001/jama.2019.15995; Katheria A et al. JAMA. 2019 Nov 19. doi: 10.1001/jama.2019.16004; Rabe H and Andersson O. JAMA. 2019 Nov 19; 322:1864-5.
FROM JAMA
Heavy metals linked with autoimmune liver disease
BOSTON – Exposure to heavy metals from natural and man-made sources may contribute to development of autoimmune liver disease, according to a recent U.K. study involving more than 3,500 patients.
Coal mines were particularly implicated, as they accounted for 39% of the risk of developing primary biliary cholangitis (PBC), reported lead author Jessica Dyson, MBBS, of Newcastle (England) University, and colleagues.
“We know that the etiology of autoimmune liver disease remains unclear, but we’re increasingly coming to understand that it’s likely to be a complex interplay between genetic and environmental factors,” Dr. Dyson said during a presentation at the annual meeting of the American Association for the Study of Liver Diseases. Showing a map of England, she pointed out how three autoimmune liver diseases – PBC, primary sclerosing cholangitis (PSC), and autoimmune hepatitis (AIH) – each have unique clusters of distribution. “This implies that environmental exposure may have a role in disease pathogenesis.”
To investigate this possibility, Dr. Dyson and colleagues used structural equation modeling to look for associations between the above three autoimmune liver diseases, socioeconomic status, and environmental factors. Specific environmental factors included soil concentrations of heavy metals (cadmium, arsenic, lead, manganese, and iron), coal mines, lead mines, quarries, urban areas, traffic, stream pH, and landfills.
The study was conducted in the northeast of England, where migration rates are low, Dr. Dyson said. From this region, the investigators identified patients with PBC (n = 2,150), AIH (n = 963), and PSC (n = 472). Conceptual models were used to examine relationships between covariates and prevalence of disease, with good models exhibiting a root-mean-square error of approximation less than 0.05 and covariate significance at the 95% level. After adjusting for population density, comparative fit was used to measure variation within each model.
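For readers unfamiliar with the fit criterion, the root-mean-square error of approximation is computed from a fitted model's chi-square statistic, degrees of freedom, and sample size. The chi-square and df values below are hypothetical; the sketch simply illustrates the sub-0.05 "good fit" threshold the investigators used.

```python
from math import sqrt

def rmsea(chi_square, df, n):
    """Root-mean-square error of approximation for a fitted model."""
    return sqrt(max(0.0, (chi_square - df) / (df * (n - 1))))

def fit_label(value, cutoff=0.05):
    # The study treated values below 0.05 as indicating good model fit.
    return "good fit" if value < cutoff else "poor fit"

# Hypothetical chi-square/df for a model fitted to ~3,585 patients
value = rmsea(chi_square=85.0, df=40, n=3585)
print(round(value, 3), fit_label(value))
```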
The best model for PBC revealed the aforementioned link with coal mines, proximity to which accounted for 39% of the pathogenesis of PBC. High levels of cadmium in soil had an interactive role with coal mines, and itself directly contributed 22% of the risk of PBC; however, Dr. Dyson noted that, while many cadmium-rich areas had high rates of PBC, not all did.
“This demonstrates the complexity of causality of disease, and we certainly can’t say that cadmium, in its own right, is a direct cause and effect,” Dr. Dyson said. “But I think [cadmium] certainly potentially is one of the factors at play.”
For AIH, coal mines contributed less (6%), although cadmium still accounted for 22% of variation of disease, as did alkaline areas. Finally, a significant link was found between PSC and regions with high arsenic levels.
“To conclude, our data suggest that heavy metals may be risk factors for autoimmune liver disease,” Dr. Dyson said. “There are a number of exposure routes that may be pertinent to patients, from heavy metals occurring via natural sources, and also via virtue of human activity, such as burning of fossil fuels, heavy-metal production, and pesticides.” Dr. Dyson emphasized this latter route, as some rural areas, where pesticide use is common, had high prevalence rates of autoimmune liver disease.
Dr. Dyson went on to put her findings in context. “Heavy metals are a well-recognized cause of immune dysregulation and epithelial injury and are actually actively transported into the bile, and that may be particularly relevant in terms of cholangiopathies. And this leads us to the possibility of interventions to reduce toxic exposure that may modify risk of disease.”
Looking to the future, Dr. Dyson described plans to build on this research with measurements of heavy metals in tissues, serum, and urine.
The investigators reported no relevant disclosures.
SOURCE: Dyson J et al. The Liver Meeting 2019, Abstract 48.
REPORTING FROM THE LIVER MEETING 2019
Not all lung cancer patients receive treatment
In the United States, just over 15% of patients with lung cancer receive no treatment, according to the American Lung Association.
“This can happen for multiple reasons, such as the tumor having spread too far, poor health, or refusal of treatment,” the ALA said in its 2019 State of Lung Cancer report.
On the state level, the disparities were considerable. Arizona had the highest rate of nontreatment at 30.4%, followed by the neighboring states of New Mexico (24.2%) and California (24.0%). The lowest rate in the country, 8.0%, came from North Dakota, with Missouri next at 9.4% and Maine third at 9.6%, based on data from the North American Association of Central Cancer Registries’ December 2018 data submission, which covered the years from 2012 to 2016.
Although some cases of lung cancer may be unavoidable, “no one should go untreated because of lack of provider or patient knowledge, stigma associated with lung cancer, fatalism after diagnosis, or cost of treatment. Dismantling these and other barriers is important to reducing the percent of untreated patients,” the ALA said.
Inside Dr. Swathi Eluri’s journey to physician-scientist
Inspired by her father, who was diagnosed with inflammatory bowel disease (IBD), Swathi Eluri, MD, spent time during her college days at the University of North Carolina (UNC), Chapel Hill, in a GI basic science lab hoping to better understand this condition.
After a stint at Johns Hopkins Hospital in Baltimore for her medical degree and residency, Dr. Eluri returned to UNC Chapel Hill for her GI fellowship, where she remains today as an assistant professor of medicine in the division of gastroenterology and hepatology. Dr. Eluri’s research is focused on increasing early detection of esophageal cancer, to allow for earlier and more effective treatment. The AGA Research Foundation was proud to support Dr. Eluri’s work with a 2018 AGA Research Scholar Award.
Learn more about Dr. Swathi Eluri’s inspiring career by visiting: https://www.gastro.org/news/inside-dr-swathi-eluris-journey-to-physician-scientist.
Help AGA build a community of investigators through the AGA Research Foundation
Your donation to the AGA Research Foundation can fund future success stories by keeping young scientists working to advance our understanding of digestive diseases. Make your year-end donation today at www.gastro.org/donateonline.
Ixekizumab effective over long term for psoriasis
Ixekizumab provided sustained efficacy in patients with moderate to severe psoriasis through 4 years of treatment, according to a study published in the Journal of the European Academy of Dermatology and Venereology.
The UNCOVER-3 trial was a double-blind, multicenter, phase 3 study in 1,346 individuals with moderate to severe psoriasis who were randomized to placebo, 80 mg of ixekizumab every 2 or 4 weeks, or 50 mg of etanercept twice weekly. At week 12, all patients were transferred to 80 mg of ixekizumab every 4 weeks for the long-term extension period, and after 60 weeks, some were dose adjusted to 80 mg every 2 weeks.
At week 204, among the 385 patients who were receiving treatment either every 2 or 4 weeks, 48.3% achieved Psoriasis Area and Severity Index (PASI) 100 according to the modified nonresponder imputation method of summarizing efficacy – in which patients who drop out of the study are counted as nonresponders – while 66.4% achieved PASI 90 and 82.8% achieved PASI 75.
Using the as-observed method for assessing efficacy, 67.1% of patients achieved PASI 100, 87.8% achieved PASI 90, and 98.2% achieved PASI 75. Using the multiple-imputation method, those same figures were 52.7%, 73.3%, and 94.8% respectively.
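A toy example may help distinguish these summary methods: when dropouts are counted as failures (nonresponder imputation), the response rate is necessarily no higher than the as-observed rate among patients remaining in the study. The small cohort below is invented purely for illustration.

```python
# Hypothetical cohort: True/False = responder status at week 204,
# None marks a patient who dropped out before that visit.
outcomes = [True, True, False, True, None, True, None, False, True, None]

randomized = len(outcomes)
completers = [o for o in outcomes if o is not None]

# As-observed: response rate among patients still in the study.
as_observed = sum(completers) / len(completers)

# Modified nonresponder imputation: dropouts counted as failures.
nonresponder_imputed = sum(o is True for o in outcomes) / randomized

print(f"as-observed: {as_observed:.0%}, NRI: {nonresponder_imputed:.0%}")
```

This is why the as-observed figures in the study (e.g., 67.1% for PASI 100) exceed the corresponding nonresponder-imputed figures (48.3%).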
The investigators also saw consistently high response rates according to the static Physician’s Global Assessment (sPGA) score, which ranges from 0 (clear) to 5 (severe disease). Using the as-observed, multiple-imputation, and modified nonresponder imputation methods, 68.9%, 54.6%, and 49.7% of patients, respectively, achieved a score of 0.
The study also saw complete resolution in 95.8% of patients with baseline palmoplantar involvement, 75.9% of those with baseline nail involvement, and 87.1% of those with scalp involvement, using the as-observed method.
“These results corroborate the results that were reported previously in patients with moderate to severe psoriasis,” wrote Mark G. Lebwohl, MD, professor and chair of the department of dermatology at the Icahn School of Medicine at Mount Sinai, New York, and coauthors. “Sustained high response was observed with all the efficacy parameters such as PASI 75, 90, 100, and sPGA (0) or (0, 1), regardless of the statistical analyses ... performed.”
The majority of adverse events were mild or moderate, but serious adverse events were seen in 18.1% of patients in the long-term extension, and 9.4% of patients stopped taking the study drug because of adverse events.
The most frequently reported adverse events of special interest were infections, such as nasopharyngitis (28.5% of patients), upper respiratory tract infections (10.8%), and Candida infection (5.2%). However, clinically significant neutropenia only occurred in 0.7% of patients, and malignancies occurred in 2.2% of patients.
“These safety findings support the consistency in the safety profile of [ixekizumab] treatment with no new signals occurring even after the longer exposure,” the authors wrote.
The study was funded by Eli Lilly, which manufactures ixekizumab. Two authors were employees of Eli Lilly and own company stocks. The remaining three authors reported receiving research funding and consultancies from different pharmaceutical companies, including Eli Lilly.
SOURCE: Lebwohl MG et al. J Eur Acad Dermatol Venereol. 2019 Sep 3. doi: 10.1111/jdv.15921.
FROM THE JOURNAL OF THE EUROPEAN ACADEMY OF DERMATOLOGY AND VENEREOLOGY
Key clinical point: Long-term follow-up shows ixekizumab efficacy in moderate to severe psoriasis.
Major finding: At 4 years, 66.5% of patients on ixekizumab for psoriasis achieved Psoriasis Area and Severity Index 100.
Study details: A long-term extension of the UNCOVER-3 double-blind, multicenter, phase 3 study in 1,346 individuals with psoriasis.
Disclosures: The study was funded by Eli Lilly, which manufactures ixekizumab. Two authors were employees of Eli Lilly and own company stocks. The remaining three authors reported receiving research funding and consultancies from different pharmaceutical companies, including Eli Lilly.
Source: Lebwohl MG et al. J Eur Acad Dermatol Venereol. 2019 Sep 3. doi: 10.1111/jdv.15921.
RNA inhibitors silence two new targets in dyslipidemia
PHILADELPHIA – A novel treatment strategy tackling hypertriglyceridemia via long-acting agents targeting two specific culprit genes caused a stir based on the highly encouraging early results of two small proof-of-concept studies presented at the American Heart Association scientific sessions.
ARO-APOC3 is a small interfering RNA (siRNA) molecule targeting the apolipoprotein C-III gene (APOC3) specifically within hepatocytes, while ARO-ANG3 is an siRNA targeting hepatic angiopoietin-like protein 3 (ANG3). ARO-APOC3 is being developed as a potential treatment for familial chylomicronemia syndrome, a rare disorder associated with triglyceride levels in excess of 800 mg/dL, as well as for patients with severe hypertriglyceridemia and associated pancreatitis – a far more common condition – and ultimately, perhaps, for patients with hypertriglyceridemia and heart disease. ARO-ANG3, which lowers very-low-density lipoprotein (VLDL) and LDL cholesterol as well as HDL cholesterol levels, is under development as a treatment for high triglycerides, homozygous familial hypercholesterolemia, nonalcoholic fatty liver disease, and metabolic diseases.
Christie M. Ballantyne, MD, presented the results of the phase 1/2a study of ARO-APOC3, which included 40 healthy subjects who received a single subcutaneous injection of the RNA inhibitor at 10, 25, 50, or 100 mg and were followed for 16 weeks. At the highest dose, it reduced serum APOC3 levels by 94%, triglyceride levels by 64%, LDL cholesterol levels by up to 25%, and VLDL by a maximum of 68%, while boosting HDL cholesterol levels by up to 69%. These substantial changes in lipids remained stable through week 16.
The observed prolonged duration of effect provides a potential opportunity for dosing quarterly or perhaps even twice a year. This would be ideal for patients who have problems with adherence to daily therapy with statins and other oral agents, observed Dr. Ballantyne, professor of medicine and professor of molecular and human genetics at Baylor College of Medicine, Houston.
Gerald F. Watts, MBBS, DM, DSc, PhD, presented a separate phase 1/2a, 16-week study of a single dose of ARO-ANG3 at 35, 100, 200, or 300 mg in 40 dyslipidemic subjects who were not on background lipid-lowering therapy. The impact on lipids was similar to that achieved by silencing apolipoprotein C-III, except that the reduction in LDL cholesterol was larger and ARO-ANG3 reduced HDL cholesterol in dose-dependent fashion by up to 26%. As in the ARO-APOC3 study, the safety profile of the ANG3 RNA inhibitor raised no concerns, with no study dropouts and no serious adverse events, added Dr. Watts, professor of medicine at the University of Western Australia, Perth.
Discussant Daniel J. Rader, MD, noted that there is an unmet need for hypertriglyceridemia-lowering therapies, because elevated triglycerides can cause pancreatitis as well as coronary disease.
“These siRNA molecules are catalytic: They can go around and destroy multiple aspects of the target RNAs in a way that provides substantial longevity of effect, which is quite remarkable,” explained Dr. Rader, professor of molecular medicine and director of the preventive cardiology program at the University of Pennsylvania, Philadelphia.
Hypertriglyceridemia is often a challenge to treat successfully in clinical practice, so the siRNA studies drew considerable attention, not only for the impressive size and durability of the lipid changes, but also because of the way in which the target genes were identified, a process that began by genetic analysis of individuals with inherently low levels of APOC3 and ANG3.
“One of the really interesting parts of this story is the rapidity with which we went from target identification to therapeutics, now moving into phase 1 and 2 trials. It’s happening much more rapidly than we’ve ever seen before,” commented AHA scientific sessions program chair Donald Lloyd-Jones, MD, senior associate dean for clinical and translational research and chair of the department of preventive medicine at Northwestern University, Chicago.
Still, he was quick to inject a cautionary note. “These genomic studies can show us that having lower levels of these proteins is associated with lower risk. But that doesn’t necessarily mean that lowering levels of these proteins will lower risk, and it certainly doesn’t tell us anything about potential safety concerns.”
In an interview, AHA spokesperson Jennifer Robinson, MD, made a similar point: “We have had lots of fibrate trials in which we’ve lowered triglycerides, and they didn’t really work.”
Yet she, too, was clearly caught up in the thrill of the early evidence of a novel means of treating new targets in dyslipidemia.
“We’re on the cusp of the genetic revolution,” declared Dr. Robinson, professor of epidemiology and director of the preventive and intervention center at the University of Iowa, Iowa City. “For us science nerds, this is so exciting. It’s so cool. The brilliance of these compounds is they have a very focused target in a very focused organ. If you’re just in the liver, you’re limiting off-target effects, so the safety issue should be better than with what we have now.”
Dr. Rader commented that plenty of questions remain to be answered about siRNA therapy for hyperlipidemia. These include which target – APOC3 or ANG3 – is the more effective for treating severe hypertriglyceridemia and/or for preventing major cardiovascular events, how frequently these agents will need to be dosed, whether there’s a clinical downside to the substantial HDL cholesterol lowering seen with silencing of ANG3, and whether the APOC3 that’s produced in the intestine – and which isn’t touched by hepatocentric ARO-APOC3 – will cause problems.
Dr. Ballantyne reported serving as a consultant to Arrowhead Pharmaceuticals, which is developing the RNA inhibitors for hypertriglyceridemia, as well as to numerous other pharmaceutical companies. Dr. Watts has received research grants from Amgen and Sanofi-Regeneron.
REPORTING FROM AHA 2019
Burnout – what it is, why we’re talking about it, and what it has to do with you
There has been a great deal of evolving research and writing about physician burnout. Horror stories about long work hours, frustrations with work environments, and administrative challenges are everywhere – on social media, in medical journals, in the mainstream media. While burnout is not new, the increased attention and its consequences for the health care system are exposing not only the importance of physician well-being but also the impact of burnout on patient care.
What is burnout?
Burnout was first identified in the 1970s and further refined by Christina Maslach, PhD, as a syndrome caused by prolonged exposure to chronic interpersonal stress, with three key dimensions: 1) overwhelming exhaustion, 2) feelings of cynicism and detachment from the job, and 3) a sense of ineffectiveness and lack of accomplishment.
The Maslach Burnout Inventory (MBI), a 22-item questionnaire developed in the 1980s, has become the standard survey for identifying burnout in research settings. However, a two-item questionnaire has been used with good correlation to the domains of emotional exhaustion and depersonalization: 1) “I feel burned out from my work” and 2) “I have become more callous toward people since I took this job.” Responses are graded on a seven-point scale from “never” to “every day”; the likelihood of burnout is high when responses are once a week or more frequent (i.e., a few times a week or every day).
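The scoring rule described above can be sketched as follows. The frequency labels and the choice to flag high likelihood when either item is endorsed weekly or more are illustrative assumptions, not the validated instrument itself:

```python
# Hypothetical scoring of the two-item burnout screen described in the text.
# The intermediate label strings are an assumed encoding of the 7-point scale.

FREQUENCY = [
    "never", "a few times a year", "once a month", "a few times a month",
    "once a week", "a few times a week", "every day",
]
CUTOFF = FREQUENCY.index("once a week")  # "once a week" or more frequent

def likely_burnout(exhaustion_response, depersonalization_response):
    """High likelihood of burnout if either item is endorsed weekly or more."""
    return (FREQUENCY.index(exhaustion_response) >= CUTOFF
            or FREQUENCY.index(depersonalization_response) >= CUTOFF)

print(likely_burnout("a few times a month", "once a month"))  # False
print(likely_burnout("a few times a week", "never"))          # True
```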
Why are we talking about burnout?
Burnout has far-reaching consequences. It not only affects the individual but also that person’s interpersonal relationships with family and friends. Additionally, burnout affects patient care and the overall health care system.
Let’s imagine the scenario in which you arrive at your office on a Monday morning and open your electronic health record (EHR). You tend to arrive at work about 45 minutes prior to your first patient to try to catch up with messages. As you wait for your computer to log in (5 minutes? 8 minutes? 12 minutes?) and Citrix to connect, you are eating your breakfast bar and drinking mediocre coffee because you still haven’t had time to fix your coffee machine at home (should you just order a new one on Amazon and contribute to the world’s growing trash problem?). Once you log in to your EHR, the first three messages are about missing charges and charts still left open – yes, you haven’t corrected the resident’s note from clinic on Friday afternoon, yet. The next two messages are about insurance denials for a prescription or a procedure or an imaging study. You decide that perhaps you should change gears and check your work email. The first email is a reminder that vacation requests for the next 6 months are due by end of business today and any requests made after today must go through some administrative approval process that seems inefficient and almost punitive (mainly because you forgot to discuss this with your partner and family and you are feeling somewhat guilty but resentful of this arbitrary deadline that was announced last week). Your pager promptly buzzes to announce that the first patient has arrived and is ready for you to consult. As you walk over to the procedure area, you remind yourself to finalize the resident’s note from Friday, file the missing charges, close those charts, and find some reasonable evidence to justify the medication/test/procedure so that your patient is not saddled with a large bill. And as you walk up to your first patient of the morning, you are greeted by a nurse who indicates the patient doesn’t have a ride home postprocedure and what do you want to do?
Does any of this sound remotely familiar? In today’s medical practice, there are multiple factors that contribute toward burnout, including excessive clerical burden, inefficient EHR and computer systems, the sense of loss of control and flexibility, along with problems associated with work-life balance.
What does it have to do with you?
According to two surveys administered by the AGA and ACG, burnout occurs in approximately 50% of gastroenterologists. It also appears that burnout starts as early as the fellowship years, when there is even less control, long work hours, and similar demands with regard to work-life balance.
Burnout is prevalent among gastroenterologists, and it can start early. There is evidence to suggest that procedurally based specialties are at higher risk because of the added possibility of complications associated with procedures. It is important for us to recognize signs of burnout not only in ourselves but also in our colleagues, and to understand what personal and system-related triggers and solutions are present. The consequences of burnout have been reported to include earlier retirement and/or career transitions and are associated with depression, the risk of motor vehicle accidents, substance abuse, and suicide.
At the systems level, changes can be made to mitigate known pressures that contribute to burnout. There are efforts such as improved workflow and specific quality improvement initiatives that can improve physician satisfaction. Ensuring adequate support for physicians with aids such as scribes and appropriate support staff and para–health care workers can significantly decrease the administrative burden on clinicians and improve productivity and patient care.
At a broader level, talking about burnout, recognizing its signs, and ensuring that appropriate support is available for physicians who are at risk or already experiencing burnout can arise from leadership at both the institutional level and the larger organizational level, where there is greater investment in the health and well-being of physicians. For example, societies have the negotiating power to advocate to simplify tasks unique to gastroenterologists with regard to reimbursement or EHR pathways. Academic centers can incorporate classes and forums for medical students, trainees, and practicing physicians that focus on health and well-being.
At the individual level, we should be able to reach out to our colleagues to ask for help or to see if they need help. We also need to better identify what our triggers are and what are remedies for these triggers. It’s not normal to be in a profession in which you have a constant sensation that you are drowning or barely treading water but I am sure many of us have felt this at some point if not with some regularity. So as a practitioner, what coping mechanisms do you have in place? There has been some work with respect to adaptive and maladaptive coping mechanisms at the individual and organizational levels. Maladaptive mechanisms can result in significant personal health issues including hypertension, substance abuse, and depression; it can also further exacerbate burnout symptoms in the provider and result in patient-related complications, shortened provider career trajectories, and increased strains on provider’s interpersonal relationships. I think it is an important point here to make that there are likely sex differences in maladaptive coping mechanisms and manifestations of burnout with work that suggests that women are more prone to depression, isolation, and suicide compared with male colleagues.
With respect to adaptive coping mechanisms, the most common theme is to not isolate yourself or others. Ask a colleague how s/he is doing – we are all equally busy but sometimes just popping into someone’s office to say hello is enough to help another person (and yourself) connect. Additionally, it’s not too much to ask for professional help. What we do is high stakes and taking care of ourselves usually comes behind the patient and our families. But who takes care of the caregiver? Working on interpersonal relationships can strengthen your resilience and coping techniques to the stressors we face on a daily basis. Ultimately, we are all in this together – burnout affects all of us no matter what hat you want to wear – provider, colleague, patient, or friend.
Dr. Mason is a gastroenterologist at the Virginia Mason Medical Center in Seattle.
There has been a great deal of evolving research and writing about physician burnout. Horror stories about long work hours, frustrating work environments, and administrative challenges are everywhere – social media, medical journals, mainstream media. While burnout is not new, the increased attention and its consequences for the health care system are exposing not only the importance of physician well-being but also the impact of burnout on patient care.
What is burnout?
Burnout was first identified in the 1970s and further refined by Christina Maslach, PhD, as a syndrome caused by prolonged exposure to chronic interpersonal stress, with three key dimensions: 1) overwhelming exhaustion, 2) feelings of cynicism and detachment from the job, and 3) a sense of ineffectiveness and lack of accomplishment.
The Maslach Burnout Inventory (MBI), a 22-item questionnaire developed in the 1980s, has become the standard survey for identifying burnout in research settings. However, a two-item questionnaire has been used with good correlation to the domains of emotional exhaustion and depersonalization; the two items are: 1) I feel burned out from my work, and 2) I have become more callous toward people since I took this job. Responses are graded on a frequency scale from never to every day, with five points in between; the likelihood of burnout is high when responses are once a week or more frequent (i.e., a few times a week or every day).
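The scoring rule for the two-item screen can be sketched in a few lines. This is a hypothetical illustration, not an official MBI implementation: the seven-point frequency labels follow the standard MBI frequency scale, and the cutoff (endorsing either item once a week or more often) is taken from the description above.

```python
# Hypothetical sketch of the two-item burnout screen described above.
# The scale labels are an assumption based on the standard MBI frequency
# scale; the cutoff ("once a week" or more frequent) is from the text.

FREQUENCY_SCALE = [
    "never",                       # 0
    "a few times a year or less",  # 1
    "once a month or less",        # 2
    "a few times a month",         # 3
    "once a week",                 # 4
    "a few times a week",          # 5
    "every day",                   # 6
]

ITEMS = [
    "I feel burned out from my work.",
    "I have become more callous toward people since I took this job.",
]

CUTOFF = FREQUENCY_SCALE.index("once a week")  # responses >= 4 flag burnout

def burnout_likely(responses):
    """Return True if either item is endorsed 'once a week' or more often.

    responses: one integer (0-6) per item on the frequency scale above.
    """
    if len(responses) != len(ITEMS):
        raise ValueError("expected one response per item")
    return any(r >= CUTOFF for r in responses)

# Example: exhaustion "a few times a week", depersonalization "once a month or less"
print(burnout_likely([5, 2]))  # → True
```

Note that the screen flags burnout when either domain crosses the threshold; it does not require both.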
Why are we talking about burnout?
Burnout has far-reaching consequences. It not only affects the individual but also that person’s interpersonal relationships with family and friends. Additionally, burnout affects patient care and the overall health care system.
Let’s imagine the scenario in which you arrive at your office on a Monday morning and open your electronic health record (EHR). You tend to arrive at work about 45 minutes prior to your first patient to try to catch up with messages. As you wait for your computer to log in (5 minutes? 8 minutes? 12 minutes?) and Citrix to connect, you are eating your breakfast bar and drinking mediocre coffee because you still haven’t had time to fix your coffee machine at home (should you just order a new one on Amazon and contribute to the world’s growing trash problem?). Once you log in to your EHR, the first three messages are about missing charges and charts still left open – yes, you haven’t corrected the resident’s note from clinic on Friday afternoon, yet. The next two messages are about insurance denials for a prescription or a procedure or an imaging study. You decide that perhaps you should change gears and check your work email. The first email is a reminder that vacation requests for the next 6 months are due by end of business today and any requests made after today must go through some administrative approval process that seems inefficient and almost punitive (mainly because you forgot to discuss this with your partner and family and you are feeling somewhat guilty but resentful of this arbitrary deadline that was announced last week). Your pager promptly buzzes to announce that the first patient has arrived and is ready for you to consult. As you walk over to the procedure area, you remind yourself to finalize the resident’s note from Friday, file the missing charges, close those charts, and find some reasonable evidence to justify the medication/test/procedure so that your patient is not saddled with a large bill. And as you walk up to your first patient of the morning, you are greeted by a nurse who indicates the patient doesn’t have a ride home postprocedure and what do you want to do?
Does any of this sound remotely familiar? In today’s medical practice, there are multiple factors that contribute toward burnout, including excessive clerical burden, inefficient EHR and computer systems, the sense of loss of control and flexibility, along with problems associated with work-life balance.
What does it have to do with you?
According to two surveys administered by the AGA and ACG, burnout occurs in approximately 50% of gastroenterologists. It also appears that burnout starts as early as fellowship, when trainees face even less control, long work hours, and similar demands with regard to work-life balance.
Burnout is prevalent among gastroenterologists, and it can start early. There is evidence to suggest that procedurally based specialties are at higher risk because of the added possibility of complications associated with procedures. It is important for us to recognize signs of burnout not only in ourselves but also in our colleagues, and to understand what personal and system-related triggers and solutions are present. The reported consequences of burnout include earlier retirement and/or career transitions, and burnout is associated with depression, the risk of motor vehicle accidents, substance abuse, and suicide.
At the systems level, changes can be made to mitigate known pressures that contribute to burnout. Efforts such as improved workflow and specific quality improvement initiatives can improve physician satisfaction. Ensuring adequate support for physicians – such as scribes, appropriate support staff, and para–health care workers – can significantly decrease the administrative burden on clinicians and improve productivity and patient care.
At a broader level, talking about burnout, recognizing its signs, and ensuring that appropriate support is available for physicians who are at risk or already experiencing burnout can arise from leadership at both the institutional and the larger organizational level, where there is greater investment in the health and well-being of physicians. For example, societies have the negotiating power to advocate for simplifying tasks unique to gastroenterologists, such as reimbursement or EHR pathways. Academic centers can incorporate classes and forums for medical students, trainees, and practicing physicians that focus on health and well-being.
At the individual level, we should be able to reach out to our colleagues to ask for help or to see if they need help. We also need to better identify what our triggers are and what the remedies for those triggers might be. It’s not normal to be in a profession in which you have a constant sensation that you are drowning or barely treading water, but I am sure many of us have felt this at some point, if not with some regularity. So as a practitioner, what coping mechanisms do you have in place? There has been some work on adaptive and maladaptive coping mechanisms at the individual and organizational levels. Maladaptive mechanisms can result in significant personal health issues, including hypertension, substance abuse, and depression; they can also further exacerbate burnout symptoms in the provider and result in patient-related complications, shortened provider career trajectories, and increased strain on providers’ interpersonal relationships. An important point to make here is that there are likely sex differences in maladaptive coping mechanisms and manifestations of burnout, with work suggesting that women are more prone to depression, isolation, and suicide compared with male colleagues.
With respect to adaptive coping mechanisms, the most common theme is to not isolate yourself or others. Ask a colleague how s/he is doing – we are all equally busy but sometimes just popping into someone’s office to say hello is enough to help another person (and yourself) connect. Additionally, it’s not too much to ask for professional help. What we do is high stakes and taking care of ourselves usually comes behind the patient and our families. But who takes care of the caregiver? Working on interpersonal relationships can strengthen your resilience and coping techniques to the stressors we face on a daily basis. Ultimately, we are all in this together – burnout affects all of us no matter what hat you want to wear – provider, colleague, patient, or friend.
Dr. Mason is a gastroenterologist at the Virginia Mason Medical Center in Seattle.