ACIP approves flu vaccine recommendations for 2019-2020 season

All individuals aged 6 months and older should receive the influenza vaccine by the end of October next season, according to the Centers for Disease Control and Prevention’s Advisory Committee on Immunization Practices (ACIP). The committee voted unanimously to accept minor updates to the ACIP flu recommendations for the 2019-2020 season, but no major changes were made from recent years.

The past flu season was moderate overall, but notable for two waves of viral infections of similar magnitude, one with H1N1 and another with H3N2, said Lynette Brewer of the CDC’s National Center for Immunization and Respiratory Diseases, who presented data on last year’s flu activity.

Last year’s vaccine likely prevented between 40,000 and 90,000 hospitalizations, but mostly reduced the burden of H1N1 disease and provided no real protection against H3N2, she said.

The recommended H3N2 component for next season is A/Kansas/14/2017–like virus, which is genetically similar to the H3N2 that circulated last year.

Lisa Grohskopf, MD, of the CDC’s influenza division, presented the minor adjustments, which included the changes in vaccine composition for next year, some licensure changes, and a new table summarizing dose volumes. The language was also updated to advise vaccination of all eligible individuals by the end of October; individuals who need two doses should receive the first dose as soon as vaccine becomes available, in July or August if possible. The updated language also clarifies that 8-year-olds who need two doses should receive the second dose, even if they turn 9 between the two doses.

Additional guidance updates approved by the committee included harmonizing the language on groups that should be prioritized for vaccination in the event of limited supply, making it more consistent with the 2011 ACIP Recommendations for the Immunization of Health Care Personnel.

The committee also voted unanimously to accept the proposed influenza vaccine in the Vaccines for Children program; there were no changes in recommended dosing intervals, dosages, contraindications, or precautions, according to Frank Whitlach of the National Center for Immunization and Respiratory Diseases, who presented the Vaccines for Children information.

The ACIP members had no financial conflicts to disclose.

REPORTING FROM AN ACIP MEETING

FDA issues warning on insulin pump cybersecurity weakness

The Food and Drug Administration has issued a warning to patients and health care providers that a pair of Medtronic insulin pumps are being recalled because of potential cybersecurity risks, according to a press release.

The affected devices are the MiniMed 508 and MiniMed Paradigm series insulin pumps, which wirelessly connect to both the patient’s blood glucose meter and continuous glucose monitoring system. A remote controller and CareLink USB – a thumb-sized wireless device that plugs into a computer – are used to operate the devices; the remote controller sends insulin dosing commands to the pump and the CareLink USB can be used to download and share data with the patient’s health care provider.

The potential risk involves the wireless communication between the pumps and related devices such as the blood glucose meter and remote controller. The FDA has identified a cybersecurity vulnerability within the insulin pumps, and is concerned that a third party could connect to the device and change the pump’s settings. Insulin could be given in excess, causing hypoglycemia, or stopped, causing hyperglycemia or diabetic ketoacidosis.

Medtronic has identified 4,000 patients in the United States who are affected by the security weakness. Because the company is unable to adequately update or patch the device to remove the weakness, the FDA is working to ensure that Medtronic addresses the issue in any way possible, including helping patients with affected pumps switch to newer models.

“While we are not aware of patients who may have been harmed by this particular cybersecurity vulnerability, the risk of patient harm if such a vulnerability were left unaddressed is significant. The safety communication issued today contains recommendations for what actions patients and health care providers should take to avoid the risk this vulnerability could pose,” said Suzanne Schwartz, MD, MBA, deputy director of the Office of Strategic Partnerships and Technology Innovation.

Find the full press release on the FDA website.

ACIP endorses catch-up hepatitis A vaccinations

The Centers for Disease Control and Prevention’s Advisory Committee on Immunization Practices (ACIP) voted unanimously in support of three recommendations for the use of hepatitis A vaccines.

The committee recommended catch-up vaccination at any age for all children aged 2-18 years who had not previously received hepatitis A vaccination, recommended that all persons with HIV aged 1 year and older be vaccinated with the hepatitis A vaccine, and approved updated language for the full hepatitis A vaccine statement, “Prevention of Hepatitis A Virus Infection in the United States: Recommendations of the Advisory Committee on Immunization Practices.”

Catch-up vaccination will expand coverage to adolescents who might have missed it, and data show that vaccine effectiveness is high and rates of adverse events are low in children and adolescents, said Noele Nelson, MD, of the CDC’s National Center for HIV/AIDS, Viral Hepatitis, STD, and TB Prevention, who presented the recommendations to the committee. “Recent outbreaks are occurring primarily among adults,” and many cases are among persons who use drugs or are homeless, she added.

Several committee members noted that the specific recommendations for catch-up vaccination of children and teens and for vaccination of patients with HIV offer more opportunities for protection than risk-based recommendations. Catch-up vaccination of adolescents is “more effective than tracking down high-risk adults later in life,” noted Grace Lee, MD, of Lucile Packard Children’s Hospital at Stanford, Calif.

The committee also recommended that all persons with HIV aged 1 year and older be vaccinated with the hepatitis A vaccine. Data on persons with HIV show that approximately 60% have at least one risk factor for hepatitis A, such as male-to-male sexual contact or injection drug use, said Dr. Nelson. Data also show that individuals with HIV are at increased risk for complications if they contract hepatitis A.

The committee’s approval of the full hepatitis A vaccine statement included one notable change – the removal of clotting factor disorders as a high-risk group. The risk has decreased over time based on improvements such as better screening of source plasma, and this group is now at no greater risk than the general population, according to work group chair Kelly Moore, MD, of Vanderbilt University, Nashville, Tenn.

The ACIP members had no financial conflicts to disclose.

REPORTING FROM AN ACIP MEETING

Severity, itch improvements remain steady with ruxolitinib for atopic dermatitis

A cream-based formulation of the Janus kinase (JAK) inhibitor ruxolitinib maintained its efficacy in the 4-week open-label period of a 16-week randomized phase 2 study of adults with mild to moderate atopic dermatitis (AD), Leon H. Kircik, MD, said at the World Congress of Dermatology.

Improvements in disease severity and itch in patients receiving 1.5% ruxolitinib cream twice daily were sustained over the open-label period, said Dr. Kircik, a dermatologist in Louisville, Ky., affiliated with Mount Sinai Medical Center, New York.

Patients who switched from vehicle or 0.1% triamcinolone cream to the JAK1/2 selective inhibitor in the open-label period also experienced rapid improvements in disease severity and itch.

“This is a novel treatment that’s a topical JAK inhibitor, which so far we don’t have any in the market for atopic dermatitis, and it does have a very good efficacy and safety profile,” Dr. Kircik said during an oral presentation at the meeting.

Janus kinases modulate inflammatory cytokines implicated in AD, and may also directly modulate itch, Dr. Kircik noted.

The study comprised 307 adults with mild to moderate AD (Investigator’s Global Assessment [IGA] score of 2 or 3) and body surface area involvement of 3%-20%. They were randomized equally to six arms: vehicle, triamcinolone cream, or ruxolitinib cream at 0.15%, 0.5%, or 1.5% once daily, or at the target dose of 1.5% twice daily.


After an 8-week double-blind period, there was a 4-week open-label period during which patients randomized to vehicle or triamcinolone were switched to ruxolitinib, and then a 4-week follow-up period during which no treatment was given, Dr. Kircik said.

The mean age of the patients was 35 years, 54% were female, and the median duration of disease was 20.8 years.

In the double-blind period, 1.5% ruxolitinib cream twice daily significantly improved Eczema Area and Severity Index (EASI) score versus vehicle, Dr. Kircik said.

The mean changes in EASI scores at weeks 2, 4, and 8 were 52.7%, 71.6%, and 78.5% for ruxolitinib, versus 4.8%, 15.5%, and 26.9% for vehicle (P less than .001 for all comparisons), according to Dr. Kircik.

The patients on the target ruxolitinib dose maintained the improvements in EASI score throughout the open-label period, with mean improvement from baseline reaching 81.4% by week 10 and 84.9% by week 12.

Meanwhile, there was a sharp increase in mean EASI score improvement in patients switched to ruxolitinib, according to Dr. Kircik. In the vehicle arm, mean improvement leapt from 26.9% at week 8 to 78.4% by week 12.

Significant reductions in itch scores were seen within 36 hours of starting the 1.5% ruxolitinib cream, with itch Numeric Rating Scale (NRS) scores of –1.8 versus –0.2 for vehicle at that time point (P less than .0001), he added.

Reduction in itch score was similarly maintained in the ruxolitinib target dose group, and rapidly fell to similar levels for patients switched over to that treatment in the open-label period, Dr. Kircik said.

The target ruxolitinib dose was also noninferior to triamcinolone cream, for which the mean changes in EASI scores at weeks 2 and 4 were 40.0% and 59.8%, respectively.

Recruitment of patients in phase 3 studies of ruxolitinib cream for AD has just started, Dr. Kircik said.

The TRuE AD1 and TRuE AD2 studies are set to enroll 1,200 adolescents and adults with AD who will be randomized to ruxolitinib cream or vehicle, according to listings on ClinicalTrials.gov.

Dr. Kircik disclosed ties to several companies including Incyte, which was the sponsor of the phase 2 study and the phase 3 studies.

REPORTING FROM WCD2019

Tanezumab improves osteoarthritis pain, function in phase 3 trial

Tanezumab, an investigational monoclonal antibody directed against nerve growth factor that is under development to treat osteoarthritis pain, met most of the coprimary efficacy endpoints set for the drug in a randomized, double-blind, parallel-group, placebo-controlled phase 3 study.

At the end of a 24-week, double-blind treatment period, Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) pain and WOMAC physical function subscale scores were significantly improved, compared with placebo in the two tanezumab (2.5 mg and 5 mg) dose groups.

The least squares (LS) mean changes from baseline in WOMAC pain scores were –2.24 for placebo, –2.70 for tanezumab 2.5 mg, and –2.85 for tanezumab 5 mg (P less than or equal to .01 and P less than or equal to .001, respectively, vs. placebo).

The LS mean changes from baseline in WOMAC physical function scores were –2.11, –2.70, and –2.82, respectively (P less than or equal to .001 for both tanezumab doses vs. placebo).

The coprimary endpoint of patients’ global assessment of OA (PGA-OA) was also significantly improved with tanezumab 5 mg (–0.90; P less than or equal to .05) but not 2.5 mg (–0.82) versus placebo (–0.72).

As the 2.5-mg dose of tanezumab didn’t meet one of the three coprimary endpoints, further hypothesis testing was not possible, but exploratory findings suggested that tanezumab at 2.5 mg or 5 mg yielded higher proportions of patients with reductions from baseline in WOMAC pain scores when compared against placebo. This was the case for reductions of at least 30% (65.6%, 68.7%, and 56.6% for tanezumab 2.5 mg, tanezumab 5 mg, and placebo, respectively), 50% (45.4%, 47.9%, and 33.8%), and 70% (21.3%, 23.2%, and 17.8%).

“I think that we have now a lot of studies with tanezumab showing a significant effect on hip and knee OA pain and function, so we have the studies in order to have the drug on the market,” study first author Francis Berenbaum, MD, PhD, of Saint-Antoine Hospital, Sorbonne Université in Paris, said in an interview at the European Congress of Rheumatology.

“Of course, because of the safety issue with rapid progressive osteoarthritis (RPOA), what we are discussing now is: ‘For which patients will there be an optimal benefit-to-risk?’ So, it’s now more a discussion around the population of patients who can benefit the most with the drug,” Dr. Berenbaum added.

A possible link between the use of tanezumab and a risk for developing RPOA was first suggested by preclinical and early clinical trial data, prompting the U.S. Food and Drug Administration to place partial holds on its clinical development in 2010, and again in 2012.

However, Dr. Berenbaum noted that a “mitigation plan” had been put in place for the phase 3 program to try to lower the likelihood of RPOA. This included lowering the dose of the drug, delivering it subcutaneously rather than intravenously, not prescribing it with NSAIDs, and testing its effects and safety in a difficult-to-treat population of patients with no known risk factors for this potentially very serious adverse event.

“Based on this mitigation plan, the risk of rapid progressive osteoarthritis has considerably decreased,” Dr. Berenbaum observed. Indeed, in the phase 3 study he presented at the meeting, he said that around 2% of patients developed RPOA, which is “exactly in line with what has already been shown.” RPOA was reported in none of the placebo-treated patients, in 1.4% of those treated with tanezumab 2.5 mg, and in 2.8% in those treated with tanezumab 5 mg.

However, a “striking” finding of the current study was that despite the small increase in RPOA seen, there was no difference between the tanezumab and placebo groups in the number of patients needing total joint replacement (TJR). The percentages of patients undergoing at least one TJR were 6.7% in the placebo group, 7.8% in the tanezumab 2.5-mg group, and 7.0% in the tanezumab 5-mg group.

The joint safety events seen in the study, including TJRs, were adjudicated as being part of the normal progression of OA in the majority (73.4%) of cases. Other joint events of note were one case of subchondral insufficiency fracture occurring in a patient treated with tanezumab 2.5 mg and one case of primary osteonecrosis in a patient treated with tanezumab 5 mg.

During his presentation of the findings in a late-breaking oral abstract session, Dr. Berenbaum noted that this was a difficult-to-treat population of patients. All 849 patients who had been recruited had moderate to severe OA pain of the knee or hip and had a history of insufficient pain relief or intolerance to treatment with acetaminophen, oral NSAIDs, and tramadol and were also not responding to, or unwilling to take, opioid painkillers. Patients had to have no radiographic evidence of specified bone conditions, including RPOA.

Patients had been treated with subcutaneous tanezumab 2.5 mg (n = 283) or 5 mg (n = 284) or placebo (n = 282) at baseline, week 8, and week 16, with the three coprimary efficacy endpoints assessed at week 24.


Discussing the risk-to-benefit ratio of the drug after his presentation, Dr. Berenbaum said: “You have to keep in mind that, first, it was in very difficult-to-treat patients, compared to the other trials in the field of OA symptoms.”

He added: “Second, is that compared to the other trials, this one was able to include patients with Kellgren-Lawrence grade 4, meaning that this is a more serious population,” and third, “when you look at the responders – WOMAC 30%, 50%, 70% – there is a strong difference in terms of responders.”

Dr. Berenbaum and his coauthors noted on the poster that accompanied the late-breaking oral presentation that “an active-controlled study will provide data to further characterize the risk-benefit of tanezumab in patients with OA.”

The study was sponsored by Pfizer and Eli Lilly. Dr. Berenbaum disclosed receiving research funding through his institution from Pfizer and acting as a consultant to, and speaker for, the company as well as multiple other pharmaceutical companies. Coauthors of the study also disclosed research funding or consultancy agreements with Pfizer or Eli Lilly or were employees of the companies.

SOURCE: Berenbaum F et al. Ann Rheum Dis. Jun 2019;78(Suppl 2):262-4. Abstract LB0007, doi: 10.1136/annrheumdis-2019-eular.8660

REPORTING FROM EULAR 2019 CONGRESS

IHS Announces Requirements to Increase Access to OUD Treatment

To reduce high overdose rates in Native communities, new requirements are being implemented to give more access to opioid use disorder treatment.

Native American communities have experienced the largest increase in drug overdose deaths of all racial/ethnic groups in the US. Between 1999 and 2015, drug overdose deaths rose > 500%. To help ensure that American Indians and Alaska Natives (AI/AN) get the treatment they need, the Indian Health Service (IHS) has released Special General Memorandum 2019-01: Assuring Access to Medication Assisted Treatment for Opioid Use Disorder. It requires all IHS federal facilities to:

  • Identify opioid use disorder (OUD) treatment resources in their local areas;
  • Create an action plan, no later than Dec. 11, 2019; and
  • Provide or coordinate patient access to medication-assisted treatment (MAT), specifically increasing access to culturally appropriate prevention, treatment, and recovery support services.

MAT is a comprehensive evidence-based approach that combines pharmacologic interventions with substance abuse counseling and culturally sensitive social support.

The IHS has recently taken other steps to further facilitate MAT access in tribal communities. For example, it has added 3 FDA-approved medications to the National Core Formulary: buprenorphine, buprenorphine/naloxone, and injectable naltrexone, all of which relieve withdrawal symptoms and psychological cravings, supporting adherence to treatment and reducing illicit opioid use.

In addition, the IHS has published the Internet Eligible Controlled Substance Provider Designation Policy. This policy, established in 2018, is designed to increase access to treatment for AI/AN who live in rural or remote areas, where it can be difficult to access a provider with the necessary training and Drug Enforcement Administration approval to prescribe buprenorphine in an outpatient or office-based setting. Once approved, IHS, tribal, and urban Indian organization health care providers can prescribe controlled substances for MAT through telemedicine.

In 2018, the IHS also launched a new website (www.IHS.gov/opioids) to share information about opioids with patients, health care providers, tribal leaders, tribal and urban program administrators, and other community members. The site includes information on approaches to prevent opioid abuse, pain management, recovery tools, and funding opportunities.


Algorithm predicts villous atrophy in children with potential celiac disease

Evidence-based prediction with a grain of salt

A new algorithm may be able to predict which children with potential celiac disease will go on to develop villous atrophy, according to investigators writing in Gastroenterology.

The risk model was developed from the largest cohort of its kind, with the longest follow-up to date, reported lead author Renata Auricchio, MD, PhD, of University Federico II in Naples, Italy, and colleagues. The algorithm relies most heavily on the baseline number of intraepithelial lymphocytes (IELs) in the mucosa, followed by age at diagnosis and genetic profile. Because more than half of potential cases do not develop flat mucosa within 12 years, clinicians may now consider prescribing gluten-free diets only to the highest-risk patients instead of to all suspected cases.

Development of the algorithm began with enrollment of 340 children aged 2-18 years who were positive for antiendomysial immunoglobulin A antibodies and had tested positive twice consecutively for antitissue transglutaminase antibodies. Additionally, children were required to possess HLA DQ2- or DQ8-positive haplotypes and have normal duodenal architecture in five biopsy samples. Because of symptoms suggestive of celiac disease or parental discretion, 60 patients were started on a gluten-free diet and excluded from the study, leaving 280 patients in the final cohort. These patients were kept on a gluten-containing diet and followed for up to 12 years. The investigators checked antibodies and clinical status every 6 months, and small bowel biopsy was performed every 2 years, unless symptoms necessitated it earlier.

After a median follow-up of 60 months (range, 18 months to 12 years), 39 patients (13.9%) developed symptoms of celiac disease and were placed on a gluten-free diet, although they declined confirmatory biopsy, which precluded formal classification of celiac disease. Another 33 patients (11.7%) were lost to follow-up, and 89 (32%) stopped producing antibodies, with none going on to develop villous atrophy. In total, 42 patients (15%) developed flat mucosa during the follow-up period, with an estimated cumulative incidence of 43% at 12 years. The investigators noted that patients most frequently progressed within two time frames: at 24-48 months after enrollment or at 96-120 months.

To develop the algorithm, the investigators performed multivariable analysis with several potential risk factors, including age, sex, genetic profile, mucosal characteristics, and concomitant autoimmune diseases. Of these, a high number of IELs on first biopsy was most strongly correlated with progression to celiac disease. Patients who developed villous atrophy had a mean of 11.9 IELs at first biopsy, compared with 6.44 among those who remained potential (P = .05). The next strongest predictive factors were age and genetic profile. Just 7% of children younger than 3 years developed flat mucosa, compared with 51% of patients aged 3-10 years and 55% of those older than 10 years (P = .007). HLA status was predictive in the group aged 3-10 years but not significant in the youngest or oldest patients. Therefore, HLA haplotype was included in the final algorithm, but with a smaller contribution than five non-HLA genes, namely, IL12a, SH2B3, RGS1, CCR, and IL2/IL21.

“Combining these risk factors, we set up a model to predict the probability for a patient to evolve from potential celiac disease to villous atrophy,” the investigators wrote. “Overall, the discriminant analysis model allows us to correctly classify, at entry, 80% of the children who will not develop a flat mucosa over follow-up, while approximately 69% of those who will develop flat mucosa are correctly classified by the parameters we analyzed. This system is then more accurate to predict a child who will not develop flat mucosa and then can be monitored on a gluten-containing diet than a child who will become celiac.”
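
The authors describe this as a discriminant analysis over a handful of baseline predictors. A minimal sketch of that kind of classifier appears below; it is not the published model, and every value, score, and variable name is a hypothetical placeholder standing in for the three reported predictors (baseline IEL count, age group, and HLA/genetic risk).

    # Hypothetical sketch of a discriminant-analysis risk model for potential celiac disease.
    # All predictor values and outcomes below are invented for illustration only.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    # Columns: baseline IEL count, age group (0 = <3 y, 1 = 3-10 y, 2 = >10 y), genetic risk score
    X = np.array([
        [12.0, 2, 1.0],
        [11.5, 1, 0.8],
        [13.0, 2, 0.9],
        [ 9.5, 1, 0.6],
        [ 6.0, 0, 0.2],
        [ 5.5, 1, 0.3],
        [ 7.0, 0, 0.1],
        [ 6.5, 2, 0.4],
    ])
    y = np.array([1, 1, 1, 1, 0, 0, 0, 0])  # 1 = progressed to villous atrophy, 0 = remained "potential"

    model = LinearDiscriminantAnalysis().fit(X, y)

    # Estimated probability of progression for a new child (values hypothetical)
    print(model.predict_proba(np.array([[10.0, 1, 0.7]]))[0, 1])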

The investigators noted that IEL count may be an uncommon diagnostic; however, they recommended the test, even if it necessitates referral. “The [IEL] count turned out to be crucial for the prediction power of the discriminant analysis,” the investigators wrote.

“The long-term risks of potential celiac disease have never been accurately evaluated. Thus, before adopting a wait-and-see strategy on a gluten-containing diet, a final decision should always be shared with the family.”

Still, the investigators concluded that gluten-free diet “should not be prescribed indistinctly to all patients” with potential celiac disease, as it is a “very heterogenic condition and is not necessarily the first step of overt disease.”

The investigators disclosed no funding or conflicts of interest.

SOURCE: Auricchio R et al. Gastroenterology. 2019 Apr 9. doi: 10.1053/j.gastro.2019.04.004.


While the simplification of the diagnostic process for celiac disease (CD), now heavily reliant on CD-specific autoantibodies, has made the life of clinicians easier in many respects, new scenarios also have emerged that are posing new challenges. One of them is that a substantial, growing portion of subjects (who may or may not have symptoms) present with positive CD autoantibodies but a normal duodenal mucosa (“potential celiac patient”). If left on gluten, with time some will develop villous atrophy, but some won’t. What is the clinician supposed to do with them?

Dr. Stefano Guandalini

 

The paper by Auricchio et al. addresses this issue in a rigorous, well-structured way by closely and prospectively monitoring a large series of pediatric patients. Their conclusions have very useful implications for the clinician. In fact, taking into consideration several criteria they found valuable after a long observation period – the age of the child, HLA status, persistence of elevated CD-specific autoantibodies, and the presence or absence of intraepithelial lymphocytes in the initial biopsy – they concluded that one can correctly identify, at the outset, four out of five potential celiac patients who will not develop villous atrophy and thus do not need to follow a gluten-free diet.

Ultimately, however, let’s not forget that we are still dealing with percentages of risk to develop full-blown CD, not with definitive certainties. Hence, the decision of starting a gluten-free diet or not (and of how often and in which way to monitor those who remain on gluten) remains a mutually agreed upon plan sealed by two actors: on one side the patient (or the patient’s family); and on the other, an experienced health care provider who has clearly explained the facts. In other words, evidence-based criteria, good old medicine, and a grain of salt! 

Stefano Guandalini, MD, is a pediatric gastroenterologist at the University of Chicago Medical Center. He has no conflicts of interest.
 


Automated measurements of plasma predict amyloid status

Findings represent significant advances

 

Measuring plasma amyloid-beta 42 and amyloid-beta 40 using a fully automated immunoassay predicts amyloid-beta status in all stages of Alzheimer’s disease, according to research published online ahead of print June 24 in JAMA Neurology. Analyzing APOE genotype in addition to these biomarkers increases the accuracy of the prediction. This blood test thus could allow neurologists to identify patients at risk of amyloid-beta positivity who should undergo further assessment, said the authors. It also could be used to enroll amyloid-beta–positive participants in clinical trials.

Dr. Sebastian Palmqvist

In vivo PET imaging and analysis of cerebrospinal fluid (CSF) can detect amyloid-beta, but these procedures are expensive, and their availability is limited. Clinicians need readily available methods for detecting amyloid-beta, and research has indicated that blood-based biomarkers correlate with those in CSF. Fully automated immunoassays, such as the Elecsys test developed by Roche Diagnostics, have recently demonstrated high reliability and precision for CSF amyloid-beta. Using the Elecsys assay, Sebastian Palmqvist, MD, PhD, a neurologist at Skåne University Hospital in Malmö, Sweden, and colleagues sought to examine the accuracy of plasma amyloid-beta and tau, together with other blood-based biomarkers, at detecting cerebral amyloid-beta.
 

Testing the immunoassay in two cohorts

Dr. Palmqvist and colleagues examined participants in the prospective Swedish BioFINDER Study, which enrolled patients between July 6, 2009, and February 11, 2015. This cohort included 513 cognitively unimpaired (CU) participants, 265 participants with mild cognitive impairment (MCI), and 64 participants with Alzheimer’s disease dementia. Investigators collected blood and CSF samples at the same time from all participants. Participants’ amyloid-beta status was ascertained using the Elecsys CSF amyloid-beta 42/amyloid-beta 40 ratio. The researchers defined amyloid-beta positivity with an unbiased cutoff of less than 0.059.

Dr. Palmqvist and colleagues also examined a validation cohort that included 237 participants who had been enrolled between January 29, 2000, and October 11, 2006, in Ulm and Hannover, Germany. This group included 34 CU participants, 109 participants with MCI, and 94 participants with mild Alzheimer’s disease dementia. The investigators applied the same cutoff of CSF amyloid-beta 42/amyloid-beta 40 to define amyloid-beta positivity in this cohort as they applied to the BioFINDER cohort.
 

Automated immunoassay had high predictive accuracy

The mean age of the BioFINDER cohort was 72 years, and 52.5% of participants were female. Overall, 44% of this cohort was amyloid-beta positive, including 29% of CU participants, 60% of participants with MCI, and 100% of participants with Alzheimer’s dementia. The investigators found statistically significant positive correlations between all plasma and corresponding CSF biomarkers in this cohort.

Plasma amyloid-beta 42 and amyloid-beta 40 levels predicted amyloid-beta status with an area under the receiver operating characteristic curve (AUC) of 0.80. When the researchers added APOE to the model, the AUC increased significantly to 0.85. Accuracy improved slightly when the researchers added plasma tau (AUC, 0.86) or tau and neurofilament light (AUC, 0.87) to amyloid-beta 42, amyloid-beta 40, and APOE. The results were similar in CU and cognitively impaired participants, and in younger and older participants.
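
As a rough illustration of how a model like this is built and scored – not the study’s actual analysis or data – the sketch below fits a logistic regression on simulated plasma amyloid-beta 42, amyloid-beta 40, and APOE e4 carrier values and reports the area under the receiver operating characteristic curve. Every number is invented.

    # Hypothetical sketch: predicting amyloid status from plasma markers and APOE.
    # All values are simulated; this is not the study's model or data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    n = 200
    amyloid_positive = rng.integers(0, 2, n)                     # 1 = abnormal amyloid status
    abeta42 = 20 - 4 * amyloid_positive + rng.normal(0, 3, n)    # placeholder plasma values
    abeta40 = 250 + rng.normal(0, 30, n)
    apoe_e4 = rng.binomial(1, 0.2 + 0.3 * amyloid_positive)      # carrier status, made up

    X = np.column_stack([abeta42, abeta40, apoe_e4])
    model = LogisticRegression(max_iter=1000).fit(X, amyloid_positive)

    # In-sample AUC of the fitted model (a real analysis would use held-out data)
    probs = model.predict_proba(X)[:, 1]
    print(round(roc_auc_score(amyloid_positive, probs), 2))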

In the validation cohort, the mean age was 66 years, and 50.6% of participants were female. When Dr. Palmqvist and colleagues applied the plasma amyloid-beta 42 and amyloid-beta 40 model from the BioFINDER cohort to this population, they obtained a slightly higher AUC (0.86), but plasma tau did not increase predictive accuracy.

The investigators also performed a cost-benefit analysis of a scenario in which 1,000 amyloid-positive participants are enrolled in a trial, assuming a cost of $4,000 per participant for amyloid PET. Prescreening with plasma amyloid-beta 42, amyloid-beta 40, and APOE in this scenario reduced PET costs by as much as 30%-50%, depending on the cutoff.
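
To make the arithmetic behind that scenario concrete, here is a small worked sketch. The 44% amyloid prevalence is taken from the BioFINDER cohort described above, but the blood test’s sensitivity and specificity are purely assumed values chosen to show how savings in the reported 30%-50% range can arise; the cost of the blood test itself is ignored.

    # Hypothetical worked example: PET cost to enroll 1,000 amyloid-positive participants,
    # with and without a blood-based prescreen. Sensitivity/specificity are assumptions.
    pet_cost = 4_000          # dollars per amyloid PET scan (as in the article's scenario)
    needed_positives = 1_000  # amyloid-positive participants to enroll
    prevalence = 0.44         # amyloid positivity in the screened population (BioFINDER)
    sensitivity, specificity = 0.85, 0.85  # assumed blood-test performance

    # Without prescreening: PET everyone until 1,000 positives are found.
    scans_no_prescreen = needed_positives / prevalence
    cost_no_prescreen = scans_no_prescreen * pet_cost

    # With prescreening: PET only blood-test-positive people.
    ppv = (sensitivity * prevalence) / (
        sensitivity * prevalence + (1 - specificity) * (1 - prevalence))
    scans_with_prescreen = needed_positives / ppv
    cost_with_prescreen = scans_with_prescreen * pet_cost

    savings = 1 - cost_with_prescreen / cost_no_prescreen
    print(f"PET cost saved by prescreening: {savings:.0%}")  # ~46% under these assumptions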
 

 

 

Validation cohort was small

Dr. Palmqvist and colleagues acknowledged that a lack of data about APOE was a limitation of their validation analysis. Other limitations that they acknowledged were the small population size, which precluded subpopulation analysis, and the lack of improvement in predictive ability when they replicated the model that included plasma tau.

“Overall, the accuracies of the amyloid-beta 42 and amyloid-beta 40 assays are not sufficient to be used on their own as a clinical test of amyloid-beta positivity,” said Dr. Palmqvist and colleagues. “Additional assay development is needed before this can be recommended, possibly together with other blood biomarkers and screening tools in diagnostic algorithms.”

Even though additional validation studies are necessary, the present findings indicate “the potential usefulness blood assays might have, especially considering the ongoing great need to recruit large cohorts for Alzheimer’s disease drug trials in preclinical and prodromal stages,” the authors concluded.

This investigation was funded by foundations including the European Research Council, the Swedish Research Council, and the Knut and Alice Wallenberg foundation. Several authors are employees of the Roche Group. One author served on a scientific advisory board for Roche Diagnostics, and another received institutional research support from that company.

SOURCE: Palmqvist S et al. JAMA Neurol. 2019 Jun 24. doi: 10.1001/jamaneurol.2019.1632.


 

The investigation by Palmqvist et al. “makes several significant advancements in the field,” said Sid E. O’Bryant, PhD, professor of pharmacology and neuroscience at the University of North Texas Health Science Center in Fort Worth, in an accompanying editorial. The study’s protocol design clears the ground for a context of use of a blood screen for amyloid positivity. Also, the fully automated immunoassay “yields performance measurements that are superior to [those of] many earlier nonautomated procedures,” said Dr. O’Bryant. When Dr. Palmqvist and colleagues applied their discovery findings from a training cohort directly to a test cohort, it produced strong results. “This study suggests that the field is one step closer to the actual application of blood-based biomarkers with specific contexts of use in Alzheimer’s disease.”

The main concern about the plasma biomarkers, however, is the scalability of the methods used to measure them. “If primary care physicians are to use such a technology, the technology must have the capacity to conduct hundreds of millions of assays annually around the globe,” said Dr. O’Bryant. “A blood test for primary care must fit into the existing protocols and parameters in clinical laboratory settings. The blood collection and processing procedures are not applicable to standard clinical lab practice and will cause substantial barriers to clinical application.”

In addition, the study authors emphasize the utility of the immunoassay for primary care, but the study was designed to test for amyloid positivity, which is more appropriate for clinical trials. “No currently available drugs for patient use target amyloid,” said Dr. O’Bryant. “Therefore, this specific context of use is geared more toward clinical trial application than primary care physicians who currently need a test for the presence or absence of Alzheimer’s disease so currently available treatments and support can be put in place for patients and family members.”

Nevertheless, Dr. Palmqvist and associates have presented promising data, Dr. O’Bryant continued. The question in the field is no longer whether blood biomarkers can be used in Alzheimer’s disease, but how they can be used.


FDA approves first treatment for neuromyelitis optica spectrum disorder


 

The Food and Drug Administration has approved Soliris (eculizumab) injection for intravenous use for the treatment of neuromyelitis optica spectrum disorder (NMOSD) in adults who have tested positive for anti–aquaporin-4 (AQP4) autoantibodies.

Soliris, a complement inhibitor, is the first FDA-approved treatment for NMOSD, a rare autoimmune disease of the central nervous system that mainly affects the optic nerves and spinal cord, according to a news release.

About 73% of patients with NMOSD test positive for anti-AQP4 antibodies, and complement activation resulting from anti-AQP4 antibodies is an underlying cause of the disease, according to the news release from Alexion, the company that markets the drug. The average age of NMOSD onset is 39 years, and the disease can lead to permanent visual impairment and paralysis. The condition, previously known as Devic’s disease, may affect between 4,000 and 8,000 people in the United States. NMOSD may be confused with other neurologic conditions such as multiple sclerosis.

Investigators studied the drug’s effectiveness in a placebo-controlled clinical trial of 143 patients with NMOSD who had anti-AQP4 antibodies. Compared with placebo, Soliris reduced the number of NMOSD relapses by 94% during the 48-week study. Nearly 98% of patients in the PREVENT trial who received Soliris were relapse-free after 48 weeks, compared with 63% of patients who received placebo.



Soliris also reduced hospitalizations and the need for corticosteroids and plasma exchange to treat acute attacks.

Soliris includes a boxed warning about life-threatening and fatal meningococcal infections that have occurred in patients treated with Soliris. Patients should be monitored and evaluated immediately if infection is suspected, according to the FDA announcement. In addition, health care professionals should use caution when administering Soliris to patients with any other infection. No cases of meningococcal infection were observed in the PREVENT trial.

Soliris is available through a restricted program under a Risk Evaluation and Mitigation Strategy (REMS). Prescribers must counsel patients about the risk of meningococcal infection and ensure that patients have been vaccinated with meningococcal vaccines.



Adverse reactions in the NMOSD clinical trial included upper respiratory infection, nasopharyngitis, diarrhea, back pain, dizziness, influenza, joint pain, sore throat, and confusion.

The drug’s use for NMOSD received Orphan Drug designation, which provides incentives for the development of drugs for rare diseases.

Eculizumab first was approved by the FDA in 2007 and also may be used to treat paroxysmal nocturnal hemoglobinuria, atypical hemolytic uremic syndrome, and myasthenia gravis.

Issue
Neurology Reviews- 27(8)
Publications
Topics
Sections

 

The Food and Drug Administration has approved Soliris (eculizumab) injection for intravenous use for the treatment of neuromyelitis optica spectrum disorder (NMOSD) in adult who have tested positive for anti–aquaporin-4 (AQP4) autoantibodies.

Soliris, a complement inhibitor, is the first FDA-approved treatment for NMOSD, a rare autoimmune disease of the central nervous system that mainly affects the optic nerves and spinal cord, according to a news release.

About 73% of patients with NMOSD test positive for anti-AQP4 antibodies, and complement activation resulting from anti-AQP4 antibodies is an underlying cause of the disease, according to the news release from Alexion, the company that markets the drug. The average age of NMOSD onset is 39 years, and the disease can lead to permanent visual impairment and paralysis. The condition, previously known as Devic’s disease, may affect between 4,000 and 8,000 people in the United States. NMOSD may be confused with other neurologic conditions such as multiple sclerosis.

Investigators studied the drug’s effectiveness in a placebo-controlled clinical trial of 143 patients with NMOSD who had anti-AQP4 antibodies. Compared with placebo, Soliris reduced the number of NMOSD relapses by 94% during the 48-week study. Nearly 98% of patients in the PREVENT trial who received Soliris were relapse-free after 48 weeks, compared with 63% of patients who received placebo.



Soliris also reduced hospitalizations and the need for corticosteroids and plasma exchange to treat acute attacks.

Soliris includes a boxed warning about life-threatening and fatal meningococcal infections that have occurred in patients treated with Soliris. Patients should be monitored and evaluated immediately if infection is suspected, according to the FDA announcement. In addition, health care professionals should use caution when administering Soliris to patients with any other infection. No cases of meningococcal infection were observed in the PREVENT trial.

Soliris is available through a restricted program under a Risk Evaluation and Mitigation Strategy (REMS). Prescribers must counsel patients about the risk of meningococcal infection and ensure that patients have been vaccinated with meningococcal vaccines.



Adverse reactions in the NMOSD clinical trial included upper respiratory infection, nasopharyngitis, diarrhea, back pain, dizziness, influenza, joint pain, sore throat, and confusion.

The drug’s use for NMOSD received Orphan Drug designation, which provides incentives for the development of drugs for rare diseases.

Eculizumab first was approved by the FDA in 2007 and also may be used to treat paroxysmal nocturnal hemoglobinuria, atypical hemolytic uremic syndrome, and myasthenia gravis.

 


Issue
Neurology Reviews- 27(8)
Publish date: June 27, 2019

Dr. Eve Espey: Some good news in her 2019 contraceptive update

Article Type
Changed
Thu, 06/27/2019 - 16:30

 

– There’s some good news on the contraception and reproductive health front, according to a recent update from Eve Espey, MD.

Sharon Worcester/MDedge News
Dr. Eve Espey

The unintended pregnancy rate in the United States, including among adolescents and young women, is declining, and the U.S. abortion rate is at its lowest level since Roe v. Wade, she said at the annual clinical and scientific meeting of the American College of Obstetricians and Gynecologists.

A 2016 article based on 2008-2011 data showed that after hovering around 50% for nearly 3 decades, the unintended pregnancy rate dropped “for the first time in a very long period of time,” said Dr. Espey, professor and chair of the department of obstetrics & gynecology, division of family planning at the University of New Mexico, Albuquerque (N Engl J Med. 2016; 374[9]:843-52).

“It doesn’t look that impressive – it basically went down to 45%, but considering the scope and the number of women who are affected by unplanned pregnancy, this is actually a huge public health achievement,” she said. “And I think ... in the next cycles of the [Centers for Disease Control and Prevention’s] National Survey of Family Growth ... we’ll hopefully continue to see this and potentially more [decline].”

As for abortion rates, an increase occurred following Roe v. Wade, but rates are now down to pre-Roe levels.

“One of the things that we know about the abortion rate is that the most important determinant ... is access to contraceptives,” Dr. Espey said, noting that both the abortion and unintended pregnancy rate declines are attributable to better and more consistent use of contraceptives, increased abstinence as teens are waiting longer to have sex, and the “meteoric rise in long-acting reversible contraceptive (LARC) use.”

Importantly, while improvements in public health have traditionally benefited mainly upper-class white women, the disparities affecting women of color are finally narrowing, although they have not disappeared, she added. “Just like we’re focusing so much on this relative to maternal mortality, the same kinds of disparities occur in access to reproductive health.”

Dr. Espey also provided updates on other aspects of contraception.
 

IUDs and other LARC methods

LARC use increased from 2% of contraceptive use among reproductive-aged women in 2002 to 12% in 2012. Most of that change was in IUD use, with a small increase in implant use, she said, noting that the latest data, from the 2015-2017 cycle of the National Survey of Family Growth, show that the rate is now up to 16%.

“The rise has been nothing that I ever imagined that I would see, certainly in my professional career,” she said.

The outsized impact of LARCs on the unintended pregnancy rate reflects their consistent effectiveness over time, compared with a failure rate for short-acting methods that climbs the longer they are used, she said: while the failure rate with oral contraceptives is about 8%-9% over the first 3 years, it increases to 53% at 8 years.
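
For a rough sense of how a per-year failure rate compounds into multiyear figures of that size (an illustrative calculation assuming a constant, independent annual typical-use failure probability p, not the methodology behind the numbers quoted in the presentation), the cumulative failure probability after n years is

\[ 1 - (1 - p)^{n}, \qquad \text{e.g., } 1 - (1 - 0.09)^{8} \approx 0.53 \]

so an annual failure rate on the order of 9% would compound to roughly a 53% cumulative failure rate over 8 years of use.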

It’s a matter of looking at both “typical use” effectiveness and continuation rates: LARCs have continuation rates of about 75%-85%; Depo-Provera, for example, has a 25%-30% continuation rate at 1 year, she noted.

Dr. Espey also attributed the gains to improved access via the Affordable Care Act’s contraceptive mandate, which numerous studies have shown improved access to and consistency of contraceptive use but which is “currently being chipped away,” and to the federal Title X program, which covers family planning care for low-income women, including undocumented women.

“These two programs have made a huge impact for us, and I hope that we as ob.gyns. will continue to support them,” she said.
 

 

 

Reproductive justice

Despite their effectiveness, it is important to remember that LARC methods are not right for everyone, Dr. Espey said.

“It’s not all about effectiveness. Women have many reasons for accessing contraception, and our job is not to reduce unintended pregnancy. ... The idea really is that we empower women. ... We should really give choices and trust women to make the best choices for them,” she explained.

Barriers to IUD removal also should be eliminated, she noted, explaining that a woman who wants her IUD removed a month after insertion should have that option.

She said she has “changed her language” from asking why a woman wants an $800 IUD removed after a month to asking whether she would like to hear about ways to make it better or whether she is “just ready to have it removed.”

For those not interested in a discussion about birth control, she suggested providing information about the bedsider.org site.

“This is a great resource for patients,” she said, noting that it is available in both English and Spanish.
 

U.S. Medical Eligibility Criteria and Selected Practice Recommendations on contraceptive use

The MEC contraceptive guidance, a regularly updated, evidence-based project of the CDC, provides “best practices” information on candidate selection (who is a candidate for a particular method), Dr. Espey said, noting that it’s a “handy resource” for in-office use.

The SPR is more of a “how-to” guide that provides specifics on contraceptive use, such as when a woman can rely on the pill for contraception after she starts taking it, or how a woman should be followed after IUD placement, she said.

A free CDC app provides access to both.
 

Emergency contraception

The best overall emergency contraceptive method is the copper IUD, but it often is less accessible than oral methods, of which ulipristal acetate (ella) is the best choice, Dr. Espey said.

“Ulipristal is kind of a best-kept secret. It’s a selective progesterone-receptor modulator – it actually works better and longer than Plan B (levonorgestrel). What’s great about Plan B is that you can get it over the counter, but ulipristal delays ovulation longer,” she explained.
 

Contraceptives and obesity

Oral contraceptive efficacy is “so much more about adherence” than about weight, she said.

With respect to the contraceptive patch, limited evidence suggests that obesity may reduce effectiveness, but “it’s still way better than barrier methods,” and for the contraceptive ring, no evidence suggests that obesity affects efficacy, she said.

For emergency contraception, evidence suggests that ulipristal is more effective than Plan B in women with high body mass index.
 

OTC contraceptive access

Pharmacy access and OTC access are both good ideas, Dr. Espey said.

“ACOG now supports both, which is great, and there are now a number of states where women can access contraception through the pharmacy. There are a lot of barriers there as well, and really the answer is OTC access,” she said. “There is a pill right now that is seeking [Food and Drug Administration] approval; it will be a progestin-only pill – the first one to be available over the counter, so I think this is something that we’ll see in the next 5-10 years.”
 

 

 

Additional future directions

One technology in development is a longer-acting injectable, such as a 6- or 9-month depot-type shot.

Biodegradable implants also are in development. “What a cool idea – it just disappears in your arm, no need to remove it,” Dr. Espey said, adding that nonsurgical permanent sterilization is another possible advance, which would be “a holy grail.”

As for male contraception?

“I’ve been saying for about 25 years that in 5 years we’ll have a male contraceptive, so I’m not going to say it anymore with any kind of time frame, but it’s possible,” she said.

Dr. Espey reported having no financial disclosures.



EXPERT ANALYSIS FROM ACOG 2019
