Congenital syphilis continues to rise at an alarming rate
One of the nation’s most preventable diseases is killing newborns in ever-increasing numbers.
In California, cases of congenital syphilis – the term used when a mother passes the infection to her baby during pregnancy – continued a stark 7-year climb, to 332 cases, an 18.1% increase from 2017, according to federal data. Only Texas, Nevada, Louisiana, and Arizona had congenital syphilis rates higher than California’s. Those five states combined accounted for nearly two-thirds of total cases, although all but 17 states saw increases in their congenital syphilis rates.
The state-by-state numbers were released as part of a broader report from the Centers for Disease Control and Prevention tracking trends in sexually transmitted diseases. Cases of syphilis, gonorrhea, and chlamydia combined reached an all-time high in 2018. Cases of the most infectious stage of syphilis rose 14% to more than 35,000 cases; gonorrhea increased 5% to more than 580,000 cases; and chlamydia increased 3% to more than 1.7 million cases.
For veteran public health workers, the upward trend in congenital syphilis numbers is particularly disturbing because the condition is so easy to prevent. Blood tests can identify infection in pregnant women. The treatment is relatively simple and effective. When caught during pregnancy, transmission from mother to baby generally can be stopped.
“When we see a case of congenital syphilis, it is a hallmark of a health system and a health care failure,” said Virginia Bowen, PhD, an epidemiologist with the CDC and an author of the report.
It takes just a few shots of antibiotics to prevent a baby from getting syphilis from its mother. Left untreated, Treponema pallidum, the corkscrew-shaped organism that causes syphilis, can wiggle its way through a mother’s placenta and into a fetus. Once there, it can multiply furiously, invading every part of the body.
The effects on a newborn can be devastating. Philip Cheng, MD, is a neonatologist at St. Joseph’s Medical Center in Stockton, a city in San Joaquin County in California’s Central Valley. Twenty-six babies were infected last year in San Joaquin County, according to state data.
The brain of one of Cheng’s patients didn’t develop properly and the baby died shortly after birth. Other young patients survive but battle blood abnormalities, bone deformities, and organ damage. Congenital syphilis can cause blindness and excruciating pain.
Public health departments across the Central Valley, a largely rural expanse, report similar experiences. Following the release of the CDC report Tuesday, the California Department of Public Health released its county-by-county numbers for 2018. The report showed syphilis, gonorrhea, and chlamydia levels at their highest in 30 years, and attributed 22 stillbirths or neonatal deaths to congenital syphilis.
For the past several years, Fresno County, which had 63 cases of congenital syphilis in 2017, had the highest rate in California. In 2018, Fresno fell to fourth, behind Yuba, Kern, and San Joaquin counties. But the epidemic is far from under control. “I couldn’t even tell you how soon I think we’re going to see a decrease,” said Jena Adams, who oversees HIV and STD programs for Fresno County.
Syphilis was once a rampant and widely feared STD. But by the 1940s, penicillin was found to have a near-perfect cure rate for the disease. By 2000, syphilis rates were so low in the U.S. that the federal government launched a plan to eliminate the disease. Today, that goal is a distant memory.
Health departments once tracked down every person who tested positive for chlamydia, gonorrhea, or syphilis, to make sure they and their partners got treatment. With limited funds and climbing caseloads, many states now devote resources only to tracking syphilis. The caseloads are so high in some California counties that they track only women of childbearing age or just pregnant women.
“A lot of the funding for day-to-day public health work isn’t there,” said Jeffrey Klausner, MD, a professor at the University of California-Los Angeles who ran San Francisco’s STD program for more than a decade.
The bulk of STD prevention funding is appropriated by Congress to the CDC, which passes it on to states. That funding has been largely flat since 2003, according to data from the National Coalition of STD Directors, which represents health departments across the country. Take into account inflation and the growing caseloads, and the money is spread thinner. “It takes money, it takes training, it takes resources,” Dr. Klausner said, “and policymakers have just not prioritized that.”
A report this year by Trust for America’s Health, a public health policy research and advocacy group, estimated that 55,000 jobs were cut from local public health departments from 2008 to 2017. “We have our hands tied as much as [states] do,” said Dr. Bowen of the CDC. “We take what we’re given and try to distribute it as fairly as we can.”
San Joaquin County health officials have reorganized the department and applied for grants to increase the number of investigators available while congenital syphilis has spiked, said Hemal Parikh, county coordinator for STD control. But even with new hires and cutting back to tracking only women of childbearing age with syphilis, an investigator can have anywhere from 20 to 30 open cases at a time. In other counties, the caseload can be double that.
In 2018, Jennifer Wagman, PhD, a UCLA professor who studies infectious diseases and gender inequality, was part of a group that received CDC funding to look into what is causing the spike in congenital syphilis in California’s Central Valley.
Dr. Wagman said that, after years of studying health systems in other countries, she was shocked to see how much basic public health infrastructure has crumbled in California. In many parts of the Central Valley, county walk-in clinics that tested for and treated STDs were shuttered in the wake of the recession. That left few places for drop-in care, and investigators with no place to take someone for immediate treatment. Investigators or their patients must make appointments at one of the few providers who carry the right kind of treatment and hope the patients can keep the appointment when the time comes.
In focus groups, women told Dr. Wagman that working hourly jobs, or dealing with chaotic lives involving homelessness, abusive partners, and drug use, can make it all but impossible to stick to the appointments required at private clinics.
Dr. Wagman found that women in these high-risk groups were seeking care, though sometimes late in their pregnancy. They were just more likely to visit an emergency room, urgent care, or even a methadone clinic – places that take drop-ins but don’t necessarily routinely test for or treat syphilis.
“These people already have a million barriers,” said Jenny Malone, the public health nurse for San Joaquin County. “Now there are more.”
The most challenging cases in California are wrapped up with the state’s growing housing crisis and a methamphetamine epidemic with few treatment options. Women who are homeless often have unreliable contact information and are unlikely to have a primary care doctor. That makes them tough to track down to give a positive diagnosis or to follow up on a treatment plan.
Louisiana had the highest rate of congenital syphilis in the country for several years – until 2018. After a 22% drop in its rate, combined with increases in other states, Louisiana now ranks behind Texas and Nevada. That drop is the direct result of $550,000 in temporary supplemental funding that the CDC gave the state to combat the epidemic, said Chaquetta Johnson, DNP, deputy director of operations for the state’s STD/HIV/hepatitis program. The money helped bolster the state’s lagging public health infrastructure. It was used to host two conferences for providers in the hardest-hit areas, hire two case managers and a nurse educator, create a program for in-home treatment, and improve data systems to track cases, among other things.
In California, more than 40% of pregnant women with syphilis passed it on to their baby in 2016, the most recent year for which data is available. Gov. Gavin Newsom (D) made additional funding available this year, but it’s a “drop in the bucket,” said Sergio Morales of Essential Access Health, a nonprofit that focuses on sexual and reproductive health and is working with Kern County on congenital syphilis. “We are seeing the results of years of inaction and a lack of prioritization of STD prevention, and we’re now paying the price.”
This KHN story first published on California Healthline, a service of the California Health Care Foundation. Kaiser Health News is a nonprofit national health policy news service. It is an editorially independent program of the Henry J. Kaiser Family Foundation that is not affiliated with Kaiser Permanente.
[Update: This story was revised at 6:50 p.m. ET on Oct. 8 to reflect news developments.]
One-year data support dupilumab’s efficacy and safety in adolescents with AD
A study of dupilumab in adolescents with moderate to severe atopic dermatitis (AD) found a safety profile consistent with that seen in adults and continued evidence of efficacy for up to 52 weeks, reported the authors of the study, published online Oct. 9 in the British Journal of Dermatology.
The phase 2a open-label, ascending-dose cohort study of dupilumab in 40 adolescents with moderate to severe AD was followed by a 48-week phase 3 open-label extension study in 36 of those participants. Dupilumab is a monoclonal antibody that inhibits signaling of interleukin (IL)-4 and IL-13.
In the phase 2a study, participants were treated with a single subcutaneous dose of dupilumab – either 2 mg/kg or 4 mg/kg – and had 8 weeks of pharmacokinetic sampling. They subsequently received that same dose weekly for 4 weeks, with an 8-week safety follow-up period. Those who participated in the open-label extension continued their weekly dose, up to a maximum of 300 mg per week.
The most common treatment-emergent adverse events (a primary endpoint) seen in both the phase 2a and phase 3 studies were nasopharyngitis and exacerbation of AD – in the phase 2a study, exacerbations were seen in the period when patients weren’t taking the treatment. In the 2-mg/kg and 4-mg/kg groups, the incidence of skin infections was 29% and 42%, respectively, and the incidence of injection site reactions – which were mostly mild – was 18% and 11%, respectively. Researchers also noted conjunctivitis in 18% and 16% of the patients in the 2-mg/kg and 4-mg/kg groups, respectively, but none of the cases were considered serious and all resolved over the course of the study. In the phase 2a study, 50% of patients on the 2-mg/kg dose and 65% of those on the 4-mg/kg dose experienced an adverse event, while in the open-label extension all reported at least one adverse event.
There was one case of suicidal behavior and one case of systemic or severe hypersensitivity reported in the 2-mg/kg groups, both of which were considered adverse events of special interest. There were no deaths.
However, none of the serious adverse events – which included infected AD, palpitations, patent ductus arteriosus, and food allergy – were linked to the study treatment, and no adverse events led to study discontinuation, the authors reported.
By week 12, 70% of participants in the 2-mg/kg group and 75% in the 4-mg/kg group had achieved a 50% or greater improvement in their Eczema Area and Severity Index (EASI) scores, a secondary outcome. By week 52, those proportions had increased to 100% and 89%, respectively.
More than half of the patients (55%) in the 2-mg/kg group and 40% of those in the 4-mg/kg group achieved a 75% or greater improvement in their EASI scores by week 12; those figures increased to 88% and 78%, respectively, by week 52 in the open-label phase.
“The results from these studies support use of dupilumab for the long-term management of moderate to severe AD in adolescents,” wrote Michael J. Cork, MD, professor of dermatology, University of Sheffield, England, and coauthors. No new safety signals were identified, “compared with the known safety profile of dupilumab in adults with moderate to severe AD,” and “the PK profile was characterized by nonlinear, target-mediated kinetics, consistent with the profile in adults with moderate to severe AD,” they added.
Dupilumab was approved in the United States in March 2019 for adolescents with moderate to severe AD whose disease is not adequately controlled with topical prescription therapies or when those therapies are not advisable.
The study was sponsored by dupilumab manufacturers Sanofi and Regeneron Pharmaceuticals, which market dupilumab as Dupixent in the United States. Dr. Cork’s disclosures included those related to Sanofi Genzyme and Regeneron; other authors included employees of the companies.
SOURCE: Cork M et al. Br J Dermatol. 2019 Oct 9. doi: 10.1111/bjd.18476.
FROM THE BRITISH JOURNAL OF DERMATOLOGY
Firearm-related deaths show recent increase
After years of relative stability, firearm-related mortality in the United States rose sharply starting in 2015, according to analysis of a national mortality database.
U.S. firearm mortality was 10.4 per 100,000 person-years during 1999-2014 – with the period’s high occurring in 2012, followed by declines in each of the next 2 years – compared with 11.8 per 100,000 during 2015-2017, an increase of 13.8%, Jason E. Goldstick, PhD, and associates wrote Oct. 8 in Health Affairs.
The majority of the 612,310 firearm deaths over the entire study period were suicides, with the proportion rising slightly from 58.6% in 1999-2014 to 60.0% in 2015-2017. Homicides made up 38.5% of deaths in 1999-2014 and 37.9% in 2015-2017, while the combined share of unintentional and undetermined deaths dropped from 2.9% to 2.1%, the investigators reported.
The increase was geographically widespread, Dr. Goldstick of the University of Michigan, Ann Arbor, said in a separate written statement.
The geographic broadness can be seen in the change in mortality from 1999-2014 to 2015-2017 calculated for each locale: 29 states had an increase of more than 20%, and only three states (California, New York, and Rhode Island) and the District of Columbia had a decrease of at least 12.5%, they said. The data came from the Centers for Disease Control and Prevention’s Wide-ranging Online Data for Epidemiologic Research tool.
The different trends among states and subpopulations make it difficult to offer policy-based interventions. “The epidemiology of firearm violence is complex and varies based on the mechanism of death, demographic group under study, and regionally specific culture, making a one-size-fits-all solution inappropriate,” Dr. Goldstick and associates wrote.
The study was funded mainly by a grant from the National Institute of Child Health and Human Development. The investigators did not provide any information on conflicts of interest.
SOURCE: Goldstick JE et al. Health Aff. 2019;38(10):1646-52.
FROM HEALTH AFFAIRS
Online assessment identifies excess steroid use in IBD patients
Nearly 15% of patients with inflammatory bowel disease (IBD) were found to have steroid excess or dependency through an online assessment tool, according to recent research in the journal Alimentary Pharmacology & Therapeutics.
Since excess corticosteroid use can be measured through an online assessment tool in clinical practice and long-term corticosteroid use is associated with adverse outcomes, it may be a quality marker for patients with IBD, wrote Christian P. Selinger, MD, from the Leeds (England) Gastroenterology Institute and colleagues. “Such key performance indicators have previously been lacking in IBD, unlike other disease areas such as diabetes and cardiovascular disease.”
Over a period of 3 months, Dr. Selinger and colleagues collected prospective data on 2,385 patients with IBD who had received steroids within the last year at 19 centers in England, Wales, and Scotland. The researchers divided the centers into groups based on whether they participated in the quality improvement program (7 centers), were new to the process of collecting data on steroid use (11 centers), or did not participate in the program (1 center). The seven centers that participated in the intervention were part of an audit that began in 2017, while the other centers were evaluated over a 3-month period between April and July 2017. Patients were asked about their steroid use, including whether the steroids were prescribed for their IBD, how long each course was, how many courses they received, and whether they were able to stop using steroids without their symptoms returning.
The researchers found 14.8% of patients had an excess of steroids or were dependent on steroids, and patients at centers that participated in the quality improvement program had a lower rate of steroid exposure (23.8% vs. 31.0%; P < .001) and a lower rate of steroid excess (11.5% vs. 17.1%; P < .001) than did patients at sites that did not participate. Steroid use at centers with the improvement program also decreased over time, from 30.0% in 2015 to 23.8% in 2017 (P = .003), and steroid excess at those centers fell from 13.8% to 11.5% during that time (P = .17). The researchers noted that, in over half of cases (50.7%), the steroid excess was “avoidable.”
Among patients with Crohn’s disease, steroid excess was less likely at an intervention center (odds ratio, 0.72; 95% confidence interval, 0.46-0.97), at a center with a multidisciplinary team (OR, 0.54; 95% CI, 0.20-0.86), or with maintenance anti–tumor necrosis factor therapy (OR, 0.61; 95% CI, 0.24-0.95); in contrast, patients who received aminosalicylates were more likely to have steroid excess (OR, 1.72; 95% CI, 1.24-2.09). In patients with ulcerative colitis (UC), steroid excess was more likely among those receiving thiopurine monotherapy (OR, 1.97; 95% CI, 1.19-3.01) and less likely among those treated at an intervention center (OR, 0.72; 95% CI, 0.45-0.95).
The researchers said the online assessment is limited in assessing the reason for steroid excess and is unable to consider variables such as patient age, sex, IBD phenotype, and disease duration, but is a “simple, pragmatic tool” that can be used in real time in a clinical setting.
“This advances the case for steroid excess as a potential key performance indicator of quality in an IBD service, although in order for clinicians to benchmark their service and provide targets for improvements, any numerical goal attached to this key performance indicator would require consideration of case mix. Further data, including from national and international contexts, is needed,” concluded Dr. Selinger and colleagues.
The authors reported AbbVie provided the funding to develop the steroid assessment tool, as well as honoraria for invited attendees of the quality improvement plan, which the company also sponsored.
To help your patients better understand their treatment options, share AGA’s IBD patient education, which is online at www.gastro.org/practice-guidance/gi-patient-center/topic/inflammatory-bowel-disease.
SOURCE: Selinger CP et al. Aliment Pharmacol Ther. 2019. doi: 10.1111/apt.15497.
FROM ALIMENTARY PHARMACOLOGY & THERAPEUTICS
Online assessment identifies excess steroid use in IBD patients
Since excess corticosteroid use can be measured through an online assessment tool in clinical practice, and long-term corticosteroid use is associated with adverse outcomes, steroid excess may be a quality marker for patients with IBD, according to recent research in the journal Alimentary Pharmacology & Therapeutics. “Such key performance indicators have previously been lacking in IBD, unlike other disease areas such as diabetes and cardiovascular disease,” wrote Christian P. Selinger, MD, from the Leeds (England) Gastroenterology Institute, and colleagues.
Over a period of 3 months, Dr. Selinger and colleagues collected prospective data from 2,385 patients with IBD at 19 centers in England, Wales, and Scotland who had received steroids within the last year. The researchers divided the centers into groups based on whether they participated in the quality improvement program (7 centers), were new to the process of collecting data on steroid use (11 centers), or did not participate in the program (1 center). The seven centers that participated in the intervention were part of an audit that began in 2017, while the other centers were evaluated over a 3-month period between April and July 2017. Patients were asked questions about their steroid use, including whether the steroids were prescribed for their IBD, how long the course of steroids was, how many courses of steroids they received, and if they were able to stop using steroids without their symptoms returning.
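As an illustration of how questionnaire answers of this shape can be scored programmatically, the sketch below flags possible steroid excess with a simple rule set. The thresholds (more than one course per year, a course longer than 12 weeks, or inability to stop without relapse) are assumptions made for the example, not the study’s published criteria:

```python
from dataclasses import dataclass

@dataclass
class SteroidQuestionnaire:
    """One patient's answers, mirroring the survey items described above."""
    prescribed_for_ibd: bool        # were the steroids given for IBD?
    longest_course_weeks: int       # duration of the longest course
    courses_in_last_year: int       # number of courses in the last 12 months
    stopped_without_relapse: bool   # able to stop without symptoms returning

def steroid_excess(q: SteroidQuestionnaire) -> bool:
    """Flag possible steroid excess or dependency.

    Hypothetical rule set: the published tool's actual cut-offs may differ.
    """
    if not q.prescribed_for_ibd:
        return False  # steroids unrelated to IBD are out of scope
    too_many_courses = q.courses_in_last_year > 1
    course_too_long = q.longest_course_weeks > 12
    cannot_wean = not q.stopped_without_relapse
    return too_many_courses or course_too_long or cannot_wean

# Example: two courses in one year, weaned successfully -> still flagged
print(steroid_excess(SteroidQuestionnaire(True, 8, 2, True)))  # True
```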
The researchers found that 14.8% of patients had steroid excess or dependency, and patients at centers that participated in the quality improvement program had a lower rate of steroid exposure (23.8% vs. 31.0%; P < .001) and a lower rate of steroid excess (11.5% vs. 17.1%; P < .001) than did patients at sites that did not participate. At centers with the improvement program, steroid use also decreased over time, from 30.0% in 2015 to 23.8% in 2017 (P = .003), and steroid excess fell from 13.8% to 11.5% over the same period (P = .17). In over half of cases (50.7%), the researchers judged the steroid excess “avoidable.”
In patients with Crohn’s disease, steroid excess was less likely among those treated at an intervention center (odds ratio, 0.72; 95% confidence interval, 0.46-0.97), at a center with a multidisciplinary team (OR, 0.54; 95% CI, 0.20-0.86), or with maintenance anti–tumor necrosis factor therapy (OR, 0.61; 95% CI, 0.24-0.95); in contrast, patients who received aminosalicylates were more likely to have steroid excess (OR, 1.72; 95% CI, 1.24-2.09). In patients with ulcerative colitis (UC), steroid excess was more likely among those receiving thiopurine monotherapy (OR, 1.97; 95% CI, 1.19-3.01) and less likely among those treated at an intervention center (OR, 0.72; 95% CI, 0.45-0.95).
The researchers said the online assessment is limited in assessing the reason for steroid excess and is unable to consider variables such as patient age, sex, IBD phenotype, and disease duration, but is a “simple, pragmatic tool” that can be used in real time in a clinical setting.
“This advances the case for steroid excess as a potential key performance indicator of quality in an IBD service, although in order for clinicians to benchmark their service and provide targets for improvements, any numerical goal attached to this key performance indicator would require consideration of case mix. Further data, including from national and international contexts, is needed,” concluded Dr. Selinger and colleagues.
The authors reported AbbVie provided the funding to develop the steroid assessment tool, as well as honoraria for invited attendees of the quality improvement plan, which the company also sponsored.
To help your patients better understand their treatment options, share AGA’s IBD patient education, which is online at www.gastro.org/practice-guidance/gi-patient-center/topic/inflammatory-bowel-disease.
SOURCE: Selinger CP et al. Aliment Pharmacol Ther. 2019. doi: 10.1111/apt.15497.
FROM ALIMENTARY PHARMACOLOGY & THERAPEUTICS
Key clinical point: An online assessment tool can be used to identify patients with inflammatory bowel disease (IBD) receiving an excess of steroids, and a quality improvement program lowered excess steroids at centers that implemented the program.
Major finding: Of the patients in the study, 14.8% were given excess steroids or were dependent on steroids; patients at centers that participated in the quality improvement program had a lower rate of steroid exposure (23.8% vs. 31.0%; P < .001) and a lower rate of steroid excess (11.5% vs. 17.1%; P < .001) than did patients at sites that did not participate.
Study details: A prospective study of 2,385 patients with IBD at 19 centers in England, Wales, and Scotland.
Disclosures: The authors reported AbbVie provided the funding to develop the steroid assessment tool, as well as honoraria for invited attendees of the quality improvement plan, which the company also sponsored.
Source: Selinger CP et al. Aliment Pharmacol Ther. 2019. doi: 10.1111/apt.15497.
Considering the value of productivity bonuses
Connect high-value care with reimbursement
Physician payment models that include productivity bonuses are widespread, says Reshma Gupta, MD, MSHPM.
“These payment models are thought to affect clinician behavior, with productivity bonuses incentivizing clinicians to do more. While new policies aim to reduce total costs of care, little is known about the association between physician payment models and the culture of delivering high-value care,” said Dr. Gupta, the medical director for quality improvement at UCLA Health in Los Angeles.
To find out if hospitalist reimbursement models are associated with high-value culture in university, community, and safety-net hospitals, internal medicine hospitalists from 12 hospitals across California completed a cross-sectional survey assessing their perceptions of high-value care culture within their institutions. Dr. Gupta and colleagues summarized the results.
The study found that nearly 30% of hospitalists who were sampled reported payment with productivity bonuses, while only 5% of hospitalists sampled reported quality or value-based bonuses, Dr. Gupta said. “Hospitalists who reported payment with productivity bonuses were more likely to report lower high-value care culture within their programs.”
Hospitalist leaders interested in improving high-value care culture can use the High Value Care Culture Survey (http://www.highvaluecareculturesurvey.com) to quickly assess the culture within their programs, diagnose areas of opportunity, and target improvement efforts.
“They can test new physician payment models within their programs and evaluate their high-value care culture to identify areas of opportunity for improvement,” Dr. Gupta said.
Reference
1. Gupta R et al. Association between hospitalist productivity payments and high-value care culture. J Hosp Med. 2019;14(1):16-21.
Best treatment approach for early stage follicular lymphoma is unclear
Randomized trials are needed to determine the optimal treatment approach for early stage follicular lymphoma (FL), according to researchers.
A retrospective study showed similar outcomes among patients who received radiotherapy, immunochemotherapy, combined modality treatment (CMT), and watchful waiting (WW).
There were some differences in progression-free survival (PFS) according to treatment approach. However, there were no significant differences in overall survival (OS) between any of the active treatments or between patients who received active treatment and those managed with WW.
Joshua W. D. Tobin, MD, of Princess Alexandra Hospital in Brisbane, Queensland, Australia, and colleagues conducted this research and reported the results in Blood Advances.
The researchers analyzed 365 patients with newly diagnosed, stage I/II FL. The patients had a median age of 63 years and more than half were men. They were diagnosed between 2005 and 2017, and the median follow-up was 45 months.
Most patients (n = 280) received active treatment, but 85 were managed with WW. The WW patients were older and had more extranodal involvement.
Types of active treatment included radiotherapy alone (n = 171), immunochemotherapy alone (n = 63), and CMT (n = 46). Compared with the other groups, patients who received radiotherapy alone had less bulk, fewer nodal sites, and fewer B symptoms, and were more likely to have stage I disease. Patients who received CMT had fewer B symptoms and lower FLIPI scores compared with patients who received immunochemotherapy.
The immunochemotherapy regimens used were largely rituximab based. In all, 106 patients received rituximab (alone or in combination) for induction, and 49 received maintenance rituximab (37 in the immunochemotherapy group and 12 in the CMT group).
Results
Response rates were similar among the active treatment groups. The overall response rate was 95% in the radiotherapy group, 96% in the immunochemotherapy group, and 95% in the CMT group (P = .87).
There was a significant difference in PFS between the radiotherapy, immunochemotherapy, and CMT groups (P = .023), but there was no difference in OS between these groups (P = .38).
There was no significant difference in PFS between the immunochemotherapy and CMT groups (hazard ratio [HR], 1.78; P = .24), so the researchers combined these groups into a single group called “systemic therapy.” The patients treated with systemic therapy had PFS (HR, 1.32; P = .96) and OS (HR, 0.46; P = .21) similar to those of patients treated with radiotherapy alone.
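For readers who want to run this kind of comparison on their own data, a hazard ratio between two treatment groups can be estimated with a Cox proportional hazards model. The sketch below uses the Python lifelines package on invented data; the column names and values are assumptions for the example, not the authors’ dataset:

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical columns: pfs_months (time to progression or censoring),
# progressed (1 = event observed), systemic (1 = immunochemotherapy/CMT,
# 0 = radiotherapy alone)
df = pd.DataFrame({
    "pfs_months": [24.0, 60.5, 12.3, 48.0, 30.2, 55.0],
    "progressed": [1, 0, 1, 0, 1, 0],
    "systemic":   [0, 0, 1, 1, 0, 1],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="pfs_months", event_col="progressed")
cph.print_summary()  # exp(coef) on "systemic" is the HR vs. radiotherapy
```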
Maintenance rituximab was associated with prolonged PFS among patients treated with systemic therapy (HR, 0.24; P = .017). However, there was no significant difference in OS between patients who received maintenance and those who did not (HR, 0.89; P = .90).
Relapse was less common among patients who received maintenance, and there were no cases of transformation in that group. Relapse occurred in 24.6% of the radiotherapy group, 18.3% of the systemic therapy group, and 4.1% of the group that received systemic therapy plus maintenance (P = .006). Transformation was less likely in the systemic therapy group (1.8%) than in the radiotherapy (6.4%) and WW (9.4%) groups (HR, 0.20; P = .034).
Overall, the active treatment group had better PFS than the WW group (HR, 0.52; P = .002), but there was no significant difference in OS between the groups (HR, 0.94; P = .90).
“Based on our comparable OS between WW and actively treated patients, WW could be considered as an initial management strategy in early stage FL,” Dr. Tobin and colleagues wrote. “However, long-term follow-up is required to determine if a survival benefit exists favoring active treatment.”
The researchers reported relationships with many pharmaceutical companies.
SOURCE: Tobin JWD et al. Blood Adv. 2019 Oct 8;3(19):2804-11.
FROM BLOOD ADVANCES
Investigators use ARMSS score to predict future MS-related disability
STOCKHOLM – The Age-Related Multiple Sclerosis Severity (ARMSS) score can be used to predict a patient’s future MS-related disability, according to research presented at the annual congress of the European Committee for Treatment and Research in Multiple Sclerosis. The resulting measurement is stable, not highly sensitive to age, and appropriate for research applications. “It could give a clinician an earlier indication of the potential disease course of a patient,” said Ryan Ramanujam, PhD, assistant professor of translational neuroepidemiology at Karolinska Institutet in Stockholm.
Researchers who study MS use various scores to measure disease severity, including the Expanded Disability Status Scale (EDSS) and the MS Severity Scale (MSSS). These scores cannot predict a patient’s future status, however, and they do not remain stable throughout the course of a patient’s disease. Fitting a linear model to a series of scores over time can give a misleading impression of a patient’s disease progression. “What we need is a metric to give a holistic overview of disease course, regardless of when it’s measured in a patient’s disease progression,” said Dr. Ramanujam. Such a measurement could aid the search for genes that affect MS severity, he added.
Examining disability by patient age
Dr. Ramanujam and colleagues constructed their measure using the ARMSS score, which ranks EDSS score by age instead of by disease duration. The ARMSS score ranges from 0 to 10, and the median value is 5 for all patients at a given age. Investigators can calculate the score using a previously published global matrix of values for ARMSS and MSSS available in the R package ms.sev.
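The published lookup matrix ships with the R package ms.sev; the Python sketch below is only a rough, self-contained approximation of the idea (not that package’s API), ranking EDSS within age bins and rescaling the rank to a 0-10 score so that the within-age median sits near 5:

```python
import numpy as np

def armss_like(ages, edss, bin_width=2):
    """Approximate age-related severity: rank EDSS within age bins, scale to 0-10.

    Illustrative only -- the real ARMSS values come from a published
    global matrix distributed with the R package ms.sev.
    """
    ages = np.asarray(ages, dtype=float)
    edss = np.asarray(edss, dtype=float)
    scores = np.empty_like(edss)
    bins = (ages // bin_width).astype(int)
    for b in np.unique(bins):
        idx = np.where(bins == b)[0]
        # fractional rank in (0, 1), scaled to (0, 10); the median lands near 5
        order = edss[idx].argsort().argsort() + 1
        scores[idx] = 10.0 * order / (len(idx) + 1)
    return scores

print(armss_like([40, 40, 41, 60, 61, 60], [2.0, 4.5, 3.0, 6.0, 4.0, 6.5]))
```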
The investigators found that the ARMSS score is slightly superior to the MSSS in detecting small increases in EDSS. One benefit of the ARMSS score, compared with the MSSS, is that it allows investigators to study patients for whom time of disease onset is unknown. The ARMSS score also removes potential systematic bias that might result from a neurologist’s retrospective assignment of date of disease onset, said Dr. Ramanujam.
He and his colleagues used ARMSS to compare patients’ disease course with what is expected for that patient (i.e., an ARMSS that remains stable at 5). They extracted data for 15,831 patients participating in the Swedish MS registry, including age and EDSS score at each neurological visit. Eligible patients had serial EDSS scores for 10 years. Dr. Ramanujam and colleagues included 4,514 patients in their analysis.
Measures at 2 years correlated with those at 10 years
The researchers created what they called the ARMSS integral by calculating the ARMSS score’s change from 5 at each examination (e.g., −0.5 or 1). “The ARMSS integral can be thought of as the cumulative disability that a patient accrues over his or her disease course, relative to the average patient, who had the disease for the same ages,” said Dr. Ramanujam. At 2 years of follow-up and at 10 years of follow-up, the distribution of ARMSS integrals for the study population followed a normal pattern.
Next, the investigators sought to compare patients by standardizing their follow-up time. To do this, they calculated what they called the ARMSS-rate by dividing each patient’s ARMSS integral by the number of years of follow-up. The ARMSS-rate offers a “snapshot of disease severity and progression,” said Dr. Ramanujam. When the researchers compared ARMSS-rates at 2 years and 10 years for each patient, they found that the measure was “extremely stable over time and strongly correlated with future disability.” The correlation improved slightly when the researchers compared ARMSS-rates at 4 years and 10 years.
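Under the assumption that the deviation from the expected median of 5 is accumulated across visits (a trapezoidal sum is one natural reading of the description above), the two derived measures might be computed as follows; the visit data are invented for the example:

```python
def armss_integral(visits):
    """Trapezoidal integral of (ARMSS - 5) over follow-up time.

    visits: list of (t_years, armss) pairs sorted by time.
    Assumed accumulation scheme -- the paper's exact method may differ.
    """
    total = 0.0
    for (t0, a0), (t1, a1) in zip(visits, visits[1:]):
        total += ((a0 - 5.0) + (a1 - 5.0)) / 2.0 * (t1 - t0)
    return total

def armss_rate(visits):
    """ARMSS integral divided by years of follow-up."""
    follow_up = visits[-1][0] - visits[0][0]
    return armss_integral(visits) / follow_up

visits = [(0.0, 5.5), (1.0, 6.0), (2.0, 6.5)]
print(armss_rate(visits))  # 1.0: consistently worse than the age-matched median
```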
The investigators then categorized patients based on their ARMSS-rate at 2 years (e.g., 0 to 1, 1 to 2, 2 to 3). When they compared the values in these categories with the median ARMSS-rates for the same individuals over the subsequent 8 years, they found strong group-level correlations.
To analyze correlations on an individual level, Dr. Ramanujam and colleagues examined the ability of different metrics at the time closest to 2 years of follow-up to predict those measured at 10 years. They assigned the value 1 to the most severe quartile of outcomes and the value 0 to all other quartiles. For predictors and outcomes, the investigators examined ARMSS-rate and the integral of progression index, which they calculated using the integral of EDSS. They also included EDSS at 10 years as an outcome for progression index.
For predicting the subsequent 8 years of ARMSS-rates, ARMSS-rate at 2 years had an area under the curve (AUC) of 0.921. When the investigators performed the same analysis using a cohort of patients with MS from British Columbia, Canada, they obtained an AUC of 0.887. Progression index at 2 years had an AUC of 0.61 for predicting the most severe quartile of the next 8 years. Compared with this result, ARMSS integral up to 2 years was slightly better at predicting EDSS at 10 years, said Dr. Ramanujam. The progression index poorly predicted the most severe quartile of EDSS at 10 years.
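A minimal sketch of this evaluation setup, assuming scikit-learn and synthetic data: the later outcome’s most severe quartile is coded 1, all other quartiles 0, and the 2-year measure is scored against that label with ROC AUC:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
rate_2yr = rng.normal(0.0, 1.0, 200)             # ARMSS-rate at ~2 years
rate_8yr = rate_2yr + rng.normal(0.0, 0.4, 200)  # later disability, correlated

# Label the most severe quartile of the later outcome as 1, all others 0
worst_quartile = (rate_8yr >= np.quantile(rate_8yr, 0.75)).astype(int)

# High AUC when the early measure predicts the worst later outcomes
print(roc_auc_score(worst_quartile, rate_2yr))
```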
The main limitation of the ARMSS integral and ARMSS-rate is that they are based on EDSS, he added. The EDSS gives great weight to mobility and largely does not measure cognitive disability. “Future metrics could therefore include additional data such as MRI, Symbol Digit Modalities Test, or neurofilament light levels,” said Dr. Ramanujam. “Also, self-assessment could be one area to improve in the future.”
Dr. Ramanujam had no conflicts of interest to disclose. He receives funding from the MultipleMS Project, which is part of the EU Horizon 2020 Framework.
REPORTING FROM ECTRIMS 2019
HCV+ kidney transplants: Similar outcomes to HCV- regardless of recipient serostatus
Kidneys from donors with hepatitis C virus (HCV) infection function well despite adverse quality assessment and are a valuable resource for transplantation candidates independent of HCV status, according to the findings of a large U.S. registry study.
A total of 260 HCV-viremic kidneys were transplanted in the first quarter of 2019, with 105 additional viremic kidneys being discarded, according to a report in the Journal of the American Society of Nephrology by Vishnu S. Potluri, MD, of the University of Pennsylvania, Philadelphia, and colleagues.
Donor HCV viremia was defined as an HCV nucleic acid test–positive result reported to the Organ Procurement and Transplantation Network (OPTN). Donors who were HCV negative in this test were labeled as HCV nonviremic. Kidney transplantation recipients were defined as either HCV seropositive or seronegative based on HCV antibody testing.
During the first quarter of 2019, 74% of HCV-viremic kidneys were transplanted into seronegative recipients, which is a major change from how HCV-viremic kidneys were allocated a few years ago, according to the researchers. The results of small trials showing the benefits of such transplantations and the success of direct-acting antiviral therapy (DAA) on clearing HCV infections were indicated as likely responsible for the change.
HCV-viremic kidneys had similar function, compared with HCV-nonviremic kidneys, when matched on the donor elements included in the Kidney Donor Profile Index (KDPI), excluding HCV, they added. In addition, the 12-month estimated glomerular filtration rate (eGFR) was similar between the seropositive and seronegative recipients, at 65.4 and 71.1 mL/min per 1.73 m², respectively (P = .05), which suggests that recipient HCV serostatus does not negatively affect 1-year graft function with HCV-viremic kidneys in the era of DAA treatments, according to the authors.
Also, among HCV-seropositive recipients of HCV-viremic kidneys, seven (3.4%) died by 1 year post transplantation, while none of the HCV-seronegative recipients of HCV-viremic kidneys experienced graft failure or death.
“These striking results provide important additional evidence that the KDPI, with its current negative weighting for HCV status, does not accurately assess the quality of kidneys from HCV-viremic donors,” the authors wrote.
“HCV-viremic kidneys are a valuable resource for transplantation. Disincentives for accepting these organs should be addressed by the transplantation community,” Dr. Potluri and colleagues concluded.
This work was supported in part by the Health Resources and Services Administration of the U.S. Department of Health & Human Services. The various authors reported grant funding and advisory board participation with a number of pharmaceutical companies.
SOURCE: Potluri VS et al. J Am Soc Nephrol. 2019;30:1939-51.
FROM JOURNAL OF THE AMERICAN SOCIETY OF NEPHROLOGY
Intensive cognitive training may be needed for memory gains in MS
STOCKHOLM – Cognitive rehabilitation to address memory deficits in multiple sclerosis (MS) can take a page from efforts to help those with other conditions, but practitioners and patients should realize that more intensive interventions are likely to be of greater benefit in MS, Piet Bouman reported at the annual congress of the European Committee for Treatment and Research in Multiple Sclerosis.
Hippocampal pathology can underlie the high-impact memory deficits that are seen frequently in patients with MS, noted Mr. Bouman, a doctoral student at Amsterdam University Medical Centers, and his collaborators. However, they observed, it remains an open question which strategies best ameliorate hippocampal memory loss for those with MS.
To address this knowledge gap, Mr. Bouman and his coauthors conducted a systematic review and meta-analysis that aimed to determine which memory interventions in current use most help hippocampal memory functioning. The authors did not limit the review to MS, but included other conditions where hippocampal lesions, atrophy, or changes in connection or functioning may affect memory. These include healthy aging, mild cognitive impairment, and Alzheimer’s disease.
Included in the search for studies were those that used either cognitive or exercise interventions and also evaluated both visuospatial and verbal memory using validated measures, such as the Brief Visuospatial Memory Test or the California Verbal Learning Test.
After reviewing an initial 6,697 articles, the authors used Cochrane criteria to eliminate studies that were at high risk of bias. In the end, 141 studies were selected for the final review, and 82 of these were included in the meta-analysis. Eighteen studies involving 895 individuals addressed healthy aging; 39 studies enrolled 2,256 patients with mild cognitive impairment; 8 studies enrolled 223 patients with Alzheimer’s disease; and 26 studies involving 1,174 patients looked at cognitive impairment in the MS population.
To express the efficacy of the interventions across the various studies, Mr. Bouman and collaborators used the ratio of the difference in mean outcomes between groups to the standard deviation in outcome among participants. This ratio, commonly used to harmonize data in meta-analyses, is termed the standardized mean difference.
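For readers unfamiliar with the measure, here is a minimal sketch of one common form of the standardized mean difference, Cohen's d with a pooled standard deviation. The abstract does not specify which variant the authors used, so the pooling formula is an assumption, and all numbers below are hypothetical.

```python
import math

def standardized_mean_difference(mean_tx, sd_tx, n_tx, mean_ctrl, sd_ctrl, n_ctrl):
    """Cohen's d: between-group mean difference divided by the pooled SD.

    One common SMD variant; the study may have used another (e.g., Hedges' g).
    """
    pooled_sd = math.sqrt(
        ((n_tx - 1) * sd_tx**2 + (n_ctrl - 1) * sd_ctrl**2) / (n_tx + n_ctrl - 2)
    )
    return (mean_tx - mean_ctrl) / pooled_sd

# Hypothetical memory-test scores: intervention group vs. control.
print(standardized_mean_difference(52.0, 9.5, 40, 48.0, 10.0, 42))  # ~0.41
```

Because the group difference is divided by the spread of scores, the result is unitless, which is what lets a meta-analysis pool memory tests scored on different scales.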
Individuals representing the healthy aging population saw the most benefit from interventions to address memory loss, with a standardized mean difference of 0.48. Patients with mild cognitive impairment saw a standardized mean difference of 0.46, followed by patients with Alzheimer’s disease with a standardized mean difference of 0.43. Patients with MS lagged far behind in their response to interventions to improve memory, with a standardized mean difference of 0.34.
Looking at the different kinds of interventions, exercise interventions showed moderate effectiveness, with a standardized mean difference of 0.46. By contrast, high-intensity cognitive training focused on memory strategies was the most effective intervention, with a standardized mean difference of 1.03, said Mr. Bouman and his coauthors.
Among the varying conditions associated with hippocampal memory loss, MS-related memory problems saw the least response to intervention, “which might be a result of a more widespread pattern of cognitive decline in MS,” noted Mr. Bouman and coauthors.
“Future studies should work from the realization that memory rehabilitation in MS might require a different approach” than that used in healthy aging, mild cognitive impairment, and Alzheimer’s disease, wrote the authors.
Their review revealed “persistent methodological flaws” in the literature, they noted. These included small sample sizes and selection bias.
Mr. Bouman reported that he had no disclosures. One coauthor reported financial relationships with Sanofi Genzyme, Merck-Serono, and Biogen Idec. Another reported financial relationships with Merck-Serono, Biogen, Novartis, Genzyme, and Teva Pharmaceuticals.
SOURCE: Bouman P et al. ECTRIMS 2019. Abstract P1439.
REPORTING FROM ECTRIMS 2019