Water, water everywhere leads to leaner students
Elementary schools that provide easy access to drinking water and education about its benefits may help their students maintain a healthy weight, a new study found.
Researchers examined the health and drinking habits of 1,249 children in 26 low-income, ethnically diverse elementary schools in the San Francisco Bay Area. In half of the schools, water stations were placed throughout, along with signs explaining why water is healthier than sugary drinks. In addition, assemblies were held explaining the advantages of water over sugary drinks.
That simple message seemed to have had an outsized effect. Schools with water stations had significantly fewer overweight students than the other schools by the end of the 15-month study, according to Anisha Patel, MD, MSPH, MSHS, associate professor of pediatrics at Stanford (Calif.) University, who will be presenting the findings at the Pediatric Academic Societies (PAS) 2022 Meeting, Denver.
“Sugar-sweetened beverages are a huge contributor to obesity,” Dr. Patel told this news organization. “This provides a key strategy for schools to adopt, and the time is right for this type of work – in the pandemic period we’ve seen significant increases in obesity. Investments like this could help stem that.”
According to the U.S. Centers for Disease Control and Prevention, 14.4 million children aged 2-19 years in the United States – about 19% of all kids in that age range – were obese in 2017-2018. The agency said the rate of increase in body mass index among this group nearly doubled during the COVID-19 pandemic.
Children with obesity are at higher risk for chronic health problems, including diabetes, heart disease, depression, and high blood pressure.
Dr. Patel’s study, funded by the National Institutes of Health, was the culmination of a decade of interest in the area, she said.
Water stations and compostable or recyclable cups were placed in high-traffic areas of the schools, including playgrounds and cafeterias. The water was tested for lead, and if needed, researchers worked with school districts to remediate, Dr. Patel said in an interview.
The intervention included a kickoff assembly about the health benefits of water intake, and students who were seen drinking water with their lunches were given small prizes.
The researchers assessed body weight, height, and dietary intake of students throughout the study, including their consumption of water, sodas, fruit juices, and flavored and unflavored milk.
Promoting water didn’t lead to magical weight loss. At the start of the study, 49.5% of students in the intervention group were overweight – a figure that nudged up to 49.8% by the end of the study. In the control group, however, 47.7% of students began the study overweight – a number that climbed to 51.4% by the end of the trial (odds ratio, 0.3; P = .01). The researchers attributed the control group’s larger increase to the absence of messaging encouraging students to choose water over sweetened drinks.
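The gap between the two groups can be illustrated with a crude calculation from the prevalence figures above. This is a sketch only: the odds ratio of 0.3 reported by the researchers presumably comes from their own (likely adjusted, longitudinal) statistical model, so the simple unadjusted end-of-study comparison below will not reproduce it.

```python
# Crude end-of-study odds ratio of being overweight (intervention vs. control),
# using only the prevalence figures quoted in the article. Illustrative only:
# the study's reported OR of 0.3 comes from the researchers' model, not from
# this unadjusted snapshot comparison.

def odds(prevalence: float) -> float:
    """Convert a prevalence (proportion) to odds."""
    return prevalence / (1 - prevalence)

p_intervention_end = 0.498  # 49.8% overweight at study end, water-station schools
p_control_end = 0.514       # 51.4% overweight at study end, control schools

crude_or = odds(p_intervention_end) / odds(p_control_end)
print(f"Crude end-of-study odds ratio: {crude_or:.2f}")  # roughly 0.94
```

The crude snapshot ratio is close to 1 because both groups ended with similar prevalence; the much stronger reported effect reflects the diverging trajectories over the 15 months (essentially flat in the intervention schools vs. a nearly 4-percentage-point rise in the control schools).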
“We were very excited the effect sizes were nearly double previous studies, which was great news,” Dr. Patel said.
Water intake began to decline at about the 15-month mark, signaling the need for more long-term, consistent education and incentive to foster lasting habits, Dr. Patel said.
The researchers noted that they were unable to collect data from eight of the target schools because of the pandemic. In addition, the study focused on schools with heavily Latino student populations, so the results might not be generalizable to other communities, they said.
Angie Cradock, a principal research scientist at the Harvard T. H. Chan School of Public Health, Boston, said the study “offers an important and practical strategy to promote student health.”
Ms. Cradock serves as deputy director of the Harvard Prevention Research Center on Nutrition and Physical Activity, which focuses on improving population nutrition, increasing physical activity, reducing obesity and chronic disease, and improving health equity.
Dr. Patel and her colleagues’ three-pronged approach of using education, promotion, and accessibility to increase student interest in drinking water could be employed at countless other schools, said Ms. Cradock, who was not involved in the study.
“Negative perceptions of tap water and drinking fountains are common,” she said. “Not all students have access to safe and appealing drinking water while at school, and this strategy seems like a recipe for success.”
Dr. Patel reported no relevant financial relationships.
A version of this article first appeared on Medscape.com.
Substance use disorders increase risk for death from COVID-19
MADRID, Spain – People with substance use disorders face a higher risk of death from COVID-19 compared with the general population. Such are the findings of a line of research led by Mexican psychiatrist Nora Volkow, MD, director of the U.S. National Institute on Drug Abuse (NIDA). A pioneer in the use of brain imaging to investigate how substance use affects brain functions and one of Time magazine’s “Top 100 People Who Shape Our World,” she led the Inaugural Conference at the XXXI Congress of the Spanish Society of Clinical Pharmacology “Drugs and Actions During the Pandemic.” Dr. Volkow spoke about the effects that the current health crisis has had on drug use and the social challenges that arose from lockdowns. She also presented and discussed the results of studies being conducted at NIDA that “are aimed at reviewing what we’ve learned and what the consequences of COVID-19 have been with respect to substance abuse disorder.”
As Dr. Volkow pointed out, drugs affect much more than just the brain. “In particular, the heart, the lungs, the immune system – all of these are significantly harmed by substances like tobacco, alcohol, cocaine, and methamphetamine. This is why, since the beginning of the pandemic, we’ve been worried about seeing what consequences SARS-CoV-2 was going to have on users of these substances, especially in light of the great toll this disease takes on the respiratory system and the vascular system.”
Pulmonary ‘predisposition’ and race
Dr. Volkow and her team launched several studies to get a more thorough understanding of the link between substance abuse disorders and poor COVID-19 prognoses. One of them was based on analyses from electronic health records in the United States. The purpose was to determine COVID-19 risk and outcomes in patients based on the type of use disorder (for example, alcohol, opioid, cannabis, cocaine).
“The results showed that regardless of the drug type, all users of these substances had both a higher risk of being infected by COVID-19 and a higher death rate in comparison with the rest of the population,” said Dr. Volkow. “This surprised us, because there’s no evidence that drugs themselves make the virus more infectious. However, what the results did clearly indicate to us was that using these substances was associated with behavior that put these individuals at a greater risk for infection,” Dr. Volkow explained.
“In addition,” she continued, “using, for example, tobacco or cannabis causes inflammation in the lungs. It seems that, as a result, they end up being more vulnerable to infection by COVID. And this has consequences, above all, in terms of mortality.”
Another finding was that, among patients with substance use disorders, race had the largest effect on COVID risk. “From the very start, we saw that, compared with White individuals, Black individuals showed a much higher risk of not only getting COVID, but also dying from it,” said Dr. Volkow. “Therefore, on the one hand, our data show that drug users are more vulnerable to COVID-19 and, on the other hand, they reflect that within this group, Black individuals are even more vulnerable.”
In her presentation, Dr. Volkow drew particular attention to the impact that social surroundings have on these patients and the decisive role they played in terms of vulnerability. “It’s a very complex issue, what with the various factors at play: family, social environment. ... A person living in an at-risk situation can more easily get drugs or even prescription medication, which can also be abused.”
The psychiatrist stressed that when it comes to addictive disorders (and related questions such as prevention, treatment, and social reintegration), one of the most crucial factors has to do with the individual’s social support structures. “The studies also brought to light the role that social interaction has as an inhibitory factor with regard to drug use,” said Dr. Volkow. “And indeed, adequate adherence to treatment requires that the necessary support systems be maintained.”
In the context of the pandemic, this social aspect was also key, especially concerning the high death rate among substance use disorder patients with COVID-19. “There are very significant social determinants, such as the stigma associated with these groups – a stigma that makes these individuals more likely to hesitate to seek out treatment for diseases that may be starting to take hold, in this case COVID-19.”
On that note, Dr. Volkow emphasized the importance of treating drug addicts as though they had a chronic disease in need of treatment. “In fact, the prevalence of pathologies such as hypertension, diabetes, cancer, and dementia is much higher in these individuals than in the general population,” she said. “However, this isn’t widely known. The data reflect that not only the prevalence of these diseases, but also the severity of the symptoms, is higher, and this has a lot to do with these individuals’ reticence when it comes to reaching out for medical care. Added to that are the effects of their economic situation and other factors, such as stress (which can trigger a relapse), lack of ready access to medications, and limited access to community support or other sources of social connection.”
Opioids and COVID-19
As for drug use during the pandemic, Dr. Volkow provided context by mentioning that in the United States, the experts and authorities have spent two decades fighting the epidemic of opioid-related drug overdoses, which has caused many deaths. “And on top of this epidemic – one that we still haven’t been able to get control of – there’s the situation brought about by COVID-19. So, we had to see the consequences of a pandemic crossing paths with an epidemic.”
The United States’s epidemic of overdose deaths began with the overprescribing of opioid painkillers. A second problem the country faces is that many illicit drugs are contaminated with fentanyl, a contamination that has driven a further increase in deaths.
“In the United States, fentanyl is everywhere,” said Dr. Volkow. “And what’s more concerning: almost a third of this fentanyl comes in pills that are sold as benzodiazepines. With this comes a high risk for overdose. In line with this, we saw overdose deaths among adolescents nearly double in 1 year, an increase which is likely related to these contaminated pills. It’s a risk that’s just below the surface. We’ve got to be vigilant, because this phenomenon is expected to eventually spread to Europe. After all, these pills are very cheap, hence the rapid increase in their use.”
To provide figures on drug use and overdose deaths since the beginning of the pandemic, Dr. Volkow referred to COVID-19 data provided by the National Center for Health Statistics (NCHS) at the U.S. Centers for Disease Control and Prevention. The data indicate that of the 70,630 drug overdose deaths that occurred in 2019, 49,860 involved opioids (whether prescribed or illicit). “And these numbers have continued to rise, so much so that the current situation can be classified as catastrophic – because this increase has been even greater during the pandemic due to the rise in the use of all drugs,” said Dr. Volkow.
Dr. Volkow referred to an NCHS study that looked at the period between September 2020 and September 2021, finding a 15.9% increase in the number of drug overdose deaths. A breakdown of these data shows that the highest percentage corresponds to deaths from “other psychostimulants,” primarily methamphetamines (35.7%). This category is followed by deaths involving synthetic opioids, mostly illicit fentanyl (25.8%), and deaths from cocaine (13.4%).
“These figures indicate that, for the first time in history, the United States had over 100,000 overdose deaths in 1 year,” said Dr. Volkow. “This is something that has never happened. We can only infer that the pandemic had a hand in making the overdose crisis even worse than it already was.”
As Dr. Volkow explained, policies related to handling overdoses and prescribing medications have been changed in the context of COVID-19. Addiction treatment consequently has been provided through a larger number of telehealth services, and measures such as greater access to treatment for comorbid conditions, expanded access to behavioral treatments, and the establishment of mental health hotlines have been undertaken.
Children’s cognitive development
Dr. Volkow also spoke about another of NIDA’s current subjects of research: The role that damage or compromise from drugs has on the neural circuits involved in reinforcement systems. “It’s important that we make people aware of the significance of what’s at play there, because the greatest damage that can be inflicted on the brain comes from using any type of drug during adolescence. In these cases, the likelihood of having an addictive disorder as an adult significantly increases.”
Within this framework, her team has also investigated the impact of the pandemic on the cognitive development of infants under 1 year of age. One of these studies was a pilot program in which pregnant women participated. “We found that children born during the pandemic (n = 112) had lower cognitive development than those born before January 2019 (n = 554).”
“None of the mothers or children in the study had been infected with SARS-CoV-2,” Dr. Volkow explained. “But the results clearly reflect the negative effect of the circumstances brought about by the pandemic, especially the high level of stress, the isolation, and the lack of stimuli. Another study, currently in preprint, is based on imaging. It analyzed the impact on myelination in children not exposed to COVID-19 but born during the pandemic, compared with pre-pandemic infants. The data showed significantly reduced areas of myelin development (P < .05) in those born after 2019. And the researchers didn’t find significant differences in gestation duration or birth weight.”
The longitudinal characteristics of these studies will let us see whether a change in these individuals’ social circumstances over time also brings to light cognitive changes, even the recovery of lost or underdeveloped cognitive processes, Dr. Volkow concluded.
Dr. Volkow has disclosed no relevant financial relationships.
A version of this article first appeared on Medscape.com.
MADRID, Spain – People with substance use disorders are at greater risk of becoming infected with COVID-19, and of dying from it, compared with the general population. Such are the findings of a line of research led by Mexican psychiatrist Nora Volkow, MD, director of the U.S. National Institute on Drug Abuse (NIDA).
A pioneer in the use of brain imaging to investigate how substance use affects brain functions and one of Time magazine’s “Top 100 People Who Shape Our World,” she led the Inaugural Conference at the XXXI Congress of the Spanish Society of Clinical Pharmacology, “Drugs and Actions During the Pandemic.” Dr. Volkow spoke about the effects that the current health crisis has had on drug use and the social challenges that arose from lockdowns. She also presented and discussed the results of studies being conducted at NIDA that “are aimed at reviewing what we’ve learned and what the consequences of COVID-19 have been with respect to substance abuse disorder.”
As Dr. Volkow pointed out, drugs affect much more than just the brain. “In particular, the heart, the lungs, the immune system – all of these are significantly harmed by substances like tobacco, alcohol, cocaine, and methamphetamine. This is why, since the beginning of the pandemic, we’ve been worried about seeing what consequences SARS-CoV-2 was going to have on users of these substances, especially in light of the great toll this disease takes on the respiratory system and the vascular system.”
Pulmonary ‘predisposition’ and race
Dr. Volkow and her team launched several studies to get a more thorough understanding of the link between substance abuse disorders and poor COVID-19 prognoses. One of them was based on analyses from electronic health records in the United States. The purpose was to determine COVID-19 risk and outcomes in patients based on the type of use disorder (for example, alcohol, opioid, cannabis, cocaine).
“The results showed that regardless of the drug type, all users of these substances had both a higher risk of being infected by COVID-19 and a higher death rate in comparison with the rest of the population,” said Dr. Volkow. “This surprised us, because there’s no evidence that drugs themselves make the virus more infectious. However, what the results did clearly indicate to us was that using these substances was associated with behavior that put these individuals at a greater risk for infection,” Dr. Volkow explained.
“In addition,” she continued, “using, for example, tobacco or cannabis causes inflammation in the lungs. It seems that, as a result, they end up being more vulnerable to infection by COVID. And this has consequences, above all, in terms of mortality.”
Another finding was that, among patients with substance use disorders, race had the largest effect on COVID risk. “From the very start, we saw that, compared with White individuals, Black individuals showed a much higher risk of not only getting COVID, but also dying from it,” said Dr. Volkow. “Therefore, on the one hand, our data show that drug users are more vulnerable to COVID-19 and, on the other hand, they reflect that within this group, Black individuals are even more vulnerable.”
In her presentation, Dr. Volkow drew particular attention to the impact that social surroundings have on these patients and the decisive role they played in terms of vulnerability. “It’s a very complex issue, what with the various factors at play: family, social environment. ... A person living in an at-risk situation can more easily get drugs or even prescription medication, which can also be abused.”
The psychiatrist stressed that when it comes to addictive disorders (and related questions such as prevention, treatment, and social reintegration), one of the most crucial factors has to do with the individual’s social support structures. “The studies also brought to light the role that social interaction has as an inhibitory factor with regard to drug use,” said Dr. Volkow. “And indeed, adequate adherence to treatment requires that the necessary support systems be maintained.”
In the context of the pandemic, this social aspect was also key, especially concerning the high death rate among substance use disorder patients with COVID-19. “There are very significant social determinants, such as the stigma associated with these groups – a stigma that makes these individuals more likely to hesitate to seek out treatment for diseases that may be starting to take hold, in this case COVID-19.”
On that note, Dr. Volkow emphasized the importance of treating addiction as a chronic disease in need of treatment. “In fact, the prevalence of pathologies such as hypertension, diabetes, cancer, and dementia is much higher in these individuals than in the general population,” she said. “However, this isn’t widely known. The data reflect that not only the prevalence of these diseases, but also the severity of the symptoms, is higher, and this has a lot to do with these individuals’ reticence when it comes to reaching out for medical care. Added to that are the effects of their economic situation and other factors, such as stress (which can trigger a relapse), lack of ready access to medications, and limited access to community support or other sources of social connection.”
Opioids and COVID-19
As for drug use during the pandemic, Dr. Volkow provided context by mentioning that in the United States, the experts and authorities have spent two decades fighting the epidemic of opioid-related drug overdoses, which has caused many deaths. “And on top of this epidemic – one that we still haven’t been able to get control of – there’s the situation brought about by COVID-19. So, we had to see the consequences of a pandemic crossing paths with an epidemic.”
The United States’ epidemic of overdose deaths started with overprescribed opioid painkillers. Compounding the problem, many drugs are now contaminated with fentanyl, and this contamination has driven deaths even higher.
“In the United States, fentanyl is everywhere,” said Dr. Volkow. “And what’s more concerning: almost a third of this fentanyl comes in pills that are sold as benzodiazepines. With this comes a high risk for overdose. In line with this, we saw overdose deaths among adolescents nearly double in 1 year, an increase which is likely related to these contaminated pills. It’s a risk that’s just below the surface. We’ve got to be vigilant, because this phenomenon is expected to eventually spread to Europe. After all, these pills are very cheap, hence the rapid increase in their use.”
To provide figures on drug use and overdose deaths since the beginning of the pandemic, Dr. Volkow referred to data from the National Center for Health Statistics (NCHS) at the U.S. Centers for Disease Control and Prevention. The data indicate that of the 70,630 drug overdose deaths that occurred in 2019, 49,860 involved opioids (whether prescribed or illicit). “And these numbers have continued to rise, so much so that the current situation can be classified as catastrophic – because this increase has been even greater during the pandemic due to the rise in the use of all drugs,” said Dr. Volkow.
Dr. Volkow referred to an NCHS study that looked at the period between September 2020 and September 2021, finding a 15.9% increase in the number of drug overdose deaths. A breakdown of these data shows that the largest increase was in deaths involving “other psychostimulants,” primarily methamphetamine (up 35.7%), followed by deaths involving synthetic opioids, mostly illicit fentanyl (up 25.8%), and deaths involving cocaine (up 13.4%).
“These figures indicate that, for the first time in history, the United States had over 100,000 overdose deaths in 1 year,” said Dr. Volkow. “This is something that has never happened. We can only infer that the pandemic had a hand in making the overdose crisis even worse than it already was.”
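The year-over-year changes quoted in these figures reduce to a simple percent-change calculation, which readers can use to sanity-check reporting like this. A minimal sketch in Python; the example counts are illustrative, not the actual NCHS totals:

```python
def pct_change(old: float, new: float) -> float:
    """Change from old to new, expressed as a percentage of old."""
    return (new - old) / old * 100

# Illustrative counts only (not the NCHS data): a 12-month total rising
# from 90,000 to 104,310 overdose deaths is a 15.9% increase, matching
# the magnitude of change reported for September 2020-September 2021.
increase = pct_change(90_000, 104_310)  # ≈ 15.9
```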
As Dr. Volkow explained, policies related to handling overdoses and prescribing medications have been changed in the context of COVID-19. Addiction treatment consequently has been provided through a larger number of telehealth services, and measures such as greater access to treatment for comorbid conditions, expanded access to behavioral treatments, and the establishment of mental health hotlines have been undertaken.
Children’s cognitive development
Dr. Volkow also spoke about another of NIDA’s current research areas: the damage that drugs inflict on the neural circuits involved in reinforcement systems. “It’s important that we make people aware of the significance of what’s at play there, because the greatest damage that can be inflicted on the brain comes from using any type of drug during adolescence. In these cases, the likelihood of having an addictive disorder as an adult significantly increases.”
Within this framework, her team has also investigated the impact of the pandemic on the cognitive development of infants under 1 year of age. One of these studies was a pilot program in which pregnant women participated. “We found that children born during the pandemic (n = 112) had lower cognitive development than those born before January 2019 (n = 554).”
“None of the mothers or children in the study had been infected with SARS-CoV-2,” Dr. Volkow explained. “But the results clearly reflect the negative effect of the circumstances brought about by the pandemic, especially the high level of stress, the isolation, and the lack of stimuli. Another study, currently in preprint, is based on imaging. It analyzed the impact on myelination in children not exposed to COVID-19 but born during the pandemic, compared with pre-pandemic infants. The data showed significantly reduced areas of myelin development (P < .05) in those born after 2019. And the researchers didn’t find significant differences in gestation duration or birth weight.”
Because these studies are longitudinal, they will show whether a change in these individuals’ social circumstances over time also brings to light cognitive changes, and even the recovery of lost or underdeveloped cognitive processes, Dr. Volkow concluded.
Dr. Volkow has disclosed no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM ANNUAL MEETING OF SPANISH SOCIETY OF CLINICAL PHARMACOLOGY
COVID-19 again the third-leading cause of U.S. deaths
COVID-19 was the third-leading cause of death in the United States in 2021, behind heart disease and cancer, the Centers for Disease Control and Prevention said April 22.
About 693,000 people died of heart disease in 2021, with 605,000 dying of cancer and 415,000 of COVID, the CDC said, citing provisional data that might be updated later.
Unintentional injuries were the fourth-leading cause of death, increasing to 219,000 in 2021 from 201,000 in 2020. Influenza and pneumonia dropped out of the top 10 leading causes of death and suicide moved into 10th place.
Overall, 3,458,697 deaths were reported in the United States in 2021. The age-adjusted death rate was 841.6 deaths per 100,000 people, an increase of 0.7% from 2020. The 2021 death rate was the highest since 2003, the CDC said.
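The “age-adjusted death rate” quoted above is produced by direct age standardization: each age group’s crude death rate is weighted by that group’s share of a fixed standard population (NCHS uses the projected year-2000 U.S. population), so that years with different age structures can be compared. A minimal sketch; the age groups, counts, and weights below are made up for illustration, not the actual 2021 data:

```python
def age_adjusted_rate(deaths, population, std_weights):
    """Direct standardization: weight each age group's crude rate
    (per 100,000) by its fixed standard-population share, then sum."""
    rate = 0.0
    for group, count in deaths.items():
        crude = count / population[group] * 100_000
        rate += crude * std_weights[group]
    return rate

# Hypothetical two-group example (not real U.S. figures).
deaths = {"<65": 50_000, "65+": 200_000}
population = {"<65": 20_000_000, "65+": 5_000_000}
std_weights = {"<65": 0.85, "65+": 0.15}  # standard-population shares; sum to 1

rate = age_adjusted_rate(deaths, population, std_weights)  # ≈ 812.5 per 100,000
```

Because the weights come from a fixed standard population, a year-over-year comparison of age-adjusted rates (such as the 0.7% rise from 2020 to 2021) reflects changes in mortality rather than changes in the population’s age mix.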
The overall number of COVID deaths in 2021 increased around 20% over 2020, when around 384,000 people died from the virus, the CDC said. COVID deaths in 2021 peaked for the weeks ending Jan. 16 and Sept. 11, following holiday periods.
The demographics of COVID mortality changed slightly, the CDC said in a second report.
Black people accounted for 13.3% of COVID deaths in 2021 and Hispanic people for 16.5%, both down several percentage points from 2020, the CDC said. Asian people made up 3.1% of COVID deaths in 2021, a drop from 3.6% in 2020. White people accounted for 65.2% of COVID deaths in 2021, an increase from 59.6% in 2020.
Non-Hispanic American Indian/Alaskan Native and non-Hispanic Black or African American had the highest overall death rates for COVID, the CDC said.
Breaking the data down by age, the number of COVID deaths among people aged 75 years and older dropped to 178,000 in 2021 from around 207,000 in 2020. The numbers went up in other age groups. Among people aged 65-74, about 101,000 died of COVID in 2021, up from around 76,000 in 2020.
“The results of both studies highlight the need for greater effort to implement effective interventions,” the CDC said in a statement. “We must work to ensure equal treatment in all communities in proportion to their need for effective interventions that can prevent excess COVID-19 deaths.”
Since the pandemic began, about 991,000 people in the United States have died from COVID-related causes, the most among all nations in the world.
A version of this article first appeared on WebMD.com.
FROM THE MMWR
Experts decry CDC’s long pause on neglected tropical disease testing
The Centers for Disease Control and Prevention has long been the premier reference lab for the United States and, for some diseases, internationally.
In September 2021, the CDC stated on its website that it would stop testing for parasites, herpesvirus encephalitis, human herpesvirus 6 and 7, Epstein-Barr virus, and other viruses, saying, “We are working diligently to implement laboratory system improvements.”
At the time, the CDC said testing would be halted only for a few months.
In response to a query from this news organization, a CDC spokesperson replied, “While at present we are unable to share a detailed timeline, our highest priority is to resume high-quality testing operations in a phased, prioritized approach as soon as possible and to offer the same tests that were available before the pause.”
Several global health clinicians told this news organization that they were not aware of the halt and that they are now uncertain about the specific diagnosis and best treatment for some patients. Other patients have been lost to follow-up.
In response, a group of tropical disease specialists who focus on neglected tropical diseases (NTDs) wrote an editorial, “Neglected Testing for Neglected Tropical Diseases at the CDC,” which recently appeared in the American Journal of Tropical Medicine and Hygiene (AJTMH).
NTDs are caused by viruses, bacteria, and parasites. They include leprosy and parasitic worm infections; many are disfiguring, such as filariasis (which causes the hugely swollen extremities of elephantiasis) and onchocerciasis (river blindness). Their common denominator is that they are diseases of poverty, primarily in Africa, Asia, and Latin America, so they garner little attention from “first world” countries.
The loss of testing for two devastating parasitic diseases – Chagas disease and leishmaniasis – was particularly significant. Few other labs in the United States test for these, and the tests can be expensive and of variable quality, experts said.
Norman Beatty, MD, a global health physician at the University of Florida, told this news organization, “Chagas confirmatory testing is only available at the CDC and is the most reliable testing we have access to in the United States. Leishmania species identification is also only available at the CDC and is important in determining which antiparasitic medications we will use.”
Chagas disease is caused by the parasite Trypanosoma cruzi and is transmitted by triatomine bugs, also known as kissing bugs. Chagas is a major cause of an enlarged heart and congestive heart failure, as well as a dramatically enlarged esophagus or colon.
Prior to the cuts and before COVID-19, the CDC reported that it ran 10,000 to 15,000 tests for parasitic diseases annually. Testing requests declined during COVID. In 2021, the agency ran 1,003 tests for Chagas.
Dr. Beatty said that he first became aware of the CDC’s testing cuts last fall when he sought care for a patient. He was first told the delay would be 2-3 weeks, then another 2-3 weeks. It’s now been 7 months, and only three tests have been resumed.
Dr. Beatty added that for Chagas disease in particular, there is urgency in testing because cardiac complications can be life-threatening. He said that “a lot of these diseases can be considered rare, but they also have a tremendous ability to cause morbidity and mortality.”
Leishmania infections are also serious. Following the bite of an infected sandfly, they can cause disfiguring skin infections, but, more importantly, they can affect the liver, spleen, and bone marrow. Dr. Beatty said that since testing was dropped at the CDC, some colleagues had to send specimens outside of the country.
Dr. Beatty emphasized that the cuts in testing at the CDC highlight disparities in our society. “There are other commercial reference laboratories who may have some of these tests available, but the vast majority of people who suffer from diseases are underserved and vulnerable. [My patients] most definitely will not have access to advanced testing commercial laboratories,” Dr. Beatty said. Those laboratories include ARUP (Associated Regional and University Pathologists) Laboratories, Quest Diagnostics, and LabCorp Diagnostics. But for some parasitic infections, there will simply be no testing, and patients will not receive appropriate therapy.
The CDC’s website says, “USAID and CDC work together on a shared agenda to advance global progress towards the control and elimination of NTDs that can be addressed with preventive chemotherapy. ... CDC has strong working relationships with WHO, regional reference laboratories/bodies, [and] national NTD programs ... working with these partners through the provision of unique laboratory, diagnostic, and epidemiological technical assistance.”
The WHO Roadmap for 2030 aims to prevent and control many NTDs, in part by “providing new interventions and effective, standardized, and affordable diagnostics.” Last year, the CDC said that they “will continue working with WHO and other global partners to meet the established goals.”
But testing for a number of NTDs is not currently available at the CDC. In response to questions from this news organization, a CDC spokesperson said the agency “supports the development of country capacity for NTD testing required ... but does not perform testing related to the WHO Roadmap.”
A group of CDC officials wrote an editorial response that was published in AJTMH, saying the agency has “three main priorities: reducing parasitic disease-related death, illness, and disability in the United States; reducing the global burden of malaria; and eliminating targeted neglected tropical diseases.”
In response to this news organization’s interview request, a CDC spokesperson wrote, “CDC is unwavering in our commitment to provide the highest quality laboratory diagnostic services for parasitic diseases. We understand the concerns expressed in the editorial and the challenges the pause in testing for parasitic diseases presents for health care providers, particularly those treating people at elevated risk for parasitic diseases.”
Michael Reich, PhD, Dr. Beatty’s co-author, is an international health policy expert at Harvard. He and the physicians had approached CDC about the elimination of services. He said in an interview, “We’re still unable to get clear responses except for something along the lines of, ‘We are working on it. It is complicated. It takes time. We’re doing our best.’”
Dr. Reich added, “For me, this raises troubling issues both of transparency and accountability – transparency about what is going on and what the problems are, and accountability in terms of who’s being held responsible for the closures and the impacts on both public health and patient treatment.”
Dr. Beatty concluded, “I think the goal of our group was to bring more awareness to the importance of having a national laboratory that can service all people, even the most underserved and vulnerable populations.” He added, “Chagas disease is a disease of inequity in Latin Americans. Without having access to an appropriate laboratory such as the CDC, we would be taking a backwards approach to tackle neglected tropical diseases in our country and worldwide.”
Dr. Beatty and Dr. Reich report no relevant financial relationships.
A version of this article first appeared on Medscape.com.
At the time, the CDC said testing would be halted only for a few months.
In response to a query from this news organization, a CDC spokesperson replied, “While at present we are unable to share a detailed timeline, our highest priority is to resume high-quality testing operations in a phased, prioritized approach as soon as possible and to offer the same tests that were available before the pause.”
Several global health clinicians told this news organization that they were not aware of the halt and that they are now uncertain about the specific diagnosis and best treatment for some patients. Other patients have been lost to follow-up.
In response, a group of tropical disease specialists who focus on neglected tropical diseases (NTDs) wrote an editorial, “Neglected Testing for Neglected Tropical Diseases at the CDC,” which recently appeared in the American Journal of Tropical Medicine and Hygiene (AJTMH).
NTDs are caused by viruses, bacteria, and parasites. They include leprosy and worms; many such diseases are disfiguring, such as filariasis (which causes the hugely swollen extremities of elephantiasis) and onchocerciasis (river blindness). They also include some viral and bacterial diseases. Their common denominator is that they are diseases of poverty, primarily in Africa, Asia, and Latin America, so they garner little attention from “first world” countries.
The loss of testing for two devastating parasites – Chagas and Leishmania – was particularly significant. Few other labs in the United States test for these, and the tests can be expensive and of variable quality, experts said.
Norman Beatty, MD, a global health physician at the University of Florida, told this news organization, “Chagas confirmatory testing is only available at the CDC and is the most reliable testing we have access to in the United States. Leishmania species identification is also only available at the CDC and is important in determining which antiparasitic medications we will use.”
Chagas disease is caused by the parasite Trypanosoma cruzi and is transmitted by triatomine bugs, also known as kissing bugs. Chagas is a major cause of an enlarged heart and congestive heart failure, as well as a dramatically enlarged esophagus or colon.
Prior to the cuts and before COVID-19, the CDC reported that it ran 10,000-15,000 tests for parasitic diseases annually. Testing requests declined during the pandemic; in 2021, the agency ran 1,003 tests for Chagas.
Dr. Beatty said that he first became aware of the CDC’s testing cuts last fall when he sought care for a patient. He was first told the delay would be 2-3 weeks, then another 2-3 weeks. It’s now been 7 months, and only three tests have been resumed.
Dr. Beatty added that for Chagas disease in particular, there is urgency in testing because cardiac complications can be life-threatening. He said that “a lot of these diseases can be considered rare, but they also have a tremendous ability to cause morbidity and mortality.”
Leishmania infections are also serious. Following the bite of an infected sandfly, they can cause disfiguring skin infections, but, more importantly, they can affect the liver, spleen, and bone marrow. Dr. Beatty said that since testing was dropped at the CDC, some colleagues had to send specimens outside of the country.
Dr. Beatty emphasized that the cuts in testing at the CDC highlight disparities in our society. “There are other commercial reference laboratories who may have some of these tests available, but the vast majority of people who suffer from diseases are underserved and vulnerable. [My patients] most definitely will not have access to advanced testing commercial laboratories,” Dr. Beatty said. Those laboratories include Associated Regional University Pathologists laboratories, Quest Diagnostics, and LabCorp Diagnostics. But for some parasitic infections, there will simply be no testing, and patients will not receive appropriate therapy.
The CDC’s website says, “USAID and CDC work together on a shared agenda to advance global progress towards the control and elimination of NTDs that can be addressed with preventive chemotherapy. ... CDC has strong working relationships with WHO, regional reference laboratories/bodies, [and] national NTD programs ... working with these partners through the provision of unique laboratory, diagnostic, and epidemiological technical assistance.”
The WHO Roadmap for 2030 aims to prevent and control many NTDs, in part by “providing new interventions and effective, standardized, and affordable diagnostics.” Last year, the CDC said that they “will continue working with WHO and other global partners to meet the established goals.”
But testing for a number of NTDs is not currently available at the CDC. In response to questions from this news organization, a CDC spokesperson said the agency “supports the development of country capacity for NTD testing required ... but does not perform testing related to the WHO Roadmap.”
A group of CDC officials wrote an editorial response that was published in AJTMH, saying the agency has “three main priorities: reducing parasitic disease-related death, illness, and disability in the United States; reducing the global burden of malaria; and eliminating targeted neglected tropical diseases.”
In response to this news organization’s interview request, a CDC spokesperson wrote, “CDC is unwavering in our commitment to provide the highest quality laboratory diagnostic services for parasitic diseases. We understand the concerns expressed in the editorial and the challenges the pause in testing for parasitic diseases presents for health care providers, particularly those treating people at elevated risk for parasitic diseases.”
Michael Reich, PhD, Dr. Beatty’s co-author, is an international health policy expert at Harvard. He and the physicians had approached CDC about the elimination of services. He said in an interview, “We’re still unable to get clear responses except for something along the lines of, ‘We are working on it. It is complicated. It takes time. We’re doing our best.’”
Dr. Reich added, “For me, this raises troubling issues both of transparency and accountability – transparency about what is going on and what the problems are, and accountability in terms of who’s being held responsible for the closures and the impacts on both public health and patient treatment.”
Dr. Beatty concluded, “I think the goal of our group was to bring more awareness to the importance of having a national laboratory that can service all people, even the most underserved and vulnerable populations.” He added, “Chagas disease is a disease of inequity in Latin Americans. Without having access to an appropriate laboratory such as the CDC, we would be taking a backwards approach to tackle neglected tropical diseases in our country and worldwide.”
Dr. Beatty and Dr. Reich report no relevant financial relationships.
A version of this article first appeared on Medscape.com.
Three in four U.S. doctors are employed by hospitals, corporate entities: Report
Marcus Welby, MD, was a fictitious hometown doctor featured in a TV drama with the same name that was shown on ABC from 1969 to 1976. Played by actor Robert Young, Dr. Welby treated his patients through their bouts with breast cancer, impotence, and Alzheimer’s disease.
“COVID-19 drove physicians to leave private practice for employment at an even more rapid pace than we’ve seen in recent years, and these trends continued to accelerate in 2021,” Kelly Kenney, chief executive officer of Physicians Advocacy Institute, said in an announcement. “This study underscores the fact that physicians across the nation are facing severe burnout and strain. The pressures of the pandemic forced many independent physicians to make difficult decisions to sell their practices to hospitals, health insurers, or other corporate entities.”
Corporate entities are defined in the report as health insurers, private equity firms, and umbrella corporate entities that own multiple physician practices.
“The pandemic has been just brutal ... for nurses and physicians who are caring for patients,” Ms. Kenney told this news organization. “Between the financial stress that the pandemic certainly had on practices, because they certainly had little revenue for a while, and then also we know that the stress that physicians have felt mentally, you can’t overstate that.”
More than half of physician practices owned by hospitals, corporate entities
The Physicians Advocacy Institute has tracked changes in physician employment consistently since 2012, said Ms. Kenney. In 2012, 25% of physicians were employed; that has jumped to nearly 74%, which means the past decade has brought a world of change to the nation’s physicians.
“These are essentially small-business people ... and they were primarily trained to care for patients,” said Ms. Kenney, referring to physicians in independent practice. Still, she understands why physicians would seek employment in the face of “the crushing kind of pressure of having to deal with 20 different payers, pay overhead, and keep the lights on [at the practice].”
According to the report, 108,700 physicians left independent practice to enter employment with hospitals or other corporate entities in the 3-year period that ended in 2021. Seventy-six percent of that shift to employed status among physicians has occurred since the start of the COVID-19 pandemic in March 2020.
From a regional perspective, the report found continued growth among employed physicians across all U.S. regions in the last half of 2020. Hospital- or corporate-owned physician practices increased between 28% and 44%, while the percentage of hospital- or corporate-employed physicians increased between 13% and 24%.
Eighty percent of physicians in the Midwest are employed by hospitals or corporations, which leads the rest of the country, per the report. That’s followed by the Northeast, the West, and the South. Overall, the number of physicians working for such entities increased in all regions.
The report revealed that physician employment by corporations such as health insurers and venture capital firms grew from 92,400 in January 2019 to 142,900 in January 2022.
Hospitals and corporate entities acquired 36,200 physician practices (representing 38% growth) between 2019 and 2021, and the majority of these moves occurred since the pandemic’s start, according to the report.
Value-based care, venture capital firms driving change
Ms. Kenney pointed to value-based care as driving much of this activity by hospitals. “We all embrace [value-based payment], because we need to get a handle on cost, and we want better quality [but] those trends tend to favor integrated systems and systems that can handle a lot of risk and populations of patients.”
Still, the moves by private equity firms and health insurers in this space are relatively new, said Ms. Kenney, who added that her organization started tracking this trend 3 years ago. She pointed to a “marked acceleration” in the trend toward employing physicians and the sale of practices in the 18 months following the pandemic’s start; nonhospital corporate entities drove that steep increase, she said.
Ms. Kenney calls for further study and “guardrails” to respond to “that force in the health care system,” referring to the acquisition of practices by entities such as private equity firms. “Are these big [health care] systems going to continue to see patients in underserved areas, rural areas, and Medicaid patients if it doesn’t make sense financially to do so?
“That’s what we’re teeing up with this research,” added Ms. Kenney. “We are providing information that starts some conversations around what we might want to think about in terms of policies to ensure that we don’t impact patients’ access to care.”
The Physicians Advocacy Institute represents more than 170,000 physicians and medical students. Avalere Health used the IQVIA OneKey database for the report. The researchers studied the 3-year period from Jan. 1, 2019, to Jan. 1, 2022.
A version of this article first appeared on Medscape.com.
Icosapent ethyl’s CV mortality benefit magnified in patients with prior MI
In the placebo-controlled REDUCE-IT trial, icosapent ethyl (IPE) was linked to a significant reduction in major adverse cardiovascular events (MACE) when administered on top of LDL cholesterol control, but a new substudy suggests a greater relative advantage in those with a prior myocardial infarction.
In the study as a whole, IPE (Vascepa, Amarin) was tied to a 20% reduction in CV death (hazard ratio, 0.80; P = .03), but the reduction climbed to 30% (HR, 0.70; P = .01) in the subgroup with a prior MI, reported a multinational team of investigators led by Prakriti Gaba, MD, a cardiologist at Brigham and Women’s Hospital, Boston.
On the basis of these data, “the imperative to treat patients who have a history of prior MI is even stronger,” said Deepak L. Bhatt, MD, executive director of interventional cardiovascular programs at Brigham and Women’s Hospital.
The principal investigator of REDUCE-IT and a coauthor of this subanalysis, Dr. Bhatt said in an interview, “The significant reduction in cardiovascular mortality, as well as sudden cardiac death and cardiac arrest, really should make physicians strongly consider this therapy in eligible patients.”
The main results of the REDUCE-IT trial were published more than 3 years ago. It enrolled patients with established CV disease or diabetes with additional risk factors who were on a statin and had elevated triglyceride (TG) levels.
A 25% reduction in MACE reported
In those randomized to IPE, there was about a 25% reduction in the primary composite MACE outcome of cardiovascular death, nonfatal MI, nonfatal stroke, revascularization, and unstable angina relative to placebo. About the same relative reduction was achieved in the key secondary endpoint of CV death, nonfatal MI, and nonfatal stroke.
Some guidelines have been changed on the basis of these data. The National Lipid Association, for example, conferred a class 1 recommendation for adding IPE to other appropriate lipid-reducing therapies in any individual 45 years of age or older with atherosclerotic cardiovascular disease.
This new substudy (J Am Coll Cardiol. 2022 Apr 25; doi: 10.1016/j.jacc.2022.02.035) is likely to be influential for those guidelines not yet revised. In the substudy of the prior-MI patients, the relative benefit of IPE for the primary and secondary MACE endpoints was of similar magnitude to that in the overall study population, but events occurred more frequently in the prior-MI subgroup, greatly increasing the statistical power of the advantage.
More MACE in prior MI patients
For example, the primary outcome was observed in 22% of the placebo patients in the overall REDUCE-IT analysis but in 26.1% of those with prior MI, so even though the relative risk reduction remained at about 25%, the statistical strength was a hundred-fold greater (P = .00001 vs. P < .001).
For the key secondary composite MACE endpoint, the relative reduction for those with a prior MI was modestly greater than in the study as a whole (HR, 0.71 vs. HR, 0.75), but the statistical strength was again magnified in those with a prior MI (P = .00006 vs. P < .001). In those with a prior MI, the advantage of receiving IPE was similar whether or not there had been a prior revascularization.
The 20% lower rate of all-cause mortality among prior MI patients receiving IPE rather than placebo fell just short of statistical significance (HR, 0.80; P = .054). Ischemic events on IPE were reduced by 35% (P = .0000001) and recurrent MI was reduced by 34% (P = .00009).
In the substudy as well as in the REDUCE-IT trial overall, IPE was well tolerated. A slightly higher rate of atrial fibrillation was reported in both.
The REDUCE-IT substudy evaluated 3,693 patients with a history of MI, representing 45% of the 8,179 patients randomized.
IPE, an ethyl ester of the omega-3 polyunsaturated fatty acid eicosapentaenoic acid, initially attracted attention for its ability to reduce elevated TG. It was hoped this would reduce residual risk in patients on maximally reduced LDL cholesterol. However, it is suspected that IPE exerts benefits additive to or independent of TG lowering, according to the authors of the REDUCE-IT substudy. These include attenuation of the inflammatory response, release of nitric oxide, and effects that support stabilization of atherosclerotic plaque.
The investigators reported that the pattern of response supports this theory. In the newly reported substudy, the primary event curves that included nonthrombotic events separated at about 1 year, while the curves for CV death and sudden cardiac death separated even later.
This delay might be explained “by the slow but steady reduction in plaque volume, mitigation of inflammation, improvements in endothelial function, and membrane stabilization,” according to the authors, who cited studies suggesting each of these effects might not be wholly dependent on TG reductions alone.
Prior TG-lowering studies disappointing
In fact, several studies evaluating other strategies for TG reductions have been disappointing, according to an accompanying editorial (J Am Coll Cardiol. 2022 Apr 25; doi: 10.1016/j.jacc.2022.03.001). For example, the STRENGTH trial did not show clinical benefits despite a slightly greater reduction in TGs than that shown in REDUCE-IT (19% reduction vs. 18.3%).
Overall, the REDUCE-IT trial and the prior-MI REDUCE-IT substudy show that there is targetable residual risk in high-risk patients on statin therapy. One of the authors of the editorial that accompanied the prior-MI substudy of REDUCE-IT, William E. Boden, MD, professor of medicine, Boston University, emphasized this point. On the basis of REDUCE-IT, he said he believes that IPE should be considered to have broad indications as an adjunctive treatment to other lipid-lowering strategies.
“My practice centers on optimizing secondary prevention in high-risk patients who have elevated TG levels despite well-controlled LDL levels on statins, ezetimibe, or even PCSK-9 [proprotein convertase subtilisin/kexin type 9] inhibitors,” Dr. Boden said in an interview. Patients with diabetes are notorious for presenting with this profile of dyslipidemia, but he added that “even nondiabetics with prior MI, acute coronary syndrome, or revascularization will benefit from the addition of IPE to high-potency statins.”
Although the American Heart Association and the American College of Cardiology have not yet updated their guidelines to include IPE, Dr. Boden pointed out that the European Society of Cardiology, the Canadian Cardiovascular Society, and the American Diabetes Association have.
Dr. Bhatt added that there is a clear message from REDUCE-IT that IPE addresses residual risk.
Targeting the subgroup of high-risk patients with elevated TGs “is easy” because they are so readily identifiable, according to Dr. Bhatt, but he said IPE should be used for any patient who meets the entry criteria used for REDUCE-IT.
“The overall results of REDUCE-IT were robustly positive, so I wouldn’t just use it in patients with prior MI,” Dr. Bhatt said.
Dr. Bhatt reports financial relationships with more than 20 pharmaceutical companies, including Amarin, which provided funding for this trial. Dr. Boden reports no potential conflicts of interest.
In the placebo-controlled REDUCE-IT trial, icosapent ethyl (IPE) was linked to a significant reduction in major adverse cardiovascular events (MACE) when administered on top of LDL cholesterol control, but a new substudy suggests a greater relative advantage in those with a prior myocardial infarction.
In the study as a whole, IPE (Vascepa, Amarin) was tied to a 20% reduction in CV death (hazard ratio, 0.80; P = .03), but it climbed to a 30% reduction (HR, 0.70; P = .01) in the subgroup with a prior MI, reported a multinational team of investigators led by Prakriti Gaba, MD, a cardiologist at Brigham and Women’s Hospital, Boston.
On the basis of these data, “the imperative to treat patients who have a history of prior MI is even stronger,” said Deepak L. Bhatt, MD, executive director of interventional cardiovascular programs at Brigham and Women’s Hospital.
The principal investigator of REDUCE-IT and a coauthor of this subanalysis, Dr. Bhatt said in an interview, “The significant reduction in cardiovascular mortality, as well as sudden cardiac death and cardiac arrest, really should make physicians strongly consider this therapy in eligible patients.”
The main results of the REDUCE-IT trial were published more than 3 years ago. It enrolled patients with established CV disease or diabetes with additional risk factors who were on a statin and had elevated triglyceride (TG) levels.
A 25% reduction in MACE reported
In those randomized to IPE, there was about a 25% reduction in the primary composite MACE outcome of cardiovascular death, nonfatal MI, nonfatal stroke, revascularization, and unstable angina relative to placebo. About the same relative reduction was achieved in the key secondary endpoint of CV death, nonfatal MI, and nonfatal stroke.
Some guidelines have been changed on the basis of these data. The National Lipid Association, for example, conferred a class 1 recommendation for adding IPE to other appropriate lipid-reducing therapies in any individual 45 years of age or older with atherosclerotic cardiovascular disease.
This new substudy (J Am Coll Cardiol. 2022 Apr 25; doi: 10.1016/j.jacc.2022.02.035) is likely to be influential for those guidelines not yet revised. In the substudy of the prior-MI patients, the relative benefit of IPE for the primary and secondary MACE endpoints was of similar magnitude to that in the overall study population, but events occurred more frequently in the prior-MI subgroup, greatly increasing the statistical power of the advantage.
More MACE in prior MI patients
For example, the primary outcome was observed in 22% of the placebo patients in the overall REDUCE-IT analysis but in 26.1% of those with prior MI, so even though the relative risk reduction remained at about 25%, the statistical strength was a hundred-fold greater (P = .00001 vs. P < .001).
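A hedged illustration of why a higher placebo event rate strengthens the result: applying the same roughly 25% relative reduction to a larger baseline risk yields a larger absolute benefit, and more events overall means tighter confidence intervals. Using the event rates quoted above (formula assumed, not taken from the paper):

```python
# Illustrative only: same relative reduction, different baseline risk.
def absolute_risk_reduction(placebo_rate: float, rrr: float) -> float:
    """Absolute risk reduction given a placebo event rate and a relative risk reduction."""
    treated_rate = placebo_rate * (1 - rrr)
    return placebo_rate - treated_rate

overall = absolute_risk_reduction(0.22, 0.25)    # 5.5 percentage points
prior_mi = absolute_risk_reduction(0.261, 0.25)  # ~6.5 percentage points
print(round(overall * 100, 1), round(prior_mi * 100, 1))
```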
For the key secondary composite MACE endpoint, the relative reduction for those with a prior MI was modestly greater than in the study as a whole (HR, 0.71 vs. HR, 0.75), but the statistical strength was again magnified in those with a prior MI (P = .00006 vs. P < .001). In those with a prior MI, the advantage of receiving IPE was similar whether or not there had been a prior revascularization.
The 20% lower rate of all-cause mortality among prior MI patients receiving IPE rather than placebo fell just short of statistical significance (HR, 0.80; P = .054). Ischemic events on IPE were reduced by 35% (P = .0000001) and recurrent MI was reduced by 34% (P = .00009).
In the substudy as well as in the REDUCE-IT trial overall, IPE was well tolerated. A slightly higher rate of atrial fibrillation was reported in both.
The REDUCE-IT substudy evaluated 3,693 patients with a history of MI, representing 45% of the 8,179 patients randomized.
IPE, an ethyl ester of the omega-3 polyunsaturated fatty acid eicosapentaenoic acid, initially attracted attention for its ability to reduce elevated TGs. It was hoped this would reduce residual risk in patients on maximally lowered LDL cholesterol. However, it is suspected that IPE exerts benefits additive to or independent of TG lowering, according to the authors of the REDUCE-IT substudy. These include attenuation of the inflammatory response, release of nitric oxide, and effects that support stabilization of atherosclerotic plaque.
The investigators reported that the pattern of response supports this theory. In the newly reported substudy, the primary event curves that included nonthrombotic events separated at about 1 year, but even curves for CV death and sudden cardiac death were more delayed.
This delay might be explained “by the slow but steady reduction in plaque volume, mitigation of inflammation, improvements in endothelial function, and membrane stabilization,” according to the authors, who cited studies suggesting each of these effects might not be wholly dependent on TG reductions alone.
Prior TG-lowering studies disappointing
In fact, several studies evaluating other strategies for TG reductions have been disappointing, according to an accompanying editorial (J Am Coll Cardiol. 2022 Apr 25; doi: 10.1016/j.jacc.2022.03.001). For example, the STRENGTH trial did not show clinical benefits despite a slightly greater reduction in TGs than that shown in REDUCE-IT (19% reduction vs. 18.3%).
Overall, the REDUCE-IT trial and its prior-MI substudy show that there is targetable residual risk in high-risk patients on statin therapy. One of the authors of the accompanying editorial, William E. Boden, MD, professor of medicine, Boston University, emphasized this point. On the basis of REDUCE-IT, he said he believes that IPE should be considered to have broad indications as an adjunctive treatment to other lipid-lowering strategies.
“My practice centers on optimizing secondary prevention in high-risk patients who have elevated TG levels despite well-controlled LDL levels on statins, ezetimibe, or even PCSK-9 [proprotein convertase subtilisin/kexin type 9] inhibitors,” Dr. Boden said in an interview. Patients with diabetes are notorious for presenting with this profile of dyslipidemia, but he added that “even nondiabetics with prior MI, acute coronary syndrome, or revascularization will benefit from the addition of IPE to high-potency statins.”
Although the American Heart Association and the American College of Cardiology have not yet updated their guidelines to include IPE, Dr. Boden pointed out that the European Society of Cardiology, the Canadian Cardiovascular Society, and the American Diabetes Association have.
Dr. Bhatt added that there is a clear message from REDUCE-IT that IPE addresses residual risk.
Targeting the subgroup of high-risk patients with elevated TGs “is easy” because they are so readily identifiable, according to Dr. Bhatt, but he said it should be used for any patient who meets the entry criteria used for REDUCE-IT.
“The overall results of REDUCE-IT were robustly positive, so I wouldn’t just use it in patients with prior MI,” Dr. Bhatt said.
Dr. Bhatt reports financial relationships with more than 20 pharmaceutical companies, including Amarin, which provided funding for this trial. Dr. Boden reports no potential conflicts of interest.
FROM THE JOURNAL OF THE AMERICAN COLLEGE OF CARDIOLOGY
Wearable sensors deemed reliable for home gait assessment in knee OA
Remote gait assessment in people with knee osteoarthritis using wearable sensors appears reliable but yields results slightly different from those achieved in the laboratory, researchers from Boston University have found.
As reported at the OARSI 2022 World Congress, there was “good to excellent reliability” in repeated measures collected by patients at home while being instructed via video teleconferencing.
Agreement was “moderate to excellent” when the findings were compared with those recorded in the lab, Michael J. Rose of Boston University reported at the congress, sponsored by the Osteoarthritis Research Society International.
“People walked faster and stood up faster in the lab,” Mr. Rose said. “Later we found that the difference in gait speed was statistically significant between the lab and home environment.”
This has been suggested previously and implies that data collected at home may have “greater ecological validity,” he observed.
Accelerated adoption of telehealth
Assessing how well someone walks or can stand from a seated position is a well-known and important part of evaluating knee OA, but these assessments have traditionally been done only in large and expensive gait labs, Mr. Rose said.
Wearable technologies, such as the ones used in the study he presented, could help move these assessments out into the community. This is particularly timely considering the increased adoption of telehealth practices during the COVID-19 pandemic.
To look at the reliability of measurements obtained via wearable sensors versus lab assessments, Mr. Rose and associates set up a substudy within a larger ongoing, single-arm trial looking at the use of digital assessments to measure the efficacy of an exercise intervention in reducing knee pain and improving knee function.
For inclusion in the main trial (n = 60), and hence the substudy (n = 20), participants had to have physician-diagnosed knee OA, be 50 years of age or older, have a body mass index of 40 kg/m2 or lower, be able to walk for at least 20 minutes, and have a score of three or higher on the Knee Injury and Osteoarthritis Outcome Score pain subscale for weight-bearing items.
Acceptance of in-lab versus home testing
The substudy participants (mean age, 70.5 years) all underwent in-person lab visits in which a wearable sensor was placed on each foot and one around the lower back, and the participants were asked to perform walking and chair stand tests. The walking test involved walking 28 meters in two laps of a 7-meter path defined by two cones; the chair stand test involved standing from a seated position as quickly as possible, without using the arms, five times. Each test was performed twice.
Participants were then given the equipment to repeat these tests at home; this included the three sensors, a tablet computer, and a chair and cones. The home assessments were conducted via video conferencing, with the researchers reminding participants how to place the sensors correctly. The walking and chair stand tests were then each performed four times: twice in a row, then a 15-minute rest period, then twice in a row again.
The researchers collected participants’ feedback on questionnaires and Likert scales, which showed an overall positive experience with the remote home visit: the median rating was “very likely” to participate in another home visit, and the time commitment required was rated “very manageable.”
Good correlation found
To determine the correlation and the test-retest reliability of the data obtained during the repeated home tasks, Mr. Rose and collaborators used Pearson correlation (R2) and intraclass correlation coefficients (ICCs).
ICCs for various gait and chair stand variables obtained with the sensors were between 0.85 and 0.96 for test-retest reliability during the remote home visit, and R2 ranged between 0.81 and 0.95. Variables included stance, cadence (steps per minute), step duration and length, speed, and chair stand duration.
Regarding the agreement between the home and lab results, ICCs ranged between 0.63 and 0.90.
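As a hedged sketch of what the R2 reliability figures represent (the data below are invented for illustration, not the study's): a gait variable measured twice for the same participants is correlated across the two trials, and the squared Pearson correlation summarizes test-retest agreement.

```python
# Illustrative only: test-retest agreement for a hypothetical gait-speed variable.
import numpy as np

trial1 = np.array([1.10, 1.25, 0.98, 1.30, 1.05])  # gait speed (m/s), first pass
trial2 = np.array([1.12, 1.22, 1.00, 1.28, 1.07])  # same participants, repeat pass

r = np.corrcoef(trial1, trial2)[0, 1]  # Pearson correlation between repeats
print(round(r**2, 2))  # high R2 indicates strong test-retest reliability
```

Note that an ICC, unlike Pearson correlation, additionally penalizes systematic shifts between repeats, which is why the home-versus-lab ICCs (0.63-0.90) can run lower than within-home reliability.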
“There were some logistical and technological challenges with the approach,” Mr. Rose conceded. “Despite written and verbal instructions, 2 of the 20 participants ended up having gait data that was unusable in the home visit.”
Another limitation is that the study population, while “representative,” included a higher proportion of individuals who identified as White (95%) and female (85%) than the general population, and 90% had a college degree.
“Individuals typically representative of an OA population were generally accepting and willing to participate in remote visits, showing the feasibility of our approach,” Mr. Rose said.
“We need to determine the responsiveness of gait and chair stand outcomes from wearable sensors at home to change over time.”
The study was sponsored by Boston University with funding from Pfizer and Eli Lilly. The researchers used the OPAL inertial sensor (APDM Wearable Technologies) in the study. Mr. Rose made no personal disclosures. Four of his collaborators were employees of Pfizer and one is an employee of Eli Lilly & Company, all with stock or stock options.
FROM OARSI 2022
Secukinumab’s antipsoriatic effects confirmed in U.S. patient population
Secukinumab at a dose of 300 mg improved the signs and symptoms of psoriatic arthritis (PsA) in a U.S. patient population, and those who up-titrated to 300 mg from the lower approved dose of 150 mg also saw benefits obtained at that level.
Researchers conducted a postmarketing trial of secukinumab in patients at U.S. centers, called CHOICE, after it was approved for psoriasis and PsA in 2015 and 2016 based on trials mainly conducted outside of the United States. The American patients in those studies “had a baseline clinical profile indicating harder-to-treat disease than the total study population, including higher body mass index (BMI), higher tender and swollen joint counts, increased prevalence of enthesitis and dactylitis, and more tumor necrosis factor inhibitor (TNFi) experience,” Tien Q. Nguyen, MD, a dermatologist in private practice in Irvine, Calif., and colleagues wrote in the Journal of Rheumatology.
In order to get a better sense of how secukinumab performs in U.S. patients who have not been treated with biologics, the researchers conducted the multicenter, randomized, double-blind, placebo-controlled, parallel-group, phase 4 CHOICE trial. It recruited patients for about 26 months at 67 U.S. centers during 2016-2018. The 258 patients randomized in the study to secukinumab 300 mg (n = 103), secukinumab 150 mg (n = 103), or placebo (n = 52) had a mean time since PsA diagnosis of 3.0-3.9 years across the arms, and mean BMI was greater than 30 kg/m2 in each arm, with dactylitis present in 48% and enthesitis in 73%. About one-third were taking methotrexate at baseline.
At week 16, patients taking secukinumab 300 mg were about 3.5 times more likely to have 20% improvement in American College of Rheumatology response criteria than with placebo (51.5% vs. 23.1%), whereas the response rate with 150 mg was not significantly different from placebo (36.9%). Rates of achieving ACR50 were significantly greater for both 300- and 150-mg doses versus placebo (28.2% and 24.3% vs. 5.8%), but only 300 mg led to a statistically significant difference in the rate of ACR70 responses, compared with placebo (17.5% vs. 1.9%).
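As a hedged note on the arithmetic: "about 3.5 times more likely" is consistent with an odds ratio computed from the ACR20 response rates (51.5% vs. 23.1%), rather than a simple ratio of the rates themselves (which would be about 2.2). This reading is an assumption, sketched below:

```python
# Illustrative only: odds ratio implied by the quoted ACR20 response rates.
def odds_ratio(p_treated: float, p_control: float) -> float:
    """Odds ratio from two response proportions."""
    odds_t = p_treated / (1 - p_treated)
    odds_c = p_control / (1 - p_control)
    return odds_t / odds_c

print(round(odds_ratio(0.515, 0.231), 1))  # ~3.5
```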
In general, efficacy based on ACR20/50/70 responses and either remission or low disease activity on the Disease Activity in Psoriatic Arthritis index was lower among patients with fewer than 10 tender joints and fewer than 10 swollen joints at baseline. Methotrexate use at baseline did not affect ACR20 rates at week 16 in patients taking secukinumab, but the effect of methotrexate on ACR20 rates was noticeable among placebo-treated patients (38.9% vs. 14.7%). Enthesitis resolved significantly more often among patients on secukinumab, and more patients on secukinumab also had their dactylitis resolve, but that difference was not statistically significant.
Among patients with psoriasis affecting more than 3% of their body surface area, those taking secukinumab achieved 75%, 90%, and 100% skin lesion clearance on the Psoriasis Area and Severity Index (PASI) at higher rates than did patients taking placebo.
Patients who were up-titrated from secukinumab 150 mg to 300 mg after week 16, during the trial's second treatment period, achieved ACR20/50/70 responses more often by week 52: in this subset, ACR20 rates rose from 2.4% to 65.9%, ACR50 from 0% to 34.1%, and ACR70 from 0% to 12.2%. Patients who switched from placebo also saw these response rates rise through week 52. However, patients with BMI above 30 kg/m2 had numerically lower ACR50, ACR70, and PASI response rates at week 52.
The researchers noted that the response rates observed in CHOICE were lower than for the pivotal trials used for Food and Drug Administration approval for PsA, which “may have been due to patients in CHOICE having higher disease activity scores at baseline, compared with TNFi-naive patients” in the pivotal trials.
The safety profile of secukinumab appeared to be no different from what has been reported previously. The researchers said that, throughout the 52-week study, the most common adverse events in patients receiving secukinumab were upper respiratory tract infection in about 13% and diarrhea in about 7%. Most adverse events were mild or moderate, with serious adverse events occurring in 9.6% of patients taking secukinumab 300 mg and in 7.8% of patients taking secukinumab 150 mg over the 52 weeks.
“Overall, the findings from CHOICE were consistent with previous studies and demonstrated that secukinumab provides significant and sustained improvements in signs and symptoms of psoriatic arthritis. Our findings suggest that secukinumab 300 mg is safe and efficacious as a first-line biologic treatment for patients with PsA. Further studies will also help determine the optimal dose of secukinumab for treating overweight patients or those with high disease activity at treatment initiation,” the authors wrote.
The study was funded by Novartis, which manufactures secukinumab. Dr. Nguyen and some coauthors reported serving as a consultant, investigator, and/or speaker for numerous pharmaceutical companies, including Novartis.
FROM THE JOURNAL OF RHEUMATOLOGY
An aspirin a day ... for CRC?
Dear colleagues,
We are all often asked by friends, colleagues, and especially patients how to reduce the risk of getting colorectal cancer. We offer exercise, diet, and smoking cessation as some possible ways to mitigate risk. But what about that wonder drug – the ubiquitous aspirin? The American Gastroenterological Association’s recent clinical practice update suggests that aspirin may be protective in some patients younger than 70 years depending on their cardiovascular and gastrointestinal bleeding risks. If so, should we gastroenterologists be the ones to recommend or even prescribe aspirin? Or are the data just not there yet? We invite two colorectal cancer experts, Dr. Sonia Kupfer and Dr. Jennifer Weiss, to share their perspectives in light of these new recommendations. I invite you to a great debate and look forward to hearing your own thoughts online and by email at [email protected].
Gyanprakash A. Ketwaroo, MD, MSc, is assistant professor of medicine at Baylor College of Medicine, Houston. He is an associate editor for GI & Hepatology News.
Not our lane
By Jennifer Weiss, MD, MS
In 2021, the AGA published a clinical practice update on chemoprevention for colorectal neoplasia that advises clinicians to use low-dose aspirin to reduce colorectal cancer (CRC) incidence and mortality in average-risk individuals who are (1) younger than 70 years with a life expectancy of at least 10 years, (2) have at least a 10% 10-year cardiovascular disease (CVD) risk, and (3) are not at high risk for gastrointestinal bleeding.1 As gastroenterologists, we may see average-risk patients only at the time of their screening or surveillance colonoscopies, and I wonder if we should be taking the lead in prescribing/recommending aspirin for CRC chemoprevention in these patients. To answer this question, I will review three main concerns: (1) issues with the overall strength of the evidence on the effectiveness of aspirin to reduce CRC incidence and mortality, (2) determining an individual’s long-term CVD risk and life expectancy may be outside of a gastroenterologist’s purview, and (3) the potential for serious gastrointestinal bleeding is dynamic and requires continual review.
Studies examining the effects of aspirin on CRC incidence and mortality have limitations and mixed results. Many of the randomized controlled trials have primarily been secondary analyses of studies with primary CVD endpoints. When examined individually, some studies (such as the Women's Health Study at 10 years of follow-up, the Swedish Aspirin Low-Dose Trial, and the UK-TIA Aspirin Trial) show no significant reduction in CRC risk, while some meta-analyses have shown a decrease in CRC incidence and mortality.2 One reason for this discrepancy may be varying lengths of follow-up across studies. In addition, we do not yet know the optimal aspirin dose or duration of therapy. The protective effect of aspirin on CRC incidence and mortality in average-risk individuals is mostly seen after 10-20 years of follow-up. This is relevant to the first part of the AGA clinical practice update recommendation, which refers to individuals with a life expectancy of at least 10 years. The second part of the recommendation includes individuals with a 10-year CVD risk of at least 10%. As gastroenterologists, we may see these patients only two to three times over a 10- to 20-year period, and only for their screening/surveillance colonoscopy. I would argue that we are not in the best position to assess changes in life expectancy and 10-year CVD risk status over time and to determine whether patients should start or continue taking aspirin for CRC chemoprevention.
The United States Preventive Services Task Force is also reexamining its previous recommendations on aspirin for primary prevention of cardiovascular disease. The 2016 guidelines recommended initiation of low-dose aspirin for primary prevention of CVD and CRC in adults aged 50-59 years who have a 10% or greater 10-year CVD risk and at least a 10-year life expectancy (Grade B). The current draft recommendations state that aspirin use for the primary prevention of CVD events in adults aged 40-59 years who have a 10% or greater 10-year CVD risk has a small net benefit (Grade C) and that initiating aspirin for the primary prevention of CVD events in adults aged 60 years and older has no net benefit (Grade D). They also state that, based on longer-term follow-up data from the Women's Health Study and newer trials, the evidence is inadequate that low-dose aspirin use reduces CRC incidence and mortality.3 Because of these moving targets, we may also find ourselves walking back the AGA clinical practice update recommendations in the future.
One main concern for long-term aspirin use is the potential for gastrointestinal bleeding. Participants in more than one of the CVD prevention trials had a significant increase in gastrointestinal bleeding.1,2 While gastrointestinal bleeding falls within our wheelhouse, we are not always privy to a patient’s risk factors for bleeding. For example, patients may receive multiple courses of steroids for arthritis or chronic pulmonary disorders and not take concomitant acid suppression. These risks are dynamic and require continual reassessment as individuals age, new diagnoses are made, and new medications are started or stopped by providers other than their gastroenterologist. If a patient is taking aspirin, regardless of the reason, we need to make sure it is correctly recorded in their medication list, especially if they are obtaining it over the counter. This is one area where we should definitely play a role.
There is a population in which I do recommend aspirin for CRC chemoprevention: individuals with Lynch syndrome. I believe the data for the protective effects of aspirin on CRC incidence are much stronger for individuals with Lynch syndrome than for the average-risk population. The CAPP2 trial was a randomized trial with a two-by-two factorial design in which individuals with Lynch syndrome were randomly assigned to aspirin 600 mg/day or aspirin placebo and, separately, to resistant starch or starch placebo for up to 4 years. The primary endpoint of this trial was development of CRC (unlike the CVD trials referred to earlier in this article). Long-term follow-up of the CAPP2 trial participants found a significantly decreased risk of CRC after 2 years of aspirin use (hazard ratio, 0.56; 95% confidence interval, 0.34-0.91).4 The ongoing CAPP3 trial will answer questions about the effectiveness of lower doses of aspirin (100 mg and 300 mg).
The recommendation for aspirin use for CRC chemoprevention in average-risk individuals depends on multiple factors (life expectancy, determination of CVD risk, and dynamic assessment of gastrointestinal bleeding risk) that are outside the purview of a gastroenterologist who sees the patient only at a screening or surveillance colonoscopy. This is not in our lane. What is in our lane, however, is the recommendation for aspirin use for CRC chemoprevention in select high-risk populations such as individuals with Lynch syndrome.
Dr. Weiss is associate professor in the division of gastroenterology and hepatology and director of the University of Wisconsin Gastroenterology Genetics Clinic at University of Wisconsin School of Medicine and Public Health. She reports receiving research support from Exact Sciences as a site-PI of a multisite trial.
References
1. Liang PS et al. Clin Gastroenterol Hepatol. 2021 Jul;19(7):1327-36. doi: 10.1016/j.cgh.2021.02.014
2. Katona BW and Weiss JM. Gastroenterology. 2020 Jan;158(2):368-88. doi: 10.1053/j.gastro.2019.06.047
3. United States Preventive Services Task Force. “Aspirin Use to Prevent Cardiovascular Disease: Preventive Medication.” Accessed April 5, 2022.
4. Burn J et al. Lancet. 2020 Jun 13;395(10240):1855-63. doi: 10.1016/s0140-6736(20)30366-4
Yes, but individualize it
By Sonia S. Kupfer, MD
Colorectal cancer (CRC) is one of the top three causes of cancer and cancer death worldwide, with an alarming rise in younger adults. Preventive strategies, including screening, chemoprevention, and risk factor modification, are important to reduce the overall CRC burden. Aspirin, which is cheap and readily available, is supported for CRC chemoprevention by multiple lines of strong evidence. Recent AGA practice guidelines recommend low-dose aspirin chemoprevention in individuals at average CRC risk who are younger than 70 years with a life expectancy of at least 10 years, have a 10-year cardiovascular disease risk of at least 10%, and are not at high risk for bleeding.1 This advice diverges from the most recent U.S. Preventive Services Task Force–proposed guidelines,2 which reverse the 2016 USPSTF recommendation for aspirin CRC chemoprevention (and primary prevention of cardiovascular disease) based on uncertainty of net benefit over harms, especially in older individuals. In light of conflicting advice, how should we counsel our patients about aspirin use for CRC chemoprevention? In my opinion, we shouldn’t “throw the baby out with the bathwater” and should follow the AGA practice guideline to individualize aspirin chemoprevention based on balancing known benefits and risks.
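The AGA eligibility criteria just described amount to a simple checklist. As an illustrative sketch only (the function name and input encoding are my own, and this is in no way a clinical decision tool):

```python
def meets_aga_criteria(age_years: int,
                       life_expectancy_years: float,
                       ten_year_cvd_risk: float,
                       high_bleeding_risk: bool) -> bool:
    """Illustrative encoding of the AGA practice-guideline criteria for
    low-dose aspirin CRC chemoprevention in average-risk adults:
    younger than 70, life expectancy of at least 10 years, 10-year CVD
    risk of at least 10%, and not at high risk for bleeding."""
    return (
        age_years < 70
        and life_expectancy_years >= 10
        and ten_year_cvd_risk >= 0.10   # e.g., 0.12 means 12%
        and not high_bleeding_risk
    )

# A 65-year-old with 15-year life expectancy, 12% CVD risk, no bleeding risk
print(meets_aga_criteria(65, 15, 0.12, False))  # True
```

In practice, as the essay goes on to note, the CVD-risk input would come from an electronic health record calculator, and the bleeding-risk input must be reassessed over time rather than treated as a fixed flag.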
As reviewed in the AGA practice guidelines,1 many, but not all, randomized controlled and observational trials have shown efficacy of aspirin for reduction of CRC mortality, incidence, and adenoma recurrence. Analysis of cardiovascular prevention trials including over 14,000 mostly middle-aged people showed a 33% reduction in 20-year cumulative CRC mortality. While a pooled estimate of four trials did not show reduced incidence 0-12 years after aspirin initiation, as noted in the practice guideline, three of these trials did show a 40% reduction between 10 and 19 years, a finding in line with results from a 20-year pooled analysis showing a 24% reduction in CRC incidence with aspirin. Among Lynch syndrome patients, exposure to high-dose aspirin also significantly reduced CRC incidence in a randomized controlled trial with up to 20 years of follow-up,3 highlighting that chemoprotective effects take years to manifest and that long-term follow-up in cancer chemoprevention trials is needed. Studies also have shown reduced adenoma incidence or recurrence with aspirin, ranging from 17% to 51% depending on the study population, dose, and adherence. In addition to clinical trials, experimental data have demonstrated protective cellular effects of aspirin on colonic carcinogenesis, though the exact mechanisms of this protective effect remain incompletely understood and are active areas of research, including in my lab. Taken together, there is a large body of evidence supporting a protective effect of aspirin on CRC mortality and colorectal neoplasia incidence, most evident after 1-2 decades of follow-up.
Not all trials have shown that aspirin is chemoprotective; in fact, the ASPREE trial,4 which randomized over 19,000 healthy adults older than 70 years to 100 mg aspirin or placebo, showed increased cancer mortality when the trial was stopped prematurely after 5 years. Individuals who started aspirin before age 70 appear to have continued chemoprotection as they age,5 suggesting that aspirin, if it is tolerated, might not necessarily need to be stopped at a certain age. Notably, the ASPREE trial did not show increased CRC incidence, which begs the question of the biological mechanism underlying the increased cancer mortality in trial participants. Beyond the findings of ASPREE, aspirin use is associated with risks of intracranial and gastrointestinal bleeding, with estimated odds ratios of 1.29 and 1.59, respectively. The AGA practice guideline acknowledges these risks, especially in older adults, and recommends initiating aspirin in individuals under the age of 70 who are expected to live another 10 years and are without high bleeding risk, in order to reap the benefits and minimize the risks.
Risk stratification hinges on acceptance and feasibility. Three-quarters of providers, when surveyed, reported aspirin to be a suitable preventive treatment, with more favorable views expressed by gastroenterologists and genetics providers, compared with colorectal surgeons.6 In Lynch syndrome, rates at which providers in real-world practices recommended aspirin chemoprevention ranged from 35% to 67%; my own practice strives to discuss aspirin use with every Lynch syndrome patient at every clinic and endoscopy visit. Real-world data on uptake of and adherence to aspirin CRC chemoprevention are sparse. In clinical trials, uptake of aspirin for cancer chemoprevention ranged from 41% to 80%, with good adherence, although these findings likely are not generalizable to routine practice. Current blood pressure and cholesterol guidelines for primary prevention include calculation of 10-year cardiovascular risk using automatic calculators in the electronic health record; thus, it should be relatively straightforward to apply this approach to aspirin CRC chemoprevention as well. While calculation of bleeding risk is less well established, there are publicly available calculators that combine cardiovascular and bleeding risk for primary aspirin prevention, and such decision aids should be explored for aspirin CRC chemoprevention. However, given the recent recommendation reversal by the USPSTF, I am concerned that recommendation and uptake of aspirin CRC chemoprevention will decline substantially.
In order to reduce the CRC burden, we should employ everything in our armamentarium, including aspirin chemoprevention. Individualized risk assessment for aspirin chemoprevention, as advised by the AGA practice guideline, will enable the right people to benefit while minimizing risks. Future studies should strengthen the evidence base for aspirin CRC chemoprevention and refine risk stratification, including for younger individuals, given the rise in early-onset CRC. The optimal approach to aspirin chemoprevention was best summed up by the foremost expert in the field, Dr. Andy Chan, to the New York Times:7 “We need to think about personalizing who we give aspirin to, and move away from a one-size-fits-all solution.”
Dr. Kupfer is associate professor of medicine, director of the Gastrointestinal Cancer Risk and Prevention Clinic, and codirector of the Comprehensive Cancer Risk and Prevention Clinic at the University of Chicago. She reports no relevant conflicts of interest.
References
1. Liang PS et al. Clin Gastroenterol Hepatol. 2021 Jul;19(7):1327-36.
2. United States Preventive Services Task Force. “Aspirin Use to Prevent Cardiovascular Disease: Preventive Medication.” Accessed April 10, 2022.
3. Burn J et al. Lancet. 2020 Jun 13;395(10240):1855-63.
4. McNeil JJ et al. N Engl J Med. 2018 Oct 18;379(16):1519-28.
5. Guo CG et al. JAMA Oncol. 2021 Mar 1;7(3):428-35.
6. Lloyd KE et al. Prev Med. 2022 Jan;154:106872.
7. Rabin RC. “Aspirin Use to Prevent 1st Heart Attack or Stroke Should Be Curtailed, U.S. Panel Says.” New York Times. Oct. 13, 2021. Accessed April 10, 2022.
3. United States Preventive Services Task Force. “Aspirin Use to Prevent Cardiovascular Disease: Preventive Medication.” Accessed April 5, 2022.
4. Burn J et al. Lancet. 2020 Jun13;395(10240):1855-63. doi: 10.1016/s0140-6736(20)30366-4
Yes, but individualize it
By Sonia S. Kupfer, MD
Colorectal cancer (CRC) is one of the top three causes of cancer and cancer death worldwide with an alarming rise in younger adults. Preventive strategies including screening, chemoprevention, and risk factor modification are important to reduce overall CRC burden. Aspirin, which is cheap and readily available, is supported for CRC chemoprevention by multiple lines of strong evidence. Recent AGA practice guidelines recommend low-dose aspirin chemoprevention in individuals at average CRC risk who are younger than 70 years with a life expectancy of at least 10 years, have a 10-year cardiovascular disease risk of at least 10% and are not at high risk for bleeding.1 This advice diverges from the most recent U.S. Preventive Services Task Force–proposed guidelines2 that reverse the 2016 USPSTF recommendation for aspirin CRC chemoprevention (and primary prevention of cardiovascular disease) based on uncertainty of net benefit over harms, especially in older individuals. In light of conflicting advice, how should we counsel our patients about aspirin use for CRC chemoprevention? In my opinion, we shouldn’t “throw the baby out with the bathwater” and should follow the AGA practice guideline to individualize aspirin chemoprevention based on balancing known benefits and risks.
As reviewed in the AGA practice guidelines1, many, but not all, randomized controlled and observational trials have shown efficacy of aspirin for reduction of CRC mortality, incidence, and adenoma recurrence. Analysis of cardiovascular prevention trials including over 14,000 mostly middle-aged people showed 33% reduction in 20-year cumulative CRC mortality. While a pooled estimate of four trials did not show reduced incidence 0-12 years after aspirin initiation, as noted in the practice guideline, three of these trials did show a 40% reduction between 10 and 19 years, a finding that is in line with results from a 20-year pooled analysis showing 24% reduction in CRC incidence by aspirin. Among Lynch syndrome patients, exposure to high-dose aspirin also significantly reduced CRC incidence in a randomized controlled trial with up to 20 years of follow-up3 highlighting that chemoprotective effects take years to manifest, and long-term follow-up in cancer chemoprevention trials is needed. Studies also have shown reduced adenoma incidence or recurrence by aspirin ranging from 17% to 51% depending on the study population, dose, and adherence. In addition to clinical trials, experimental data have demonstrated protective cellular effects of aspirin on colonic carcinogenesis, though exact mechanisms of this protective effect remain incompletely understood and are active areas of research, including in my lab. Taken together, there is a large body of evidence supporting a protective effect of aspirin on CRC mortality and colorectal neoplasia incidence most evident after 1-2 decades of follow-up.
Not all trials have shown that aspirin is chemoprotective, and, in fact, the ASPREE trial,4 that randomized over 19,000 healthy adults over the age of 70 to 100 mg aspirin or placebo, showed increased cancer mortality when the trial was stopped prematurely after 5 years. Individuals who started aspirin under age 70 appear to have continued chemoprotection as they age5 suggesting that aspirin, if it is tolerated, might not necessarily need to be stopped at a certain age. Notably, the ASPREE trial did not show increased CRC incidence, which begs the question of the biological mechanism underlying increased cancer mortality in trial participants. Beyond the findings of ASPREE, aspirin use is associated with risks of intracranial and gastrointestinal bleeding with estimated odds ratios of 1.29 and 1.59, respectively. The AGA practice guideline acknowledges these risks especially in older adults and recommends initiation of aspirin in individuals under the age of 70 who are expected to live another 10 years without bleeding risks in order to reap the benefits and minimize the risks.
Risk stratification hinges on acceptance and feasibility. Three-quarters of providers, when surveyed, reported aspirin to be a suitable preventive treatment with more favorable views expressed by gastroenterologists and genetics providers, compared with colorectal surgeons.6 In Lynch syndrome, rates of aspirin chemoprevention recommendation by providers in real-world practices ranged from 35% to 67%; my own practice strives to discuss aspirin use with every Lynch patient at every clinic and endoscopy visit. Real-world data for uptake and adherence of aspirin CRC chemoprevention are sparse. Uptake and adherence of aspirin for cancer chemoprevention in clinical trials ranged from 41% to 80% with good adherence, although these findings likely are not generalizable to routine practice. Current blood pressure and cholesterol guidelines for primary prevention include calculation of 10-year cardiovascular risk using automatic calculators in the electronic health record; thus, it should be relatively straightforward to apply this approach for aspirin CRC chemoprevention as well. While calculation of bleeding risk is less well established, there are publicly available calculators that combine cardiovascular and bleeding risk for primary aspirin prevention and such decision aids should be explored for aspirin CRC chemoprevention. However, given the recent recommendation reversal by the USPSTF, I am concerned that recommendation and uptake of aspirin CRC chemoprevention will decline substantially.
In order to reduce CRC burden, we should employ everything in our armamentarium including aspirin chemoprevention. Individualized risk assessment for aspirin chemoprevention, as advised by the AGA practice guideline, will enable the right people to benefit while minimizing risks. Future studies should strengthen the evidence base for aspirin CRC chemoprevention and refine risk stratification, including for younger individuals given the rise in early-onset CRC. The optimal approach to aspirin chemoprevention was best summed up by the foremost expert in the field, Dr. Andy Chan, to the New York Times:7 “we need to think about personalizing who we give aspirin to, and move away from a one-size-fits-all solution”.
Dr. Kupfer is associate professor of medicine, director of the Gastrointestinal Cancer Risk and Prevention Clinic, and codirector of the Comprehensive Cancer Risk and Prevention Clinic at the University of Chicago. She reports no relevant conflicts of interest.
References
1. Liang PS et al. Clin Gastroenterol Hepatol. 2021 Jul;19(7):1327-36.
2. United States Preventive Services Task Force. “Aspirin Use to Prevent Cardiovascular Disease: Preventive Medication.” Accessed April 10, 2022.
3. Burn J et al. Lancet. 2020 Jun 13;395(10240):1855-63.
4. McNeil JJ et al. N Engl J Med. 2018 Oct 18;379(16):1519-28.
5. Guo CG et al. JAMA Oncol. 2021 Mar 1;7(3):428-35.
6. Lloyd KE et al. Prev Med. 2022 Jan;154:106872.
7. Rabin RC. “Aspirin Use to Prevent 1st Heart Attack or Stroke Should Be Curtailed, U.S. Panel Says.” New York Times. Oct. 13, 2021. Accessed April 10, 2022.
Dear colleagues,
We are all often asked by friends, colleagues, and especially patients how to reduce the risk of getting colorectal cancer. We offer exercise, diet, and smoking cessation as some possible ways to mitigate risk. But what about that wonder drug – the ubiquitous aspirin? The American Gastroenterological Association’s recent clinical practice update suggests that aspirin may be protective in some patients younger than 70 years depending on their cardiovascular and gastrointestinal bleeding risks. If so, should we gastroenterologists be the ones to recommend or even prescribe aspirin? Or are the data just not there yet? We invite two colorectal cancer experts, Dr. Sonia Kupfer and Dr. Jennifer Weiss, to share their perspectives in light of these new recommendations. I invite you to a great debate and look forward to hearing your own thoughts online and by email at [email protected].
Gyanprakash A. Ketwaroo, MD, MSc, is assistant professor of medicine at Baylor College of Medicine, Houston. He is an associate editor for GI & Hepatology News.
Not our lane
By Jennifer Weiss, MD, MS
In 2021, the AGA published a clinical practice update on chemoprevention for colorectal neoplasia that advises clinicians to use low-dose aspirin to reduce colorectal cancer (CRC) incidence and mortality in average-risk individuals who (1) are younger than 70 years with a life expectancy of at least 10 years, (2) have at least a 10% 10-year cardiovascular disease (CVD) risk, and (3) are not at high risk for gastrointestinal bleeding.1 As gastroenterologists, we may see average-risk patients only at the time of their screening or surveillance colonoscopies, and I question whether we should be taking the lead in prescribing or recommending aspirin for CRC chemoprevention in these patients. To answer this question, I will review three main concerns: (1) limitations in the overall strength of the evidence that aspirin reduces CRC incidence and mortality, (2) the difficulty of determining an individual’s long-term CVD risk and life expectancy, which may be outside a gastroenterologist’s purview, and (3) the dynamic nature of serious gastrointestinal bleeding risk, which requires continual review.
Studies examining the effects of aspirin on CRC incidence and mortality have limitations and mixed results. Many of the randomized controlled trials have been secondary analyses of studies with primary CVD endpoints. When examined individually, some studies, such as the Women’s Health Study (at 10 years of follow-up), the Swedish Aspirin Low-Dose Trial, and the UK-TIA Aspirin Trial, show no significant reduction in CRC risk, while some meta-analyses have shown a decrease in CRC incidence and mortality.2 One reason for this discrepancy may be varying lengths of follow-up across studies. In addition, we do not yet know the optimal aspirin dose or duration of therapy. The protective effect of aspirin on CRC incidence and mortality in average-risk individuals is mostly seen after 10-20 years of follow-up. This is relevant to the first part of the AGA clinical practice update recommendation, which refers to individuals with a life expectancy of at least 10 years. The second part of the recommendation includes individuals with a 10-year CVD risk of at least 10%. As gastroenterologists, we may see these patients only two to three times over a 10- to 20-year period, and only for their screening or surveillance colonoscopies. I would argue that we are not in the best position to track changes in life expectancy and 10-year CVD risk status over time and to determine whether patients should start or continue taking aspirin for CRC chemoprevention.
The United States Preventive Services Task Force is also reexamining its previous recommendations for aspirin for primary prevention of cardiovascular disease. The 2016 guidelines recommended initiation of low-dose aspirin for primary prevention of CVD and CRC in adults aged 50-59 years who have a 10% or greater 10-year CVD risk and at least a 10-year life expectancy (Grade B). The current draft recommendations state that aspirin use for the primary prevention of CVD events in adults aged 40-59 years who have a 10% or greater 10-year CVD risk has a small net benefit (Grade C) and that initiating aspirin for the primary prevention of CVD events in adults aged 60 years and older has no net benefit (Grade D). They also state that, based on longer-term follow-up data from the Women’s Health Study and newer trials, the evidence is inadequate that low-dose aspirin use reduces CRC incidence and mortality.3 Given these moving targets, we may also find ourselves walking back the AGA clinical practice update recommendations in the future.
One main concern for long-term aspirin use is the potential for gastrointestinal bleeding. Participants in more than one of the CVD prevention trials had a significant increase in gastrointestinal bleeding.1,2 While gastrointestinal bleeding falls within our wheelhouse, we are not always privy to a patient’s risk factors for bleeding. For example, patients may receive multiple courses of steroids for arthritis or chronic pulmonary disorders and not take concomitant acid suppression. These risks are dynamic and require continual reassessment as individuals age, new diagnoses are made, and new medications are started or stopped by providers other than their gastroenterologist. If a patient is taking aspirin, regardless of the reason, we need to make sure it is correctly recorded in their medication list, especially if they are obtaining it over the counter. This is one area where we should definitely play a role.
There is one population in which I do recommend aspirin for CRC chemoprevention: individuals with Lynch syndrome. I believe the data for the protective effect of aspirin on CRC incidence are much stronger for individuals with Lynch syndrome than for the average-risk population. The CAPP2 trial was a randomized trial with a two-by-two factorial design in which individuals with Lynch syndrome were randomly assigned to aspirin 600 mg/day or placebo and to resistant starch or placebo for up to 4 years. The primary endpoint of this trial was development of CRC (unlike the CVD trials referred to earlier in this article). Long-term follow-up of the CAPP2 participants found a significantly decreased risk of CRC after 2 years of aspirin use (hazard ratio, 0.56; 95% confidence interval, 0.34-0.91).4 The ongoing CAPP3 trial will answer questions about the effectiveness of lower doses of aspirin (100 mg and 300 mg).
The recommendation for aspirin use for CRC chemoprevention in average-risk individuals depends on multiple factors (life expectancy, determination of CVD risk, and dynamic assessment of gastrointestinal bleeding risk) that are outside the purview of a gastroenterologist who sees the patient only at a screening or surveillance colonoscopy. This is not in our lane. What is in our lane, however, is the recommendation for aspirin use for CRC chemoprevention in select high-risk populations such as individuals with Lynch syndrome.
Dr. Weiss is associate professor in the division of gastroenterology and hepatology and director of the University of Wisconsin Gastroenterology Genetics Clinic at University of Wisconsin School of Medicine and Public Health. She reports receiving research support from Exact Sciences as a site-PI of a multisite trial.
References
1. Liang PS et al. Clin Gastroenterol Hepatol. 2021 Jul;19(7):1327-36. doi: 10.1016/j.cgh.2021.02.014
2. Katona BW and Weiss JM. Gastroenterology. 2020 Jan;158(2):368-88. doi: 10.1053/j.gastro.2019.06.047
3. United States Preventive Services Task Force. “Aspirin Use to Prevent Cardiovascular Disease: Preventive Medication.” Accessed April 5, 2022.
4. Burn J et al. Lancet. 2020 Jun13;395(10240):1855-63. doi: 10.1016/s0140-6736(20)30366-4
Yes, but individualize it
By Sonia S. Kupfer, MD
Colorectal cancer (CRC) is one of the top three causes of cancer and cancer death worldwide, with an alarming rise in younger adults. Preventive strategies including screening, chemoprevention, and risk factor modification are important to reduce overall CRC burden. Aspirin, which is cheap and readily available, is supported for CRC chemoprevention by multiple lines of strong evidence. Recent AGA practice guidelines recommend low-dose aspirin chemoprevention in individuals at average CRC risk who are younger than 70 years with a life expectancy of at least 10 years, have a 10-year cardiovascular disease risk of at least 10%, and are not at high risk for bleeding.1 This advice diverges from the most recent U.S. Preventive Services Task Force draft guidelines,2 which reverse the 2016 USPSTF recommendation for aspirin CRC chemoprevention (and primary prevention of cardiovascular disease) based on uncertainty of net benefit over harms, especially in older individuals. In light of conflicting advice, how should we counsel our patients about aspirin use for CRC chemoprevention? In my opinion, we shouldn’t “throw the baby out with the bathwater”; we should follow the AGA practice guideline and individualize aspirin chemoprevention by balancing known benefits and risks.
As reviewed in the AGA practice guidelines,1 many, but not all, randomized controlled trials and observational studies have shown efficacy of aspirin for reduction of CRC mortality, incidence, and adenoma recurrence. An analysis of cardiovascular prevention trials including over 14,000 mostly middle-aged people showed a 33% reduction in 20-year cumulative CRC mortality. While a pooled estimate of four trials did not show reduced incidence 0-12 years after aspirin initiation, as noted in the practice guideline, three of these trials did show a 40% reduction between 10 and 19 years, a finding in line with results from a 20-year pooled analysis showing a 24% reduction in CRC incidence with aspirin. Among patients with Lynch syndrome, exposure to high-dose aspirin also significantly reduced CRC incidence in a randomized controlled trial with up to 20 years of follow-up,3 highlighting that chemoprotective effects take years to manifest and that long-term follow-up in cancer chemoprevention trials is needed. Studies also have shown reductions in adenoma incidence or recurrence with aspirin ranging from 17% to 51%, depending on the study population, dose, and adherence. In addition to clinical trials, experimental data have demonstrated protective cellular effects of aspirin on colonic carcinogenesis, though the exact mechanisms of this protective effect remain incompletely understood and are active areas of research, including in my lab. Taken together, a large body of evidence supports a protective effect of aspirin on CRC mortality and colorectal neoplasia incidence that is most evident after 1-2 decades of follow-up.
Not all trials have shown that aspirin is chemoprotective. In fact, the ASPREE trial,4 which randomized over 19,000 healthy adults over the age of 70 to 100 mg aspirin or placebo, showed increased cancer mortality when the trial was stopped prematurely after 5 years. Individuals who started aspirin before age 70 appear to have continued chemoprotection as they age,5 suggesting that aspirin, if it is tolerated, might not necessarily need to be stopped at a certain age. Notably, the ASPREE trial did not show increased CRC incidence, which raises the question of the biological mechanism underlying the increased cancer mortality in trial participants. Beyond the findings of ASPREE, aspirin use is associated with risks of intracranial and gastrointestinal bleeding, with estimated odds ratios of 1.29 and 1.59, respectively. The AGA practice guideline acknowledges these risks, especially in older adults, and recommends initiating aspirin in individuals under the age of 70 who are expected to live another 10 years and who are not at high bleeding risk, in order to reap the benefits and minimize the risks.
Risk stratification hinges on acceptance and feasibility. Three-quarters of providers, when surveyed, reported aspirin to be a suitable preventive treatment, with more favorable views expressed by gastroenterologists and genetics providers than by colorectal surgeons.6 In Lynch syndrome, rates of aspirin chemoprevention recommendation by providers in real-world practices ranged from 35% to 67%; my own practice strives to discuss aspirin use with every Lynch syndrome patient at every clinic and endoscopy visit. Real-world data on uptake of and adherence to aspirin for CRC chemoprevention are sparse. In clinical trials, uptake of aspirin for cancer chemoprevention ranged from 41% to 80%, with good adherence, although these findings likely are not generalizable to routine practice. Current blood pressure and cholesterol guidelines for primary prevention include calculation of 10-year cardiovascular risk using automatic calculators in the electronic health record; it should be relatively straightforward to apply this approach to aspirin CRC chemoprevention as well. While calculation of bleeding risk is less well established, publicly available calculators combine cardiovascular and bleeding risk for primary aspirin prevention, and such decision aids should be explored for aspirin CRC chemoprevention. However, given the recent recommendation reversal by the USPSTF, I am concerned that recommendation and uptake of aspirin CRC chemoprevention will decline substantially.
To reduce CRC burden, we should employ everything in our armamentarium, including aspirin chemoprevention. Individualized risk assessment for aspirin chemoprevention, as advised by the AGA practice guideline, will enable the right people to benefit while minimizing risks. Future studies should strengthen the evidence base for aspirin CRC chemoprevention and refine risk stratification, including for younger individuals, given the rise in early-onset CRC. The optimal approach to aspirin chemoprevention was best summed up by the foremost expert in the field, Dr. Andy Chan, in the New York Times:7 “we need to think about personalizing who we give aspirin to, and move away from a one-size-fits-all solution.”
Dr. Kupfer is associate professor of medicine, director of the Gastrointestinal Cancer Risk and Prevention Clinic, and codirector of the Comprehensive Cancer Risk and Prevention Clinic at the University of Chicago. She reports no relevant conflicts of interest.
References
1. Liang PS et al. Clin Gastroenterol Hepatol. 2021 Jul;19(7):1327-36.
2. United States Preventive Services Task Force. “Aspirin Use to Prevent Cardiovascular Disease: Preventive Medication.” Accessed April 10, 2022.
3. Burn J et al. Lancet. 2020 Jun 13;395(10240):1855-63.
4. McNeil JJ et al. N Engl J Med. 2018 Oct 18;379(16):1519-28.
5. Guo CG et al. JAMA Oncol. 2021 Mar 1;7(3):428-35.
6. Lloyd KE et al. Prev Med. 2022 Jan;154:106872.
7. Rabin RC. “Aspirin Use to Prevent 1st Heart Attack or Stroke Should Be Curtailed, U.S. Panel Says.” New York Times. Oct. 13, 2021. Accessed April 10, 2022.
OARSI sets sights on classifying early-stage knee OA
An expert task force convened by the Osteoarthritis Research Society International (OARSI) has started the process of consolidating classification criteria for early-stage knee osteoarthritis (OA).
“Early-stage knee OA classification criteria, we believe, are critically required,” Gillian Hawker, MD, MSc, said at the OARSI 2022 World Congress.
Dr. Hawker, who is the chair of the Task Force Steering Committee, noted that classification criteria are needed for several reasons, such as “to advance OA therapeutics and [the] earlier identification of people with knee OA who can benefit from existing treatments.”
Moreover, they are needed so that people with knee OA can “be poised and ready to receive available therapies once we develop them,” said Dr. Hawker, professor of medicine at the University of Toronto and a senior clinician-scientist in the Women’s College Research Institute at Women’s College Hospital in Toronto.
Reasoning for looking at early OA
“Osteoarthritis is a very serious disease with a growing population burden,” Dr. Hawker reminded delegates at the congress. Yet despite “amazing advances” in the understanding of the pathophysiology of disease and several potential druggable targets being identified, “we still have no safe and effective interventions to prevent or slow the progression of the disease.”
“Why have all the DMOADs [disease-modifying osteoarthritis drugs] failed?” she questioned.
One hypothesis is that it’s down to the heterogeneity of OA. “We’ve been plugging people with different kinds or phenotypes of OA into the same clinical trials, and we need to better match OA phenotypes with appropriate treatment,” Dr. Hawker said.
Also, “structural changes on imaging, and the symptoms that characterize the disease of function, pain, stiffness, etc., are not super well correlated. It may be that any attempts at structure modification alone won’t adequately improve clinical symptoms.”
Perhaps most importantly, however, “we’re treating people way too late in the course of their disease,” Dr. Hawker said. “When we keep putting people with Kellgren and Lawrence [grade] 2 or 3 into clinical trials, it may be that there’s nothing we’re going to be able to do that’s really going to make a difference.”
Why just knee OA?
The reason for looking at early-stage OA specifically is that current knee OA classification criteria were developed nearly 40 years ago and were looking at a later stage of disease, mainly differentiating OA from other types of inflammatory arthritis, notably rheumatoid arthritis (RA).
The aim of the OARSI Early OA Task Force is thus to develop, refine, and validate classification criteria that will not only help identify people with early-stage OA who can then be entered into clinical trials of new therapies but also define a population that can be used in preclinical and prognostic work.
“The task force decided to start with early-stage knee OA due to the highest burden and the focus of most clinical trials,” steering committee member Martin Englund, MD, PhD, observed during the discussion.
“When we see how that goes, we may consider early hip OA,” said Dr. Englund, of Lund University and Skåne University Hospital in Sweden.
Dr. Hawker added that the task force felt that lumping hip and knee OA together would complicate matters because they thought that the classification criteria will likely look very different from each other.
“But the good news is we think that if we can identify early knee OA, we will likely also identify people with at least hand OA,” she said.
Building on previous work
The OARSI Task Force initiative will build on the early OA work by Stefan Lohmander, MD, PhD, and Frank Luyten, MD, PhD, who were part of a consensus panel that proposed draft classification criteria a few years ago. Those criteria, derived from a consensus workshop that had included basic scientists, physician-scientists, rheumatologists, orthopedic surgeons, and physiotherapists, identified three main areas of importance: patient symptoms such as pain and impaired function, the presence of crepitus or tender joints on clinical examination, and a low Kellgren and Lawrence grade (0 or 1).
Dr. Lohmander remains heavily involved, heading up the advisory committee, with many other ad hoc committees likely to be set up during the project.
“We had over 70 people in the OARSI community volunteering to participate in some way, shape, or form,” Dr. Hawker said. All will be needed, she said, as there will be a lot of work to do. The starting point is people with undifferentiated knee symptoms, identifying the factors that increase or decrease the likelihood of having early-stage OA. Once a population has been found, the outcomes for prevention need to be defined.
A systematic search of the available literature has started and full-text review of more than 200 papers is in progress. The challenge ahead is to define what the ‘anchor question’ will be. That is, what question should be asked in order to determine whether a patient fulfills the criteria?
Dr. Hawker noted that when the American College of Rheumatology developed the RA classification criteria, the anchor question had been around whether methotrexate should be prescribed.
“We don’t have a ‘methotrexate’ in osteoarthritis, and it’s pretty low risk to start weight management or physical activity or even prescribe a topical anti-inflammatory,” she said. “So, we’re still trying to work out exactly how we create our anchor.”
It’s likely that the anchor question will be based on expert opinion rather than hard data. Perhaps it will focus on the chances that a patient’s symptoms will become persistent with loss of function or that they will develop established OA. It could perhaps be around the initiation of a novel DMOAD, if one proved effective enough to be used.
“We have many, many, many, questions!” Dr. Hawker said. One of the important ones is deciding what exactly should be prevented. Symptoms? Structural damage?
“I think a combination of symptoms and loss of function are probably what we want to prevent. But again, we’re going to have to define that very clearly. This is going to take us quite a bit of time.”
It’s likely to be a two-stage process: “First we define what is early stage OA, and then we identify those who are at the highest risk of rapid progression so that we can target those individuals for clinical trials.”
Dr. Hawker and Dr. Englund had no conflicts of interest to disclose.
An expert task force convened by the Osteoarthritis Research Society International (OARSI) has started the process of consolidating classification criteria for early-stage knee osteoarthritis (OA).
“Early-stage knee OA classification criteria, we believe are critically required,” Gillian Hawker, MD, MSc, said at the OARSI 2022 World Congress.
Dr. Hawker, who is the chair of the Task Force Steering Committee, noted that classification criteria are needed for several reasons, such as “to advance OA therapeutics and [the] earlier identification of people with knee OA who can benefit from existing treatments.”
Moreover, they are needed so that people with knee OA can “be poised and ready to receive available therapies once we develop them,” said Dr. Hawker, professor of medicine at the University of Toronto and a senior clinician-scientist in the Women’s College Research Institute at Women’s College Hospital in Toronto.
Reasoning for looking at early OA
“Osteoarthritis is a very serious disease with a growing population burden,” Dr. Hawker reminded delegates at the congress. Yet despite “amazing advances” in the understanding of the pathophysiology of disease and several potential druggable targets being identified, “we still have no safe and effective interventions to prevent or slow the progression of the disease.”
“Why have all the DMOADs [disease-modifying osteoarthritis drugs] failed?” she questioned.
One hypothesis is that it’s down to the heterogeneity of OA. “We’ve been plugging people with different kinds or phenotypes of OA into the same clinical trials, and we need to better match OA phenotypes with appropriate treatment,” Dr. Hawker said.
Also, “structural changes on imaging, and the symptoms that characterize the disease of function, pain, stiffness, etc., are not super well correlated. It may be that any attempts at structure modification alone won’t adequately improve clinical symptoms.”
Perhaps most importantly, however, “we’re treating people way too late in the course of their disease,” Dr. Hawker said. “When we keep putting people with Kellgren and Lawrence [grade] 2 or 3 into clinical trials, it may be that there’s nothing that we’re going to be able to do that’s really going to make a difference.”
Why just knee OA?
The reason for looking at early-stage OA specifically is that current knee OA classification criteria were developed nearly 40 years ago and were looking at a later stage of disease, mainly differentiating OA from other types of inflammatory arthritis, notably rheumatoid arthritis (RA).
The aim of the OARSI Early OA Task Force is thus to develop, refine, and validate classification criteria that will not only help identify people with early-stage OA who can then be entered into clinical trials of new therapies but also define a population that can be used in preclinical and prognostic work.
“The task force decided to start with early-stage knee OA due to the highest burden and the focus of most clinical trials,” steering committee member Martin Englund, MD, PhD, observed during the discussion.
“When we see how that goes, we may consider early hip OA,” said Dr. Englund, of Lund University and Skåne University Hospital in Sweden.
Dr. Hawker added that the task force felt that lumping hip and knee OA together would complicate matters because they thought that the classification criteria will likely look very different from each other.
“But the good news is we think that if we can identify early knee OA, we will likely also identify people with at least hand OA,” she said.
Building on previous work
The OARSI Task Force initiative will build on the early OA work by Stefan Lohmander, MD, PhD, and Frank Luyten, MD, PhD, who were part of a consensus panel that proposed draft classification criteria a few years ago. Those criteria, derived from a consensus workshop that had included basic scientists, physician-scientists, rheumatologists, orthopedic surgeons, and physiotherapists, identified three main areas of importance: patient symptoms such as pain and function, the presence of crepitus or tender joints on clinical examination, and a low Kellgren and Lawrence grade (0 or 1).
Dr. Lohmander remains heavily involved, heading up the advisory committee, with many other ad hoc committees likely to be set up during the project.
“We had over 70 people in the OARSI community volunteering to participate in some way, shape, or form,” Dr. Hawker said. All will be needed, she said, as there will be a lot of work to do. The starting point is people with undifferentiated knee symptoms, identifying the factors that increase or decrease the likelihood of having early-stage OA. Once a population has been found, the outcomes for prevention need to be defined.
A systematic search of the available literature has started, and a full-text review of more than 200 papers is in progress. The challenge ahead is to define what the “anchor question” will be. That is, what question should be asked to determine whether a patient fulfills the criteria?
Dr. Hawker noted that when the American College of Rheumatology developed the RA classification criteria, the anchor question had been around whether methotrexate should be prescribed.
“We don’t have a ‘methotrexate’ in osteoarthritis, and it’s pretty low risk to start weight management or physical activity or even prescribe a topical anti-inflammatory,” she said. “So, we’re still trying to work out exactly how we create our anchor.”
It’s likely that the anchor question will be based on expert opinion rather than hard data. Perhaps it will focus on the chances that a patient’s symptoms will become persistent with loss of function or that they will develop established OA. It could perhaps be around the initiation of a novel DMOAD, if one proved effective enough to be used.
“We have many, many, many questions!” Dr. Hawker said. One of the important ones is deciding what exactly should be prevented. Symptoms? Structural damage?
“I think a combination of symptoms and loss of function are probably what we want to prevent. But again, we’re going to have to define that very clearly. This is going to take us quite a bit of time.”
It’s likely to be a two-stage process: “First we define what is early stage OA, and then we identify those who are at the highest risk of rapid progression so that we can target those individuals for clinical trials.”
Dr. Hawker and Dr. Englund had no conflicts of interest to disclose.
FROM OARSI 2022