
A paleolithic raw bar, and the human brush with extinction


This essay is adapted from the newly released book, “A History of the Human Brain: From the Sea Sponge to CRISPR, How Our Brain Evolved.”


“He was a bold man that first ate an oyster.” – Jonathan Swift

That man or, just as likely, that woman, may have done so out of necessity. It was either eat this glistening, gray blob of briny goo or perish.


Beginning 190,000 years ago, a glacial age we identify today as Marine Isotope Stage 6, or MIS6, had set in, cooling and drying out much of the planet. There was widespread drought, leaving the African plains a harsher, more barren substrate for survival – an arena of competition, desperation, and starvation for many species, including ours. Some estimates have the sapiens population dipping to just a few hundred people during MIS6. Like other apes today, we were an endangered species. But through some nexus of intelligence, ecological exploitation, and luck, we managed. Anthropologists argue over what part of Africa would’ve been hospitable enough to rescue sapiens from Darwinian oblivion. Arizona State University archaeologist Curtis Marean, PhD, believes the continent’s southern shore is a good candidate.

For 2 decades, Dr. Marean has overseen excavations at a site called Pinnacle Point on the South African coast. The region has over 9,000 plant species, including the world’s most diverse population of geophytes, plants with underground energy-storage organs such as bulbs, tubers, and rhizomes. These subterranean stores are rich in calories and carbohydrates, and, by virtue of being buried, are protected from most other species (save the occasional tool-wielding chimpanzee). They are also adapted to cold climates and, when cooked, easily digested. All in all, a coup for hunter-gatherers.

The other enticement at Pinnacle Point could be found with a few easy steps toward the sea. Mollusks. Geological samples from MIS6 show South Africa’s shores were packed with mussels, oysters, clams, and a variety of sea snails. We almost certainly turned to them for nutrition.

Dr. Marean’s research suggests that, sometime around 160,000 years ago, at least one group of sapiens began supplementing their terrestrial diet by exploiting the region’s rich shellfish beds. This is the oldest evidence to date of humans consistently feasting on seafood – easy, predictable, immobile calories. No hunting required. As inland Africa dried up, learning to shuck mussels and oysters was a key adaptation to coastal living, one that supported our later migration out of the continent.

Dr. Marean believes the change in behavior was possible thanks to our already keen brains, which supported an ability to track tides, especially spring tides. Spring tides occur twice a month with each new and full moon and result in the greatest difference between high and low tidewaters. The people of Pinnacle Point learned to exploit this cycle. “By tracking tides, we would have had easy, reliable access to high-quality proteins and fats from shellfish every 2 weeks as the ocean receded,” he says. “Whereas you can’t rely on land animals to always be in the same place at the same time.” Work by Jan De Vynck, PhD, a professor at Nelson Mandela University in South Africa, supports this idea, showing that foraging shellfish beds under optimal tidal conditions can yield a staggering 3,500 calories per hour!

“I don’t know if we owe our existence to seafood, but it was certainly important for the population [that Dr.] Curtis studies. That place is full of mussels,” said Ian Tattersall, PhD, curator emeritus with the American Museum of Natural History in New York.

“And I like the idea that during a population bottleneck we got creative and learned how to focus on marine resources.” Innovations, Dr. Tattersall explained, typically occur in small, fixed populations. Large populations have too much genetic inertia to support radical innovation; the status quo is enough to survive. “If you’re looking for evolutionary innovation, you have to look at smaller groups.”

MIS6 wasn’t the only near-extinction in our past. During the Pleistocene epoch, roughly 2.5 million to 12,000 years ago, humans tended to maintain a small population, hovering around a million and later growing to maybe 8 million at most. Periodically, our numbers dipped as climate shifts, natural disasters, and food shortages brought us dangerously close to extinction. Modern humans are descended from the hardy survivors of these bottlenecks.

One especially dire stretch occurred around 1 million years ago. Our effective population (the number of breeding individuals) shriveled to around 18,000, smaller than that of other apes at the time. Worse, our genetic diversity – the insurance policy on evolutionary success and the ability to adapt – plummeted. A similar near extinction may have occurred around 75,000 years ago, the result of a massive volcanic eruption in Sumatra.

Our smarts and adaptability helped us endure these tough times – omnivorism helped us weather scarcity.

A sea of vitamins

Both Dr. Marean and Dr. Tattersall agree that the sapiens hanging on in southern Africa couldn’t have lived entirely on shellfish.

Most likely they also spent time hunting and foraging roots inland, making pilgrimages to the sea during spring tides. Dr. Marean believes coastal cuisine may have allowed a paltry human population to hang on until climate change led to more hospitable terrain. He’s not entirely sold on the idea that marine life was necessarily a driver of human brain evolution.

By the time we incorporated seafood into our diets we were already smart, our brains shaped through millennia of selection for intelligence. “Being a marine forager requires a certain degree of sophisticated smarts,” he said. It requires tracking the lunar cycle and planning excursions to the coast at the right times. Shellfish were simply another source of calories.

Unless you ask Michael Crawford.

Dr. Crawford is a professor at Imperial College London and a strident believer that our brains are those of sea creatures. Sort of.

In 1972, he copublished a paper concluding that the brain is structurally and functionally dependent on an omega-3 fatty acid called docosahexaenoic acid, or DHA. The human brain is composed of nearly 60% fat, so it’s not surprising that certain fats are important to brain health. Nearly 50 years after Dr. Crawford’s study, omega-3 supplements are now a multi-billion-dollar business.

Omega-3s, or more formally, omega-3 polyunsaturated fatty acids (PUFAs), are essential fats, meaning they aren’t produced by the body and must be obtained through diet. We get them from vegetable oils, nuts, seeds, and animals that eat such things. But take an informal poll, and you’ll find most people probably associate omega fatty acids with fish and other seafood.

In the 1970s and 1980s, scientists took notice of the low rates of heart disease in Eskimo communities. Research linked their cardiovascular health to a high-fish diet (though fish cannot produce omega-3s themselves; they source them from algae), and eventually the medical and scientific communities began to rethink fat. Study after study found omega-3 fatty acids to be healthy. They were linked with a lower risk for heart disease and overall mortality. All those decades of parents forcing various fish oils on their grimacing children now had some science behind them. There is such a thing as a good fat.

Recent studies show that some of omega-3s’ purported health benefits were exaggerated, but they do appear to benefit the brain, especially DHA and eicosapentaenoic acid, or EPA. Omega fats provide structure to neuronal cell membranes and are crucial in neuron-to-neuron communication. They increase levels of a protein called brain-derived neurotrophic factor (BDNF), which supports neuronal growth and survival. A growing body of evidence shows omega-3 supplementation may slow down the process of neurodegeneration, the gradual deterioration of the brain that results in Alzheimer’s disease and other forms of dementia.

Popping a daily omega-3 supplement or, better still, eating a seafood-rich diet, may increase blood flow to the brain. In 2019, the International Society for Nutritional Psychiatry Research recommended omega-3s as an adjunct therapy for major depressive disorder. PUFAs appear to reduce the risk for and severity of mood disorders such as depression and to boost attention in children with ADHD as effectively as drug therapies.

Many researchers claim there would’ve been plenty of DHA available on land to support early humans, and marine foods were just one of many sources.

Not Dr. Crawford.

He believes that brain development and function are not only dependent on DHA but, in fact, DHA sourced from the sea was critical to mammalian brain evolution. “The animal brain evolved 600 million years ago in the ocean and was dependent on DHA, as well as compounds such as iodine, which is also in short supply on land,” he said. “To build a brain, you need these building blocks, which were rich at sea and on rocky shores.”

Dr. Crawford cites his early biochemical work showing DHA isn’t readily accessible from the muscle tissue of land animals. Using DHA tagged with a radioactive isotope, he and his colleagues in the 1970s found that “ready-made” DHA, like that found in shellfish, is incorporated into the developing rat brain with 10-fold greater efficiency than plant- and land animal–sourced omega-3s, which exist largely as DHA’s metabolic precursor, alpha-linolenic acid. “I’m afraid the idea that ample DHA was available from the fats of animals on the savanna is just not true,” he countered. According to Dr. Crawford, our tiny, wormlike ancestors were able to evolve primitive nervous systems and flit through the silt thanks to the abundance of healthy fat to be had by living in the ocean and consuming algae.

For over 40 years, Dr. Crawford has argued that rising rates of mental illness are a result of post–World War II dietary changes, especially the move toward land-sourced food and the medical community’s subsequent support of low-fat diets. He feels that omega-3s from seafood were critical to humans’ rapid neural march toward higher cognition, and are therefore critical to brain health. “The continued rise in mental illness is an incredibly important threat to mankind and society, and moving away from marine foods is a major contributor,” said Dr. Crawford.

University of Sherbrooke (Que.) physiology professor Stephen Cunnane, PhD, tends to agree that aquatically sourced nutrients were crucial to human evolution. It’s the importance of coastal living he’s not sure about. He believes hominins would’ve incorporated fish from lakes and rivers into their diet for millions of years. In his view, it wasn’t just omega-3s that contributed to our big brains, but a cluster of nutrients found in fish: iodine, iron, zinc, copper, and selenium among them. “I think DHA was hugely important to our evolution and brain health but I don’t think it was a magic bullet all by itself,” he said. “Numerous other nutrients found in fish and shellfish were very probably important, too, and are now known to be good for the brain.”

Dr. Marean agrees. “Accessing the marine food chain could have had a huge impact on fertility, survival, and overall health, including brain health, in part, due to the high return on omega-3 fatty acids and other nutrients.” But, he speculates, before MIS6, hominins would have had access to plenty of brain-healthy terrestrial nutrition, including meat from animals that consumed omega-3–rich plants and grains.

Dr. Cunnane agrees with Dr. Marean to a degree. He’s confident that higher intelligence evolved gradually over millions of years as mutations inching the cognitive needle forward conferred survival and reproductive advantages – but he maintains that certain advantages like, say, being able to shuck an oyster, allowed an already intelligent brain to thrive.

Foraging marine life in the waters off Africa likely played an important role in keeping some of our ancestors alive and supported our subsequent propagation throughout the world. By this point, the human brain was already a marvel of consciousness and computing, not too dissimilar to the one we carry around today.

In all likelihood, Pleistocene humans got their nutrients and calories wherever they could. If we lived inland, we hunted. Maybe we speared the occasional catfish. We sourced nutrients from fruits, leaves, and nuts. A few times a month, those of us near the coast enjoyed a feast of mussels and oysters.

Dr. Stetka is an editorial director at Medscape.com, a former neuroscience researcher, and a nonpracticing physician. A version of this article first appeared on Medscape.


Cannabinoids promising for improving appetite, behavior in dementia


For patients with dementia, cannabinoids may be a promising intervention for treating neuropsychiatric symptoms (NPS) and food refusal, new research suggests.


Results of a systematic literature review, presented at the 2021 meeting of the American Association for Geriatric Psychiatry, showed that cannabinoids were associated with reduced agitation, longer sleep, and lower NPS. They were also linked to increased meal consumption and weight gain.

Refusing food is a common problem for patients with dementia, often resulting in worsening sleep, agitation, and mood, study investigator Niraj Asthana, MD, a second-year resident in the department of psychiatry, University of California, San Diego, said in an interview. Dr. Asthana noted that certain cannabinoid analogues are now used to stimulate appetite for patients undergoing chemotherapy.
 

Filling a treatment gap

After years of legal and other problems affecting cannabinoid research, there is renewed interest in investigating their use in patients with dementia. Early evidence suggests that cannabinoids may also be beneficial for pain, sleep, and aggression.

The researchers noted that cannabinoids may be especially valuable in areas where there are currently limited therapies, including food refusal and NPS.

“Unfortunately, there are limited treatments available for food refusal, so we’re left with appetite stimulants and electroconvulsive therapy, and although atypical antipsychotics are commonly used to treat NPS, they’re associated with an increased risk of serious adverse events and mortality in older patients,” said Dr. Asthana.

Dr. Asthana and colleague Dan Sewell, MD, carried out a systematic literature review of relevant studies of the use of cannabinoids for dementia patients.

“We found there are a lot of studies, but they’re small scale; I’d say the largest was probably about 50 patients, with most studies having 10-50 patients,” said Dr. Asthana. In part, this may be because, until very recently, research on cannabinoids was controversial.

The review focused on the potential applications of cannabinoids in the treatment of food refusal and NPS in patients with dementia.

They identified 23 relevant studies of the use of synthetic cannabinoids, including dronabinol and nabilone, for dementia patients. These products contain tetrahydrocannabinol (THC), the main psychoactive compound in cannabis.
 

More research coming

Several studies showed that cannabinoid use was associated with reduced nighttime motor activity, improved sleep duration, reduced agitation, and lower Neuropsychiatric Inventory scores.

Several studies revealed a link between cannabinoid use and increased appetite and the consumption of more meals. One crossover placebo-controlled trial showed an overall increase in body weight among dementia patients who took dronabinol.

This suggests there might be something to the “colloquial cultural association between cannabinoids and the munchies,” said Dr. Asthana.

One possible mechanism for the effects on appetite is that cannabinoids increase levels of ghrelin, known as the “hunger hormone,” and decrease levels of leptin, a hormone that inhibits hunger. Dr. Asthana noted that, in these studies, the dose of THC was low and that, overall, cannabinoids appeared to be safe.

“We found that, at least in these small-scale studies, cannabinoid analogues are well tolerated,” possibly because of the relatively low doses of THC, said Dr. Asthana. “They generally don’t seem to have a ton of side effects; they may make people a little sleepy, which is actually good, because these patients also have a lot of trouble sleeping.”

He noted that more recent research suggests cannabidiol oil may reduce agitation by up to 40%.

“Now that cannabis is losing a lot of its stigma, both culturally and in the scientific community, you’re seeing a lot of grant applications for clinical trials,” said Dr. Asthana. “I’m excited to see what we find in the next 5-10 years.”

In a comment, Kirsten Wilkins, MD, associate professor of psychiatry, Yale University, New Haven, Conn., who is also a geriatric psychiatrist at the Veterans Affairs Connecticut Health Care System, welcomed the new research in this area.

“With limited safe and effective treatments for food refusal and neuropsychiatric symptoms of dementia, Dr. Asthana and Dr. Sewell highlight the growing body of literature suggesting cannabinoids may be a novel treatment option,” she said.

A version of this article first appeared on Medscape.com.


Despite risks and warnings, CNS polypharmacy is prevalent among patients with dementia


A significant proportion of community-dwelling older adults with dementia take three or more central nervous system medications despite guidelines that say to avoid this dangerous practice, new research suggests.

Investigators found that 14% of these individuals were receiving CNS-active polypharmacy, defined as combinations of multiple psychotropic and opioid medications taken for more than 30 days.

“For most patients, the risks of these medications, particularly in combination, are almost certainly greater than the potential benefits,” said Donovan Maust, MD, associate director of the geriatric psychiatry program, University of Michigan, Ann Arbor.

The study was published online March 9 in JAMA.
 

Serious risks

Memory impairment is the cardinal feature of dementia, but behavioral and psychological symptoms, which can include apathy, delusions, and agitation, are common during all stages of illness and cause significant caregiver distress, the researchers noted.

They noted that there is a dearth of high-quality evidence to support prescribing these medications in this patient population, yet “clinicians regularly prescribe psychotropic medications to community-dwelling persons with dementia in rates that far exceed use in the general older adult population.”

The Beers Criteria, from the American Geriatrics Society, advise against the practice of CNS polypharmacy because of the significant increase in risk for falls as well as impaired cognition, cardiac conduction abnormalities, respiratory suppression, and death when polypharmacy involves opioids.

The researchers also noted that previous European studies of polypharmacy in patients with dementia have not included antiepileptic medications or opioids, so the true extent of CNS-active polypharmacy may be “significantly” underestimated.

To determine the prevalence of polypharmacy with CNS-active medications among community-dwelling older adults with dementia, the researchers analyzed data on prescription fills for nearly 1.2 million community-dwelling Medicare patients with dementia.

The primary outcome was the prevalence of CNS-active polypharmacy in 2018, defined as exposure, for more than 30 consecutive days, to three or more medications from the following drug classes: antidepressants, antipsychotics, antiepileptics, benzodiazepines, nonbenzodiazepine benzodiazepine receptor agonist hypnotics (Z-drugs), and opioids.
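
That “three or more medications for more than 30 consecutive days” criterion boils down to finding stretches of overlapping days’ supply in prescription fill data. Purely as an illustration of that logic (not the study’s actual claims-processing pipeline), here is a minimal Python sketch that flags a hypothetical patient meeting such a definition; the fill records, class labels, and helper functions are all invented for the example.

```python
from collections import defaultdict
from datetime import date, timedelta

# Hypothetical CNS-active drug classes, mirroring those listed above.
CNS_CLASSES = {
    "antidepressant", "antipsychotic", "antiepileptic",
    "benzodiazepine", "z_drug", "opioid",
}

def covered_polypharmacy_days(fills, min_drugs=3):
    """Calendar days on which >= min_drugs distinct CNS-active medications overlap.

    `fills` is a list of (drug_name, drug_class, fill_date, days_supplied)
    tuples for a single patient -- an illustrative stand-in for claims data.
    """
    coverage = defaultdict(set)  # day -> distinct CNS-active drugs covered that day
    for drug, drug_class, fill_date, days_supplied in fills:
        if drug_class not in CNS_CLASSES:
            continue
        for offset in range(days_supplied):
            coverage[fill_date + timedelta(days=offset)].add(drug)
    return sorted(day for day, drugs in coverage.items() if len(drugs) >= min_drugs)

def longest_consecutive_run(days):
    """Length of the longest stretch of consecutive calendar days in a sorted list."""
    longest = run = 0
    previous = None
    for day in days:
        run = run + 1 if previous is not None and (day - previous).days == 1 else 1
        longest = max(longest, run)
        previous = day
    return longest

# Example: three overlapping 90-day fills from different CNS-active classes.
fills = [
    ("sertraline", "antidepressant", date(2018, 1, 1), 90),
    ("quetiapine", "antipsychotic", date(2018, 1, 15), 90),
    ("gabapentin", "antiepileptic", date(2018, 2, 1), 90),
]
overlap = covered_polypharmacy_days(fills)
print(longest_consecutive_run(overlap) > 30)  # True: all three overlap Feb 1 - Mar 31
```

In real claims data, back-to-back refills and early refills of the same drug would need to be reconciled before counting overlap, so an actual implementation would be considerably more involved than this sketch.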

They found that roughly one in seven (13.9%) patients met criteria for CNS-active polypharmacy. Of those receiving a CNS-active polypharmacy regimen, 57.8% had been doing so for longer than 180 days, and 6.8% had been doing so for a year. Nearly 30% of patients were exposed to five or more medications, and 5.2% were exposed to five or more medication classes.
 

Conservative approach warranted

Nearly all (92%) patients taking three or more CNS-active medications were taking an antidepressant, “consistent with their place as the psychotropic class most commonly prescribed both to older adults overall and those with dementia,” the investigators noted.

There is minimal high-quality evidence to support the efficacy of antidepressants for the treatment of depression for patients with dementia, they pointed out.

Nearly half (47%) of patients who were taking three or more CNS-active medications received at least one antipsychotic, most often quetiapine. Antipsychotics are not approved for people with dementia but are often prescribed off label for agitation, anxiety, and sleep problems, the researchers noted.

Nearly two thirds (62%) of patients with dementia who were taking three or more CNS drugs were taking an antiepileptic (most commonly, gabapentin); 41%, benzodiazepines; 32%, opioids; and 6%, Z-drugs.

The most common polypharmacy class combination included at least one antidepressant, one antiepileptic, and one antipsychotic; this combination accounted for 12.9% of polypharmacy days.

Despite limited high-quality evidence of efficacy, the prescribing of psychotropic medications and opioids is “pervasive” for adults with dementia in the United States, the investigators noted.

“Especially given that older adults with dementia might not be able to convey side effects they are experiencing, I think clinicians should be more conservative in how they are prescribing these medications and skeptical about the potential for benefit,” said Dr. Maust.

Regarding study limitations, the researchers noted that prescription medication claims may have led to an overestimation of the exposure to polypharmacy, insofar as the prescriptions may have been filled but not taken or were taken only on an as-needed basis.

In addition, the investigators were unable to determine the appropriateness of the particular combinations used or to examine the specific harms associated with CNS-active polypharmacy.
 

 

 

A major clinical challenge

Weighing in on the results, Howard Fillit, MD, founding executive director and chief science officer of the Alzheimer’s Drug Discovery Foundation, said the study is important because polypharmacy is one of the “geriatric giants, and the question is, what do you do about it?”

Dr. Fillit said it is important to conduct a careful medication review for all older patients, “making sure that the use of each drug is appropriate. The most important thing is to define what is the appropriate utilization of these kinds of drugs. That goes for both overutilization or misuse of these drugs and underutilization, where people are undertreated for symptoms that can’t be managed by behavioral management, for example,” Dr. Fillit said.

Dr. Fillit also said the finding that about 14% of dementia patients were receiving three or more of these drugs “may not be an outrageous number, because these patients, especially as they get into moderate and severe stages of disease, can be incredibly difficult to manage.

“Very often, dementia patients have depression, and up to 90% will have agitation and even psychosis during the course of dementia. And many of these patients need these types of drugs,” said Dr. Fillit.

Echoing the authors, Dr. Fillit said a key limitation of the study is not knowing whether the prescribing was appropriate or not.

The study was supported by a grant from the National Institute on Aging. Dr. Maust and Dr. Fillit have disclosed no relevant financial relationships.

A version of this article first appeared on Medscape.com.


Issue: Neurology Reviews - 29(4)
Article Source: FROM JAMA
Publish date: March 16, 2021

Novel Alzheimer’s drug slows cognitive decline in phase 2 trial

Results from a phase 2 placebo-controlled trial of the investigational antiamyloid drug donanemab show that the novel agent met the primary outcome of slowing cognitive decline in patients with early symptomatic Alzheimer’s disease (AD). 

Results from the TRAILBLAZER-ALZ trial were presented at the 2021 International Conference on Alzheimer’s and Parkinson’s Diseases (AD/PD) and were simultaneously published online March 13 in the New England Journal of Medicine.

As previously reported by Medscape Medical News, topline results showed that donanemab slowed cognitive decline by 32% on the Integrated AD Rating Scale (iADRS) from baseline to 76 weeks relative to placebo.

The newly released detailed findings showed that “the use of donanemab resulted in a better composite score for cognition and for the ability to perform activities of daily living than placebo at 76 weeks, although results for secondary outcomes were mixed,” the investigators, with first author Mark A. Mintun, MD, an employee of Eli Lilly, reported.   

Score changes on the Clinical Dementia Rating Scale-Sum of Boxes (CDR-SB) and the 13-item cognitive subscale of the AD Assessment Scale (ADAS-Cog13) favored donanemab, but the differences between the two treatment groups were not significant. In addition, score changes on the AD Cooperative Study–Instrumental Activities of Daily Living inventory (ADCS-iADL) and the Mini-Mental State Examination (MMSE) were not “substantial.”

However, the donanemab group did show an 85-centiloid greater reduction in amyloid plaque level at 76 weeks, as shown on PET, compared with the placebo group.
 

Proof of concept?

The humanized antibody donanemab, which was previously known as LY3002813, targets a modified form of deposited amyloid-beta (A-beta) peptide called N3pG.

The randomized, placebo-controlled, double-blind TRAILBLAZER-ALZ trial, which was described as a “phase 2 proof of concept trial” in the AD/PD program, was conducted at 56 sites in the United States and Canada and included 257 patients between the ages of 60 and 85 years (52% were women). PET confirmed tau and amyloid deposition in all participants.

The active treatment group (n = 131) was randomly assigned to receive donanemab 700 mg for three doses; after that, treatment was bumped up to 1,400 mg. Both the donanemab and placebo groups (n = 126) received treatment intravenously every 4 weeks for up to 72 weeks.

Participants also underwent 18F-florbetapir and 18F-flortaucipir PET scans at various time points and completed a battery of cognitive tests.

The study’s primary outcome measure was the change from baseline to 76 weeks in the composite score for cognition and daily function, as measured by the iADRS, which combines the ADAS-Cog13 and the ADCS-iADL.

This measure ranges from 0 to 144, with lower scores associated with greater cognitive impairment. Both treatment groups had an iADRS score of 106 at baseline.
 

More research needed

Results showed that the score change from baseline on the iADRS was –6.86 for the active treatment group vs. –10.06 for the placebo group (group difference, 3.2; 95% confidence interval [CI], 0.12-6.27; P = .04). Although the difference was significant, the investigators noted that “the trial was powered to show a 6-point difference,” a threshold that was not met.
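As a quick check for readers relating the topline 32% figure to these raw scores, the relative slowing is simply the 3.2-point between-group difference expressed as a fraction of the placebo group’s decline. The snippet below is an illustrative back-of-the-envelope calculation using the reported point estimates, not an analysis from the trial.

```python
# Back-of-the-envelope check of the topline "32% slowing" figure,
# using the reported iADRS point estimates (illustrative only).
donanemab_change = -6.86   # mean iADRS change, donanemab group
placebo_change = -10.06    # mean iADRS change, placebo group

difference = donanemab_change - placebo_change        # 3.20 points in favor of donanemab
relative_slowing = difference / abs(placebo_change)   # 3.20 / 10.06 ≈ 0.32

print(f"{difference:.2f} points; {relative_slowing:.0%} relative slowing")
# -> 3.20 points; 32% relative slowing
```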

Differences in score changes from baseline to 76 weeks for the treatment vs. placebo groups on the following secondary outcome measures were:

  • CDR-SB: –0.36 (95% CI, –0.83 to 0.12).
  • ADAS-Cog13: –1.86 (95% CI, –3.63 to –0.09).
  • ADCS-iADL: 1.21 (95% CI, –0.77 to 3.2).
  • MMSE: 0.64 (95% CI, –0.4 to 1.67).

The CDR-SB was designated as the first secondary outcome, and because it did not show a significant between-group difference, “the hierarchy failed and no definite conclusions can be drawn from data regarding the differences between groups in the change in the ADAS-Cog13,” the investigators wrote.

In addition, the differences in scores on the latter two secondary outcomes were not “substantial,” they reported.

However, at 76 weeks, the donanemab group showed a reduction of 84.13 centiloids in amyloid plaque level vs. an increase of 0.93 centiloids in the placebo group (between-group difference, 85.06 centiloids). At 24 weeks, the active-treatment group had a 67.83-centiloid greater reduction vs. the placebo group.

In addition, 40%, 59.8%, and 67.8% of the donanemab group achieved “amyloid-negative status” at 24, 52, and 76 weeks, respectively. Amyloid-negative status was defined as an amyloid plaque level of less than 24.1 centiloids.

Total incidence of death or serious adverse events did not differ significantly between the groups. However, the donanemab group had significantly more reports of amyloid-related imaging abnormalities with edema (ARIA-E) than the placebo group (26.7% vs. 0.8%).

Overall, the researchers noted that more trials of longer duration with larger patient numbers are warranted “to further determine the efficacy and safety of donanemab” in AD.
 

Positive signal?

In a statement, Maria Carrillo, PhD, chief science officer for the Alzheimer’s Association, said the organization “is encouraged by this promising data.

“It is the first phase 2 Alzheimer’s trial to show positive results on a primary outcome measure related to memory and thinking,” Dr. Carrillo said. However, “more work needs to be done on this experimental drug therapy.”

Dr. Carrillo noted that because the trial was moderately sized and only 180 participants completed the study, “we look forward to the results of a second, larger phase 2 trial of this drug.”

Still, she added, there were several “novel and innovative aspects” in the way the study was conducted, noting that it showcases the evolution of AD research.

“I’m hopeful for the future,” Dr. Carrillo said.

Also commenting on the results, Howard Fillit, MD, neuroscientist and founding executive director and chief science officer of the Alzheimer’s Drug Discovery Foundation, said the study showed “the pharmacology works” and that the drug did what it was supposed to do in terms of removing A-beta plaque.

“It also gave us a signal in a relatively small phase 2 study that there might be a modest cognitive benefit,” said Dr. Fillit, who was not involved with the research.

He noted that, although the slowing in the rate of decline was statistically significant, it remains to be seen whether it is clinically meaningful, particularly in light of the fact that the secondary outcome results were mixed.

“Basically, it was a positive study that probably needs to be followed by another, much larger study to get us to really see the benefit,” Dr. Fillit said.

Dr. Mintun is an employee of Eli Lilly, which funded the study. Dr. Carrillo and Dr. Fillit have reported no relevant financial relationships.

A version of this article first appeared on Medscape.com.


Issue: Neurology Reviews - 29(4)
Publish date: March 16, 2021

Palliative care for patients with dementia: When to refer?

Palliative care for people with dementia is increasingly recognized as a way to improve quality of life and provide relief from the myriad physical and psychological symptoms of advancing neurodegenerative disease. But unlike in cancer, relatively few patients with terminal dementia receive referrals to palliative care.

A new literature review has found that these referrals vary widely among patients with dementia – with many occurring very late in the disease process – and do not reflect any consistent criteria based on patient needs.

For their research, published March 2 in the Journal of the American Geriatrics Society, Li Mo, MD, of the University of Texas MD Anderson Cancer Center in Houston, and colleagues looked at nearly 60 studies dating back to the early 1990s that contained information on referrals to palliative care for patients with dementia. While a palliative care approach can be provided by nonspecialists, all the included studies dealt at least in part with specialist care.
 

Standardized criteria are lacking

The investigators found advanced or late-stage dementia to be the most common reason cited for referral, with three quarters of the studies recommending palliative care for late-stage or advanced dementia, generally without qualifying what symptoms or needs were present. Patients received palliative care across a range of settings, including nursing homes, hospitals, and their own homes, though many articles did not include information on where patients received care.

A fifth of the articles suggested that medical complications of dementia, including falls, pneumonia, and ulcers, should trigger referrals to palliative care, while another fifth cited poor prognosis, defined variously as a life expectancy of between 6 months and 2 years. Poor nutrition status was identified in 10% of studies as meriting referral.

Only 20% of the studies identified patient needs – evidence of psychological distress or functional decline, for example – as criteria for referral, despite these being ubiquitous in dementia. The authors said they were surprised by this finding, which could possibly be explained, they wrote, by “the interest among geriatrician, neurologist, and primary care teams to provide good symptom management,” reflecting a de facto palliative care approach. “There is also significant stigma associated with a specialist palliative care referral,” the authors noted.

Curiously, the researchers noted, in more than a quarter of the studies a new diagnosis of dementia was itself a trigger for referral, a finding that possibly reflects delayed diagnoses.

The findings revealed “heterogeneity in the literature in reasons for involving specialist palliative care, which may partly explain the variation in patterns of palliative care referral,” Dr. Mo and colleagues wrote, stressing that more standardized criteria are urgently needed to bring dementia in line with cancer in terms of providing timely palliative care.

Patients with advancing dementia have little chance to self-report symptoms, meaning that more attention to patient complaints earlier in the disease course, and greater sensitivity to patient distress, are required. By routinely screening symptoms, clinicians could use specific cutoffs “as triggers to initiate automatic timely palliative care referral,” the authors concluded, noting that more research was needed before these cutoffs, whether based on symptom intensity or other measures, could be calculated.

Dr. Mo and colleagues acknowledged as weaknesses of their study the fact that a third of the articles in the review were based on expert consensus, while others did not distinguish clearly between primary and specialist palliative care.
 

 

 

A starting point for further discussion

Asked to comment on the findings, Elizabeth Sampson, MD, a palliative care researcher at University College London, praised Dr. Mo and colleagues’ study as “starting to pull together the strands” of a systematic approach to referrals and access to palliative care in dementia.

Dr. Elizabeth Sampson


“Sometimes you need a paper like this to kick off the discussion to say look, this is where we are,” Dr. Sampson said, noting that the focus on need-based criteria dovetailed with a “general feeling in the field that we need to really think about needs, and what palliative care needs might be. What the threshold for referral should be we don’t know yet. Should it be three unmet needs? Or five? We’re still a long way from knowing.”

Dr. Sampson’s group is leading a UK-government funded research effort that aims to develop cost-effective palliative care interventions in dementia, in part through a tool that uses caregiver reports to assess symptom burden and patient needs. The research program “is founded on a needs-based approach, which aims to look at people’s individual needs and responding to them in a proactive way,” she said.

One of the obstacles to timely palliative care in dementia, Dr. Sampson said, is weighing resource allocation against what can be wildly varying prognoses. “Hospices understand when someone has terminal cancer and [is] likely to die within a few weeks, but it’s not unheard of for someone in very advanced stages of dementia to live another year,” she said. “There are concerns that a rapid increase in people with dementia being moved to palliative care could overwhelm already limited hospice capacity. We would argue that the best approach is to get palliative care out to where people with dementia live, which is usually the care home.”

Dr. Mo and colleagues’ study received funding from the National Institutes of Health, and its authors disclosed no financial conflicts of interest. Dr. Sampson’s work is supported by the UK’s Economic and Social Research Council and National Institute for Health Research. She disclosed no conflicts of interest.


Issue: Neurology Reviews - 29(4)
Article Source: FROM THE JOURNAL OF THE AMERICAN GERIATRICS SOCIETY
Publish date: March 10, 2021

Sleep apnea and cognitive impairment are common bedfellows

More than 50% of patients with cognitive impairment have obstructive sleep apnea (OSA), according to findings that also show that OSA severity is correlated with the degree of cognitive impairment and with sleep quality.

“The study shows obstructive sleep apnea is common in patients with cognitive impairment. The results suggest that people with cognitive impairment should be assessed for sleep apnea if they have difficulty with sleep or if they demonstrate sleep-related symptoms,” said study investigator David Colelli, MSc, research coordinator at Sunnybrook Health Sciences Centre in Toronto.

The findings were released ahead of the study’s scheduled presentation at the annual meeting of the American Academy of Neurology.
 

Linked to cognitive impairment

OSA is a common sleep disorder associated with an increased risk of developing cognitive impairment. Although prevalent in the general population, it is even more common among patients with dementia.

However, the investigators noted, the frequency and predictors of OSA have not been well established in Alzheimer’s disease and other related conditions such as vascular dementia.

The investigators had conducted a previous feasibility study investigating a home sleep monitor as an OSA screening tool. The current research examined potential correlations between OSA detected by this monitor and cognitive impairment.

The study included 67 patients with cognitive impairment due to neurodegenerative or vascular disease. The range of disorders included Alzheimer’s disease, mild cognitive impairment caused by Alzheimer’s disease, dementia caused by Parkinson’s or Lewy body disease, and vascular conditions.

Participants had a mean age of 72.8 years and 44.8% were male. The mean body mass index (BMI) was 25.6 kg/m2.

These participants completed a home sleep apnea test, which is an alternative to polysomnography for the detection of OSA.

Researchers identified OSA in 52.2% of the study population. This, Mr. Colelli said, “is in the range” of other research investigating sleep and cognitive impairment.

“In the general population, however, this number is a lot lower – in the 10%-20% range depending on the population or country you’re looking at,” Mr. Colelli said.

He emphasized that, without an objective sleep test, some patients may be unaware of their sleep issues. Those with cognitive impairment may “misjudge how they’re sleeping,” especially if they sleep without a partner, so it’s possible that sleep disorder symptoms often go undetected.
 

Bidirectional relationship?

Participants answered questionnaires on sleep, cognition, and mood. They also completed the 30-point Montreal Cognitive Assessment (MoCA) to assess language, visuospatial abilities, memory and recall, and abstract thinking.

Scores on this test range from 0 to 30, with a score of 26 or higher signifying normal, 18-25 indicating mild cognitive impairment, and 17 or lower indicating moderate to severe cognitive impairment. The average score for study participants with OSA was 20.5, compared with 23.6 for those without the sleep disorder.
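
The MoCA cutoffs quoted above amount to a simple threshold rule. The short Python sketch below encodes those published cutoffs for illustration only; the function name and category labels are hypothetical and are not part of the study or of any validated clinical tool.

```python
def classify_moca(score: float) -> str:
    """Map a MoCA total (0-30) to the categories described above.

    The cutoffs (>=26 normal, 18-25 mild impairment, <=17 moderate to
    severe impairment) are the ones quoted in the article; this helper
    is only an illustrative sketch, not a validated clinical tool.
    """
    if not 0 <= score <= 30:
        raise ValueError("MoCA scores range from 0 to 30")
    if score >= 26:
        return "normal"
    if score >= 18:
        return "mild cognitive impairment"
    return "moderate to severe cognitive impairment"


# Both group means reported in the study fall in the 'mild' band:
print(classify_moca(20.5))  # OSA group mean
print(classify_moca(23.6))  # non-OSA group mean
```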

Results showed OSA was significantly associated with a lower score on the MoCA scale (odds ratio, 0.40; P = .048). “This demonstrated an association of OSA with lower cognitive scores,” Mr. Colelli said.

The analysis also showed that OSA severity was correlated with actigraphy-derived sleep variables, including lower total sleep time, greater sleep onset latency, lower sleep efficiency, and more awakenings.

The study was too small to determine whether a specific diagnosis of cognitive impairment affected the link to OSA, Mr. Colelli said. “But definitely future research should be directed towards looking at this.”

Obesity is a risk factor for OSA, but the mean BMI in the study was not in the obese range of 30 and over. This, Mr. Colelli said, suggests that sleep apnea may present differently in those with cognitive impairment.

“Sleep apnea in this population might not present with the typical risk factors of obesity or snoring or feeling tired.”

While the new study “adds to the understanding that there’s a link between sleep and cognitive impairment, the direction of that link isn’t entirely clear,” Mr. Colelli said.

“It’s slowly becoming appreciated that the relationship might be bidirectional, where sleep apnea might be contributing to the cognitive impairment and cognitive impairment could be contributing to the sleep issues.”

The study highlights how essential sleep is to mental health, Mr. Colelli said. “I feel, and I’m sure you do too, that if you don’t get good sleep, you feel tired during the day and you may not have the best concentration or memory.”

Identifying sleep issues in patients with cognitive impairment is important, as treatment and management of these issues could affect outcomes including cognition and quality of life, he added.

“Future research should be directed to see if treatment of sleep disorders with continuous positive airway pressure (CPAP), which is the gold standard, and various other treatments, can improve outcomes.” Future research should also examine OSA prevalence in larger cohorts.
 

Common, undertreated

Commenting on the research, Lei Gao, MD, assistant professor of anesthesia at Harvard Medical School, Boston, whose areas of expertise include disorders of cognition, sleep, and circadian rhythm, said he believes the findings are important. “It highlights how common and potentially undertreated OSA is in this age group, and in particular, its link to cognitive impairment.”

OSA is often associated with significant comorbidities, as well as sleep disruption, Dr. Gao noted. One of the study’s strengths was including objective assessment of sleep using actigraphy. “It will be interesting to see to what extent the OSA link to cognitive impairment is via poor sleep or disrupted circadian rest/activity cycles.”

It would also be interesting “to tease out whether OSA is more linked to dementia of vascular etiologies due to common risk factors, or whether it is pervasive to all forms of dementia,” he added.

A version of this article first appeared on Medscape.com.

Issue
Neurology Reviews- 29(4)

FROM AAN 2021

Publish date: March 5, 2021

Neurologic disorders ubiquitous and rising in the U.S.

Article Type
Changed
Mon, 04/05/2021 - 14:07

Stroke, dementias, and migraine cause the most disability among neurological disorders in the United States, according to new findings derived from the 2017 Global Burden of Disease study. 

Dr. Valery Feigin

The authors of the analysis, led by Valery Feigin, MD, PhD, of New Zealand’s National Institute for Stroke and Applied Neurosciences, and published in the February 2021 issue of JAMA Neurology, looked at prevalence, incidence, mortality, and disability-adjusted life years for 14 neurological disorders across 50 states between 1990 and 2017. The diseases included in the analysis were stroke, Alzheimer’s disease and other dementias, Parkinson’s disease, epilepsy, multiple sclerosis, motor neuron disease, headaches, traumatic brain injury, spinal cord injuries, brain and other nervous system cancers, meningitis, encephalitis, and tetanus.
 

Tracking the burden of neurologic diseases

Dr. Feigin and colleagues estimated that a full 60% of the U.S. population lives with one or more of these disorders, a figure much greater than previous estimates for neurological disease burden nationwide. Tension-type headache and migraine were the most prevalent in the analysis by Dr. Feigin and colleagues. During the study period, they found, prevalence, incidence, and disability burden of nearly all the included disorders increased, with the exception of brain and spinal cord injuries, meningitis, and encephalitis.

The researchers attributed most of the rise in noncommunicable neurological diseases to population aging. An age-standardized analysis found trends for stroke and Alzheimer’s disease and other dementias to be declining or flat. Age-standardized stroke incidence dropped by 16% from 1990 to 2017, while stroke mortality declined by nearly a third, and stroke disability by a quarter. Age-standardized incidence of Alzheimer’s disease and other dementias dropped by 12%, and their prevalence by 13%, during the study period, though dementia mortality and disability were seen increasing.
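
For readers unfamiliar with the term, an age-standardized rate simply reweights age-specific rates to a fixed reference population so that two time points can be compared without the confounding effect of population aging. The Python sketch below is a generic illustration of direct standardization with made-up weights and rates; it is not the Global Burden of Disease pipeline or its data.

```python
def age_standardized_rate(age_specific_rates, standard_weights):
    """Direct age standardization: weight each age group's rate by a
    fixed standard-population share and sum.

    Both arguments map age-group labels to values; the weights should
    sum to 1. Generic illustration only; all numbers below are invented.
    """
    return sum(rate * standard_weights[group]
               for group, rate in age_specific_rates.items())


# Hypothetical stroke incidence per 100,000 in three broad age bands,
# standardized to the same (made-up) reference weights for two years.
weights = {"<45": 0.60, "45-64": 0.25, "65+": 0.15}
rates_1990 = {"<45": 20.0, "45-64": 250.0, "65+": 1200.0}
rates_2017 = {"<45": 18.0, "45-64": 220.0, "65+": 1000.0}

for year, rates in (("1990", rates_1990), ("2017", rates_2017)):
    print(year, round(age_standardized_rate(rates, weights), 1))
# Because both years use the same reference weights, any difference
# reflects changes in age-specific risk rather than population aging.
```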

The authors surmised that the age-standardized declines in stroke and dementias could reflect that “primary prevention of these disorders are beginning to show an influence.” With dementia, which is linked to cognitive reserve and education, “improving educational levels of cohort reaching the age groups at greatest risk of disease may also be contributing to a modest decline over time,” Dr. Feigin and his colleagues wrote.

Parkinson’s disease and multiple sclerosis, meanwhile, were both seen rising in incidence, prevalence, and disability-adjusted life years (DALYs) even with age-standardized figures. The United States saw comparatively more disability in 2017 from dementias, Parkinson’s disease, epilepsy, multiple sclerosis, motor neuron disease, and headache disorders, which together comprised 6.7% of DALYs, compared with 4.4% globally; these also accounted for a higher share of mortality in the U.S. than worldwide. The authors attributed at least some of the difference to better case ascertainment in the U.S.
 

Regional variations

The researchers also reported variations in disease burden by state and region. While previous studies have identified a “stroke belt” concentrated in North Carolina, South Carolina, and Georgia, the new findings point to stroke disability highest in Alabama, Arkansas, and Mississippi, and mortality highest in Alabama, Mississippi, and South Carolina. The researchers noted increases in dementia mortality in these states, “likely attributable to the reciprocal association between stroke and dementia.”

Northern states saw higher burdens of multiple sclerosis compared with the rest of the country, while eastern states had higher rates of Parkinson’s disease.

Such regional and state-by-state variations, Dr. Feigin and colleagues wrote in their analysis, “may be associated with differences in the case ascertainment, as well as access to health care; racial/ethnic, genetic, and socioeconomic diversity; quality and comprehensiveness of preventive strategies; and risk factor distribution.”

The researchers noted as a limitation of their study that the 14 diseases captured were not an exhaustive list of neurological conditions; chronic lower back pain, a condition included in a previous major study of the burden of neurological disease in the United States, was omitted, as were restless legs syndrome and peripheral neuropathy. The researchers cited changes to coding practice in the U.S. and accuracy of medical claims data as potential limitations of their analysis. The Global Burden of Disease study is funded by the Bill and Melinda Gates Foundation, and several of Dr. Feigin’s coauthors reported financial relationships with industry.
 

Time to adjust the stroke belt?

Amelia Boehme, PhD, a stroke epidemiologist at Columbia University Mailman School of Public Health in New York, said in an interview that the current study added to recent findings showing surprising local variability in stroke prevalence, incidence, and mortality. “What we had always conceptually thought of as the ‘stroke belt’ isn’t necessarily the case,” Dr. Boehme said, but is rather subject to local, county-by-county variations. “Looking at the data here in conjunction with what previous authors have found, it raises some questions as to whether or not state-level data is giving a completely accurate picture, and whether we need to start looking at the county level and adjust for populations and age.” Importantly, Dr. Boehme said, data collected in the Global Burden of Disease study tends to be exceptionally rigorous and systematic, adding weight to Dr. Feigin and colleagues’ suggestions that prevention efforts may be making a dent in stroke and dementia. 

Dr. Amelia Boehme

“More data is always needed before we start to say we’re seeing things change,” Dr. Boehme noted. “But any glimmer of optimism is welcome, especially with regard to interventions that have been put in place, to allow us to build on those interventions.”

Dr. Boehme disclosed no financial conflicts of interest.

Issue
Neurology Reviews- 29(4)

FROM JAMA NEUROLOGY

Publish date: March 2, 2021

Core feature of frontotemporal dementia may aid diagnosis

Article Type
Changed
Thu, 12/15/2022 - 15:41

Increased white matter hyperintensities (WMH) are strongly associated with Alzheimer’s disease, but new research reveals they are also a “core feature” of frontotemporal dementia (FTD), findings that may help physicians diagnose this difficult-to-identify disease, which affects adults in their prime.

“The assessment of WMH can aid differential diagnosis of bvFTD [behavioral-variant FTD] against other neurodegenerative conditions in the absence of vascular risk factors, especially when considering their spatial distribution,” said senior author Ramón Landin-Romero, PhD, Appenzeller Neuroscience Fellow, Frontotemporal Dementia Research Group, University of Sydney.

“Clinicians can ask for specific sequences in routine MRI scans to visually detect WMH,” said Dr. Landin-Romero, who is also a senior lecturer in the School of Psychology and Brain and Mind Center.

The study was published online Feb. 17 in Neurology.
 

Difficult diagnosis

“FTD is a collection of unrecognized young-onset (before age 65) dementia syndromes that affect people in their prime,” said Dr. Landin-Romero. He added that heterogeneity in progression trajectories and symptoms, which can include changes in behavior and personality, language impairments, and psychosis, make it a difficult disease to diagnose.

“As such, our research was motivated by the need of sensitive and specific biomarkers of FTD, which are urgently needed to aid diagnosis, prognosis, and treatment development,” he said.

Previous research has been limited; there have been only a “handful” of cohort and case studies, along with studies of individuals with mutations in a single FTD-causative gene.

FTD is genetically and pathologically complex, and there has been no clear correlation between genetic mutations/underlying pathology and clinical presentation, Dr. Landin-Romero said.

WMH are common in older individuals and are linked to increased risk for cognitive impairment and dementia. Traditionally, they have been associated with vascular risk factors, such as smoking and diabetes. “But the presentation of WMH in FTD and its associations with the severity of symptoms and brain atrophy across FTD symptoms remains to be established,” said Dr. Landin-Romero.
 

Higher disease severity

To explore the possible association, the researchers studied 129 patients with either bvFTD (n = 64; mean age, 64 years) or Alzheimer’s disease (n = 65; mean age, 64.66 years).

Neuropsychological assessments, medical and neurologic examinations, clinical interview, and structural brain MRI were conducted for all patients, who were compared with 66 age-, sex-, and education-matched healthy control persons (mean age, 64.69 years).

Some participants in the FTD, Alzheimer’s disease, and healthy control groups (n = 54, 44, and 26, respectively) also underwent genetic screening. Postmortem pathology findings were available for a small number of FTD and Alzheimer’s disease participants (n = 13 and 5, respectively).

The medical history included lifestyle and cardiovascular risk factors, as well as other health and neurologic conditions and medication history. Hypertension, hypercholesterolemia, diabetes, and smoking were used to assess vascular risk.

The FTD and Alzheimer’s disease groups did not differ with regard to disease duration (3.55 years [SD, 1.75] and 3.24 years [SD, 1.59], respectively). However, disease severity was significantly higher among those with FTD than among those with Alzheimer’s disease, as measured by the FTD Rating Scale Rasch score (–0.52 [SD, 1.28] vs. 0.78 [SD, 1.55]; P < .001).

Compared with healthy controls, patients in the FTD and Alzheimer’s disease groups scored significantly lower on the Addenbrooke’s Cognitive Examination–Revised (ACE-R) or ACE-III scale. Patients with Alzheimer’s disease showed “disproportionately larger deficits” in memory and visuospatial processing, compared with those with FTD, whereas those with FTD performed significantly worse than those with Alzheimer’s disease in the fluency subdomain.

A larger number of patients in the FTD group screened positive for genetic abnormalities than in the Alzheimer’s disease group; no participants in the healthy control group had genetic mutations.
 

Unexpected findings

Mean WMH volume was significantly higher in participants with FTD than in participants with Alzheimer’s disease and in healthy controls (0.76 mL, 0.40 mL, and 0.12 mL, respectively). These larger volumes contributed to greater disease severity and cortical atrophy. Moreover, disease severity was “found to be a strong predictor of WMH volume in FTD,” the authors stated. Among patients with FTD, WMH volumes did not differ significantly with regard to genetic mutation status or presence of strong family history.

After controlling for age, vascular risk did not significantly predict WMH volume in the FTD group (P = .16); however, that did not hold true in the Alzheimer’s disease group.

Increased WMH were associated with anterior brain regions in FTD and with posterior brain regions in Alzheimer’s disease. In both disorders, higher WMH volume in the corpus callosum was associated with poorer cognitive performance in the domain of attention.

“The spatial distribution of WMH mirrored patterns of brain atrophy in FTD and Alzheimer’s disease, was partially independent of cortical degeneration, and was correlated with cognitive deficits,” said Dr. Landin-Romero.

The findings were not what he and his research colleagues expected. “We were expecting that the amounts of WMH would be similar in FTD and Alzheimer’s disease, but we actually found higher levels in participants with FTD,” he said. Additionally, he anticipated that patients with either FTD or Alzheimer’s disease who had more severe disease would have more WMH, but that finding only held true for people with FTD.

“In sum, our findings show that WMH are a core feature of FTD and Alzheimer’s disease that can contribute to cognitive problems, and not simply as a marker of vascular disease,” said Dr. Landin-Romero.
 

Major research contribution

Commenting on the study, Jordi Matias-Guiu, PhD, MD, of the department of neurology, Hospital Clinico San Carlos, Spain, called it a “great contribution to the field.” Dr. Matias-Guiu, who was not involved with the research, said that WMH “do not necessarily mean vascular pathology, and atrophy may partially explain these abnormalities and should be taken into account in the interpretation of brain MRI.

“WMH are present in both Alzheimer’s disease and FTD and are relevant to cognitive deficits found in these disorders,” he added.

The study was funded by grants from the National Health and Medical Research Council of Australia, the Dementia Research Team, and the ARC Center of Excellence in Cognition and Its Disorders. Dr. Landin-Romero is supported by the Appenzeller Neuroscience Fellowship in Alzheimer’s Disease and the ARC Center of Excellence in Cognition and Its Disorders Memory Program. The other authors’ disclosures are listed on the original article. Dr. Matias-Guiu reports no relevant financial relationships.

A version of this article first appeared on Medscape.com.

Issue
Neurology Reviews- 29(4)
Publications
Topics
Sections

Increased white matter hyperintensities (WMH) are strongly associated with Alzheimer’s disease, but new research reveals they are also a “core feature” of frontotemporal dementia (FTD) in findings that may help physicians make this difficult diagnosis that affects adults in their prime.

“The assessment of WMH can aid differential diagnosis of bvFTD [behavioral-variant FTD] against other neurodegenerative conditions in the absence of vascular risk factors, especially when considering their spatial distribution,” said senior author Ramón Landin-Romero, PhD, Appenzeller Neuroscience Fellow, Frontotemporal Dementia Research Group, University of Sydney.

“Clinicians can ask for specific sequences in routine MRI scans to visually detect WMH,” said Dr. Landin-Romero, who is also a senior lecturer in the School of Psychology and Brain and Mind Center.

The study was published online Feb. 17 in Neurology.
 

Difficult diagnosis

“FTD is a collection of unrecognized young-onset (before age 65) dementia syndromes that affect people in their prime,” said Dr. Landin-Romero. He added that heterogeneity in progression trajectories and symptoms, which can include changes in behavior and personality, language impairments, and psychosis, make it a difficult disease to diagnose.

“As such, our research was motivated by the need of sensitive and specific biomarkers of FTD, which are urgently needed to aid diagnosis, prognosis, and treatment development,” he said.

Previous research has been limited; there have only been a “handful” of cohort and case studies and studies involving individuals with mutations in one FTD-causative gene.

FTD is genetically and pathologically complex, and there has been no clear correlation between genetic mutations/underlying pathology and clinical presentation, Dr. Landin-Romero said.

WMH are common in older individuals and are linked to increased risk for cognitive impairment and dementia. Traditionally, they have been associated with vascular risk factors, such as smoking and diabetes. “But the presentation of WMH in FTD and its associations with the severity of symptoms and brain atrophy across FTD symptoms remains to be established,” said Dr. Landin-Romero.
 

Higher disease severity

To explore the possible association, the researchers studied 129 patients with either bvFTD (n = 64; mean age, 64 years) or Alzheimer’s disease (n = 65; mean age, 64.66 years).

Neuropsychological assessments, medical and neurologic examinations, clinical interview, and structural brain MRI were conducted for all patients, who were compared with 66 age-, sex-, and education-matched healthy control persons (mean age, 64.69 years).

Some participants in the FTD, Alzheimer’s disease, and healthy control groups (n = 54, 44, and 26, respectively) also underwent genetic screening. Postmortem pathology findings were available for a small number of FTD and Alzheimer’s disease participants (n = 13 and 5, respectively).

The medical history included lifestyle and cardiovascular risk factors, as well as other health and neurologic conditions and medication history. Hypertension, hypercholesterolemia, diabetes, and smoking were used to assess vascular risk.

The FTD and Alzheimer’s disease groups did not differ with regard to disease duration (3.55 years; standard deviation, 1.75, and 3.24 years; SD, 1.59, respectively). However, disease severity was significantly higher among those with FTD than among those with Alzheimer’s disease, as measured by the FTD Rating Scale Rasch score (–0.52; SD, 1.28, vs. 0.78; SD, 1.55; P < .001).

Compared with healthy controls, patients in the FTD and Alzheimer’s disease groups scored significantly lower on the Addenbrooke’s Cognitive Examination–Revised (ACE-R) or ACE-III scale. Patients with Alzheimer’s disease showed “disproportionately larger deficits” in memory and visuospatial processing, compared with those with FTD, whereas those with FTD performed significantly worse than those with Alzheimer’s disease in the fluency subdomain.

A larger number of patients in the FTD group screened positive for genetic abnormalities than in the Alzheimer’s disease group; no participants in the healthy control group had genetic mutations.
 

 

 

Unexpected findings

Mean WMH volume was significantly higher in participants with FTD than in participants with Alzheimer’s disease and in healthy controls (mean, 0.76 mL, 0.40 mL, and 0.12 mL respectively). These larger volumes contributed to greater disease severity and cortical atrophy. Moreover, disease severity was “found to be a strong predictor of WMH volume in FTD,” the authors stated. Among patients with FTD, WMH volumes did not differ significantly with regard to genetic mutation status or presence of strong family history.

After controlling for age, vascular risk did not significantly predict WMH volume in the FTD group (P = .16); however, that did not hold true in the Alzheimer’s disease group.

Increased WMH were associated with anterior brain regions in FTD and with posterior brain regions in Alzheimer’s disease. In both disorders, higher WMH volume in the corpus callosum was associated with poorer cognitive performance in the domain of attention.

“The spatial distribution of WMH mirrored patterns of brain atrophy in FTD and Alzheimer’s disease, was partially independent of cortical degeneration, and was correlated with cognitive deficits,” said Dr. Landin-Romero.

The findings were not what he and his research colleagues expected. “We were expecting that the amounts of WMH would be similar in FTD and Alzheimer’s disease, but we actually found higher levels in participants with FTD,” he said. Additionally, he anticipated that patients with either FTD or Alzheimer’s disease who had more severe disease would have more WMH, but that finding only held true for people with FTD.

“In sum, our findings show that WMH are a core feature of FTD and Alzheimer’s disease that can contribute to cognitive problems, and not simply as a marker of vascular disease,” said Dr. Landin-Romero.
 

Major research contribution

Commenting on the study, Jordi Matias-Guiu, PhD, MD, of the department of neurology, Hospital Clinico, San Carlos, Spain, considers the study to be a “great contribution to the field.” Dr. Matias-Guiu, who was not involved with the study, said that WMH “do not necessarily mean vascular pathology, and atrophy may partially explain these abnormalities and should be taken into account in the interpretation of brain MRI.

“WMH are present in both Alzheimer’s disease and FTD and are relevant to cognitive deficits found in these disorders,” he added.

The study was funded by grants from the National Health and Medical Research Council of Australia, the Dementia Research Team, and the ARC Center of Excellence in Cognition and Its Disorders. Dr. Landin-Romero is supported by the Appenzeller Neuroscience Fellowship in Alzheimer’s Disease and the ARC Center of Excellence in Cognition and Its Disorders Memory Program. The other authors’ disclosures are listed on the original article. Dr. Matias-Guiu reports no relevant financial relationships.

A version of this article first appeared on Medscape.com.

Increased white matter hyperintensities (WMH) are strongly associated with Alzheimer’s disease, but new research reveals they are also a “core feature” of frontotemporal dementia (FTD) in findings that may help physicians make this difficult diagnosis that affects adults in their prime.

“The assessment of WMH can aid differential diagnosis of bvFTD [behavioral-variant FTD] against other neurodegenerative conditions in the absence of vascular risk factors, especially when considering their spatial distribution,” said senior author Ramón Landin-Romero, PhD, Appenzeller Neuroscience Fellow, Frontotemporal Dementia Research Group, University of Sydney.

“Clinicians can ask for specific sequences in routine MRI scans to visually detect WMH,” said Dr. Landin-Romero, who is also a senior lecturer in the School of Psychology and Brain and Mind Center.

The study was published online Feb. 17 in Neurology.
 

Difficult diagnosis

“FTD is a collection of unrecognized young-onset (before age 65) dementia syndromes that affect people in their prime,” said Dr. Landin-Romero. He added that heterogeneity in progression trajectories and symptoms, which can include changes in behavior and personality, language impairments, and psychosis, make it a difficult disease to diagnose.

“As such, our research was motivated by the need of sensitive and specific biomarkers of FTD, which are urgently needed to aid diagnosis, prognosis, and treatment development,” he said.

Previous research has been limited; there have only been a “handful” of cohort and case studies and studies involving individuals with mutations in one FTD-causative gene.

FTD is genetically and pathologically complex, and there has been no clear correlation between genetic mutations/underlying pathology and clinical presentation, Dr. Landin-Romero said.

WMH are common in older individuals and are linked to increased risk for cognitive impairment and dementia. Traditionally, they have been associated with vascular risk factors, such as smoking and diabetes. “But the presentation of WMH in FTD and its associations with the severity of symptoms and brain atrophy across FTD symptoms remains to be established,” said Dr. Landin-Romero.
 

Higher disease severity

To explore the possible association, the researchers studied 129 patients with either bvFTD (n = 64; mean age, 64 years) or Alzheimer’s disease (n = 65; mean age, 64.66 years).

Neuropsychological assessments, medical and neurologic examinations, clinical interview, and structural brain MRI were conducted for all patients, who were compared with 66 age-, sex-, and education-matched healthy control persons (mean age, 64.69 years).

Some participants in the FTD, Alzheimer’s disease, and healthy control groups (n = 54, 44, and 26, respectively) also underwent genetic screening. Postmortem pathology findings were available for a small number of FTD and Alzheimer’s disease participants (n = 13 and 5, respectively).

The medical history included lifestyle and cardiovascular risk factors, as well as other health and neurologic conditions and medication history. Hypertension, hypercholesterolemia, diabetes, and smoking were used to assess vascular risk.

The FTD and Alzheimer’s disease groups did not differ with regard to disease duration (3.55 years; standard deviation, 1.75, and 3.24 years; SD, 1.59, respectively). However, disease severity was significantly higher among those with FTD than among those with Alzheimer’s disease, as measured by the FTD Rating Scale Rasch score (–0.52; SD, 1.28, vs. 0.78; SD, 1.55; P < .001).

Compared with healthy controls, patients in the FTD and Alzheimer’s disease groups scored significantly lower on the Addenbrooke’s Cognitive Examination–Revised (ACE-R) or ACE-III scale. Patients with Alzheimer’s disease showed “disproportionately larger deficits” in memory and visuospatial processing, compared with those with FTD, whereas those with FTD performed significantly worse than those with Alzheimer’s disease in the fluency subdomain.

A larger number of patients in the FTD group screened positive for genetic abnormalities than in the Alzheimer’s disease group; no participants in the healthy control group had genetic mutations.
 

 

 

Unexpected findings

Mean WMH volume was significantly higher in participants with FTD than in participants with Alzheimer’s disease and in healthy controls (mean, 0.76 mL, 0.40 mL, and 0.12 mL respectively). These larger volumes contributed to greater disease severity and cortical atrophy. Moreover, disease severity was “found to be a strong predictor of WMH volume in FTD,” the authors stated. Among patients with FTD, WMH volumes did not differ significantly with regard to genetic mutation status or presence of strong family history.

After controlling for age, vascular risk did not significantly predict WMH volume in the FTD group (P = .16); however, that did not hold true in the Alzheimer’s disease group.

Increased WMH were associated with anterior brain regions in FTD and with posterior brain regions in Alzheimer’s disease. In both disorders, higher WMH volume in the corpus callosum was associated with poorer cognitive performance in the domain of attention.

“The spatial distribution of WMH mirrored patterns of brain atrophy in FTD and Alzheimer’s disease, was partially independent of cortical degeneration, and was correlated with cognitive deficits,” said Dr. Landin-Romero.

The findings were not what he and his research colleagues expected. “We were expecting that the amounts of WMH would be similar in FTD and Alzheimer’s disease, but we actually found higher levels in participants with FTD,” he said. Additionally, he anticipated that patients with either FTD or Alzheimer’s disease who had more severe disease would have more WMH, but that finding only held true for people with FTD.

“In sum, our findings show that WMH are a core feature of FTD and Alzheimer’s disease that can contribute to cognitive problems, and not simply as a marker of vascular disease,” said Dr. Landin-Romero.
 

Major research contribution

Commenting on the study, Jordi Matias-Guiu, MD, PhD, of the department of neurology at Hospital Clinico San Carlos, Madrid, Spain, called it a “great contribution to the field.” Dr. Matias-Guiu, who was not involved with the research, said that WMH “do not necessarily mean vascular pathology, and atrophy may partially explain these abnormalities and should be taken into account in the interpretation of brain MRI.

“WMH are present in both Alzheimer’s disease and FTD and are relevant to cognitive deficits found in these disorders,” he added.

The study was funded by grants from the National Health and Medical Research Council of Australia, the Dementia Research Team, and the ARC Center of Excellence in Cognition and Its Disorders. Dr. Landin-Romero is supported by the Appenzeller Neuroscience Fellowship in Alzheimer’s Disease and the ARC Center of Excellence in Cognition and Its Disorders Memory Program. The other authors’ disclosures are listed on the original article. Dr. Matias-Guiu reports no relevant financial relationships.

A version of this article first appeared on Medscape.com.


FDA extends review period for anticipated Alzheimer’s drug

Article Type
Changed
Mon, 03/01/2021 - 14:10

The Food and Drug Administration has extended the review period for aducanumab, the investigational amyloid-clearing treatment for Alzheimer’s disease, by 3 months, the drug’s manufacturers have announced. The updated Prescription Drug User Fee Act (PDUFA) action date has been pushed back from March 7 to June 7, 2021.

“As part of the ongoing review, Biogen submitted a response to an information request by the FDA, including additional analyses and clinical data, which the FDA considered a major amendment to the application that will require additional time for review,” Biogen and Eisai said in a statement.

“We are committed to working with the FDA as it completes its review of the aducanumab application. We want to thank the FDA for its continued diligence during the review,” said Biogen CEO Michel Vounatsos.

Biogen submitted the aducanumab application for approval to the FDA in July 2020. The FDA accepted it in August and granted priority review.

Aducanumab is a recombinant human monoclonal antibody targeting beta-amyloid (Abeta). If approved, it would be the first disease-modifying treatment for Alzheimer’s disease.

However, the road to approval has been bumpy. In November, despite high expectations and pleas from patients, caregivers, and advocacy groups, an FDA advisory panel declined to recommend approval of aducanumab.

As previously reported by this news organization, members of the FDA’s Peripheral and Central Nervous System Drugs Advisory Committee determined that results from Biogen’s one large positive trial did not provide strong enough evidence of efficacy for the treatment of Alzheimer’s disease. 

A version of this article first appeared on Medscape.com.


Cognitive effects seen as transient for Alzheimer’s drug atabecestat

Article Type
Changed
Thu, 12/15/2022 - 15:42

Adverse cognitive and psychiatric effects associated with the investigational Alzheimer’s drug atabecestat were reversed within 6 months of treatment cessation, according to follow-up results from a truncated clinical trial.

A blinded, placebo-controlled, manufacturer-sponsored trial that had randomized 557 patients with preclinical Alzheimer’s disease to 25 mg daily oral atabecestat, 5 mg atabecestat, or placebo, was halted in 2018 over concerns about liver toxicity. The main outcome measure of the trial was change on the Alzheimer’s Disease Cooperative Study Preclinical Alzheimer Cognitive Composite, while two other scales were used to assess cognitive function and neuropsychological status.

A preliminary analysis found that the higher dose of atabecestat significantly worsened subjects’ cognition starting at around 3 months of treatment, compared with placebo. Treatment with atabecestat was also associated with a higher incidence of neuropsychiatric adverse events, including anxiety and depression.

In their follow-up study, published Jan. 19, 2021, in JAMA Neurology (doi: 10.1001/jamaneurol.2020.4857), Reisa Sperling, MD, of Brigham and Women’s Hospital, Boston, and colleagues reported that the cognitive worsening and neuropsychiatric adverse effects linked to atabecestat treatment reverted to baseline levels within 6 months of halting treatment. Most of the worsening seen in the study was associated with episodic memory tasks, including “list learning, story memory, list recognition, story recall, and figure recall,” Dr. Sperling and colleagues found.

Atabecestat was also associated with “dose-related and duration-related decreases in whole-brain volume, compared with placebo treatment,” the investigators reported. Brain volume loss has been seen in trials of other beta-secretase (BACE) inhibitors and shown with one, umibecestat, to be reversible after stopping treatment.

Dr. Sperling and colleagues acknowledged a major limitation of their study: just over a third of the cohort completed another cognitive composite assessment after baseline. “The observation that cognitive worsening and neuropsychiatric-related [adverse events] recovered following discontinuation of atabecestat is encouraging but needs replication, given that the observation period after stopping treatment was variable and not preplanned,” the investigators wrote in their analysis. After a median exposure of 21 weeks to the study drug or placebo, subjects were followed off treatment for a median 15 weeks.
 

Questions surround BACE inhibitors

Development of atabecestat has been discontinued along with others in its class of agents, known as BACE inhibitors, which target an enzyme that initiates production of amyloid-beta, the plaque-forming peptide that is considered a driver of Alzheimer’s disease. In the past few years a number of BACE inhibitors have been shown in trials to worsen cognition in a dose-dependent way, compared with placebo. The reasons for these effects are still unknown.

Dr. Sperling and colleagues concluded that, if BACE inhibitors like atabecestat are to be studied anew, it must be at low doses, with more modest enzyme inhibition, and alongside careful safety and cognitive monitoring.

While no BACE inhibitor is currently in the pipeline for Alzheimer’s – trials of these agents have been stopped for futility or toxicity – Paul Aisen, MD, of the University of Southern California, Los Angeles, a coauthor of the study, commented that it was important that clinical investigation of BACE inhibitors continue.

“This drug class is optimal to correct the metabolic dysregulation that is likely a primary root cause” of Alzheimer’s disease, Dr. Aisen said in an interview. “Evidence from trials such as this suggest that the cognitive toxicity of BACE inhibitors is dose related, nonprogressive, and reversible. We should now focus on establishing the safety of relatively low-dose BACE inhibition so that such regimens can be tested in AD trials.”

Research should continue

Robert Vassar, PhD, of Northwestern University, Chicago, who was not a coauthor on the study, also expressed a desire for BACE inhibitor research to continue.

“It is my view that the cognitive worsening of atabecestat and the other BACE inhibitors was caused by overinhibition of the enzyme related to functions of certain BACE substrates in the brain,” Dr. Vassar commented. “A major question is whether a lower dose of BACE inhibitor – achieving about 30% inhibition – could be safe and lower amyloid-beta enough to delay onset in people still without symptoms. The good news of this study is that the atabecestat-related cognitive worsening is reversible, leaving open the possibility of low-dose prevention trials.”

Dr. Vassar noted that, with both doses of atabecestat, Dr. Sperling and colleagues did not see changes in neurofilament light or total tau, two biomarkers of neurodegeneration, but did report decreases in phosphorylated tau (p181 tau), a marker of disease progression, compared with placebo.

“This indicates that atabecestat did not cause neurodegeneration and in fact moved p181 tau in the beneficial direction for Alzheimer’s disease. Perhaps if it were not for the liver toxicity, the trial may have been completed and other Alzheimer’s disease biomarkers may have changed in the beneficial direction as well,” Dr. Vassar said.

Dr. Sperling and colleagues’ study was sponsored by Janssen, the manufacturer of atabecestat. Dr. Sperling disclosed receiving research funding from Janssen and other drug makers, while nearly all the study’s coauthors reported being directly employed by the sponsor or receiving industry funding. Dr. Aisen disclosed personal fees from several manufacturers and past fees from the sponsor. Dr. Vassar disclosed consulting and other financial relationships with biotechnology companies that did not include this study’s sponsor.
