Opioid deaths and suicides – twin tragedies that need a community-wide response
WASHINGTON – Community pressures drive the twin tragedies of suicide and opioid deaths, which destroy community structure and must be addressed by community efforts, experts said during a panel discussion.
These so-called “deaths of despair” are inextricably linked, according to thought leaders in clinical medicine, volunteerism, and advocacy who gathered to share data and brainstorm solutions. A clear picture emerged of a professional community struggling to create a unified plan of attack and a unified voice to bring that plan to fruition.
The event was sponsored by the Education Development Center, a nonprofit that implements and evaluates programs to improve education, health, and economic opportunity worldwide, and the National Action Alliance for Suicide Prevention.
“We convened key leaders, including health care systems, federal agencies, national nonprofits, and faith-based organizations to strengthen our community response to suicide and opioid misuse and restore hope across the United States,” said Jerry Reed, PhD, EDC senior vice president. “To identify positive and lasting solutions requires collaboration from all sectors to achieve not only a nation free of suicide, but [also] a nation where all individuals are resilient, hopeful, and leading healthier lives.”
While several of the leading causes of death in the United States – including heart disease, stroke, and cancer – are declining, suicides and opioid deaths are surging, Alex Crosby, MD, told the gathering. An epidemiologist at the Centers for Disease Control and Prevention, Dr. Crosby cited the most recent national data, gathered in 2015. The numbers present a picture of two terrible problems striking virtually identical communities.
“Suicide rates increased 25% from 2000 to 2015,” said Dr. Crosby, who is also the senior adviser in the division of violence prevention at the National Center for Injury Prevention and Control. “In 2000, there were 30,000 suicides, and in 2015, there were 44,000. We are now looking at a suicide every 12 minutes in this country.”
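A back-of-the-envelope check (assuming, for illustration, that the 44,000 deaths were spread evenly across the year) bears out the every-12-minutes framing, and also shows why the raw count rose faster than the 25% rate increase: the rate is population-adjusted, and the U.S. population also grew over 2000-2015.

```python
# Sanity check of Dr. Crosby's 2015 figures (assumes deaths are
# distributed evenly across the year).
deaths_2015 = 44_000
minutes_per_year = 365 * 24 * 60              # 525,600 minutes
minutes_per_suicide = minutes_per_year / deaths_2015
print(f"one suicide every {minutes_per_suicide:.1f} minutes")  # ~11.9

# The raw count rose ~47% (30,000 -> 44,000); the 25% figure refers
# to the population-adjusted rate, which grows more slowly.
count_increase_pct = 100 * (44_000 - 30_000) / 30_000
print(f"raw count increase: {count_increase_pct:.0f}%")        # ~47%
```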
Suicides cluster in several demographics, he said. “This is something that disproportionately affects males, working adults aged 25-64, non-Hispanic whites and non-Hispanic Native Americans, Alaska Natives, and rural areas.”
Deaths from drug overdoses, 60% of which now involve opioids, are on a parallel increase. “These have quadrupled since 1999, and the at-risk groups significantly overlap, with males, adults aged 24-52, non-Hispanic whites and Native Americans, Alaska Natives, and rural communities most impacted. These are the very same groups seeing that increase in suicides.”
Joint tragedies, overlapping causes
The center is taking a public health stance on researching and managing both issues, Dr. Crosby said. “As we start looking at risk factors, we see that chronic health conditions, mental health, and pain management are factors common to both groups. But in addition to individual risks, there are societal risks … things in the family, community, and general society also influence these deaths.”
Former U.S. Rep. Patrick J. Kennedy agreed. Now a mental health policy advocate, Mr. Kennedy is not surprised that overlapping communities suffer these joint problems.
“On a societal level, these issues are directly related to the hollowing out of the manufacturing class, anxiety in a new generation that sees no financial stability,” he said. “Clearly, these are some of the reasons that these deaths track parts of the country that have been hardest hit economically. On top of that, we are lacking the kind of community connectedness we once had.”
Mr. Kennedy also faulted the marginalization of people with mental illnesses and the dearth of early screening that could identify mental disorders before they balloon into related substance abuse disorders. “Those are folks who, if screened and found to have a vulnerability from a mental illness, could be properly treated. These are illnesses that pathologize by neglect.”
The lack of awareness isn’t just a broad societal concept, but a specific weakness in the medical community, said Elinore F. McCance-Katz, MD, assistant secretary for mental health and substance use at the Substance Abuse and Mental Health Services Administration.
“There’s not a lot of attention paid to this issue unless you’re in a profession like psychiatry, where we are taught to systematically assess for suicidality,” Dr. McCance-Katz noted. “But unless you’re trained to address it, it’s not something you think about; and if you don’t think about it, you won’t uncover it.
“In primary care, for example, we see many with a pain complaint,” she continued. “That pain will be treated medically, but the psychological component, which might be devastating, won’t be. This can cause depression and suicidal thinking, and if patients are not asked about it, they will not offer it. So, we have the terrible situation where someone can leave the office with the means to harm themselves, but not the help they need to save their life.”
How to reach the vulnerable
When a medical disorder and its attendant comorbidities share a multifactorial etiology, the clinician must address the problem as a unit. This isn’t happening with suicide and drug overdose deaths, said Arthur C. Evans Jr., PhD, chief executive officer of the American Psychological Association.
“These conditions are tied to societal determinants, but our approach to them is still focused at the individual level,” said Dr. Evans. “As long as our primary way is to build treatment programs and expect people to find their way into them on their own, it’s not going to work. We know that 90% of patients with substance abuse problems don’t come to treatment. So, our strategy is missing a whole lot of people.”
A better way, he said, is to proactively provide holistic, person-centered care. He has some very specific ideas, honed by his 12 years as commissioner of Philadelphia’s Department of Behavioral Health and Intellectual Disability Services, a $1.2 billion health care agency that serves as the behavioral health and intellectual disability safety net for 1.5 million Philadelphians. Dr. Evans is credited with transforming the agency’s approach into a community-integrated, recovery-oriented treatment model.
He would like to see a similar national transformation in how at-risk groups are targeted, educated, screened, and treated. “We can create a culture where these issues are better understood by the public, so they can recognize problems early and connect to better health care.”
“Hope is at the center of our work,” Dr. Evans said. “The whole recovery movement has been about helping people have the hope that they can get better, that their lives can improve. Fundamentally, this must be the basis for treatment. We have focused for too long on the symptoms people bring to us, and missed the fact that these problems of suicide and drug abuse arise because people are hurting, both physically and psychologically. To recover, they need to believe there is a future in which they can feel better.”
If there’s a way, is there a will?
But while thought leaders continue to fine-tune their message, people continue to die, Mr. Kennedy said.
“We have all the experts who know what to do; the thing that is missing is the political will to do it,” he noted. “It’s driven by the stigma, the silence of families suffering from these illnesses. If we can’t talk about it in our families, we can’t talk about it to our legislators, and if they don’t hear from us, they do nothing. We need a political answer, ultimately. We appropriated a billion dollars over 2 years for the opioid crisis, but within 3 days of Hurricane Harvey, we appropriated $15 billion. It may seem we are making progress because we have great forums, but it’s a lot of talk, and people are dying every day.”
He likened the suicide and opioid death crisis to a natural disaster that requires not just money, but a highly coordinated response that targets multiple impacted areas.
“We need a Federal Emergency Management Agency–like response to this,” Mr. Kennedy said. “FEMA is designed to address all the missing pieces necessary for someone to recover from a disaster. In recovery, we have a physical problem, a mental obsession, and a spiritual malady. People need medical help – access to medication to get their lives stabilized. They need the psychological component of cognitive behavioral therapy. And they need the spiritual angle, which is social support, people reaching out to each other.
“Right now, everyone thinks this is a problem to be dealt with ‘over there,’ but it isn’t,” he added. “It involves all of us, and if we want to put these communities back together, we need everyone energized and contributing.”
AT AN EXPERT PANEL ON SUICIDE AND OPIOID DEATHS
Personality changes may not occur before Alzheimer’s onset
Personality changes do not presage dementia, at least when examined through the lens of self-report, a large retrospective study has determined.
Dementia patients do show personality characteristics that are different from those of their cognitively normal peers, wrote Antonio Terracciano, PhD (JAMA Psychiatry. 2017 Sep 20. doi: 10.1001/jamapsychiatry.2017.2816). Notably, they tend to be more neurotic and less conscientious, he noted. But among more than 2,000 older adults with up to 36 years of data, no temporal associations were found between these traits and the onset of cognitive difficulty, even within a few years of the onset of dementia symptoms.
“From a clinical perspective, these findings suggest that tracking change in self-rated personality as an early indicator of dementia is unlikely to be fruitful, while a single assessment provides reliable information on the personality traits that increase resilience [e.g., conscientiousness] or vulnerability [e.g., neuroticism] to clinical dementia,” wrote Dr. Terracciano of Florida State University, Tallahassee, and his coauthors.
However, the authors noted, it’s possible that self-reported personality may not be as good a marker of dementia-related personality change as informant report.
“Self-rated personality provides participants’ perspectives of themselves. … Individuals with AD could be anosognosic to change in their psychological traits and functioning. Self-reported personality might remain stable and reflect premorbid functioning more than current traits,” the researchers wrote.
The study tracked 2,046 community-dwelling older adults who were part of the Baltimore Longitudinal Study of Aging, which began in 1958. Healthy individuals of different ages are continuously enrolled in the study and assessed at regular follow-up visits. These visits include cognitive and neuropsychiatric assessments, from which data for this study were extracted. The mean follow-up time was about 12 years, though some subjects were followed for up to 36 years. From 1980 to 2016, the group completed more than 8,000 assessments and accumulated 24,569 person-years of follow-up.
Dr. Terracciano examined the cohort’s results on the Revised NEO Personality Inventory, a 240-item questionnaire that assesses 30 facets of personality across the dimensions of neuroticism, extraversion, openness, agreeableness, and conscientiousness. Cognitive decline was assessed by results on the Clinical Dementia Rating Scale and the older Dementia Questionnaire.
At the end of the follow-up period, 104 subjects (5%) had developed mild cognitive impairment, and 255 (12.5%) all-cause dementia; of those, 194 (9.5%) were later diagnosed with Alzheimer’s disease. In an unadjusted analysis, the authors found that the group that eventually developed AD scored higher on neuroticism, and lower on extraversion, openness, and conscientiousness than did the nonaffected subjects.
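As a quick sanity check, the reported percentages are consistent with the stated cohort size (a minimal sketch; the shared denominator of n = 2,046 is assumed from the article):

```python
# Verify the reported incidence figures against the cohort size.
n = 2046  # BLSA participants in this analysis
outcomes = [
    ("mild cognitive impairment", 104),
    ("all-cause dementia", 255),
    ("Alzheimer's disease", 194),
]
for label, cases in outcomes:
    pct = 100 * cases / n
    print(f"{label}: {cases}/{n} = {pct:.1f}%")
# mild cognitive impairment: 104/2046 = 5.1%
# all-cause dementia: 255/2046 = 12.5%
# Alzheimer's disease: 194/2046 = 9.5%
```

The figures match the article's rounded percentages (5%, 12.5%, and 9.5%).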
Over time, the authors found some changes in the reference group, including small declines in neuroticism and extraversion, and small increases in agreeableness and conscientiousness. However, when they looked at the trajectories of change, they found no significant differences in the rate of change between the reference and AD groups – although the AD group maintained its baseline differences in neuroticism and conscientiousness.
“Although the trajectories were similar, there were significant ... differences on the intercept,” they wrote. “The AD cohort scored higher on neuroticism and lower on conscientiousness and extraversion than the nonimpaired group.”
The team ran several temporal analyses on the data, and none found a significant association with accelerated personality change in the AD, MCI, or all-cause dementia groups, compared with the reference group, with one exception: Subjects with MCI showed a steeper decline in openness than did nonaffected subjects.
Those results were consistent even when the authors examined the two assessments performed closest to the onset of cognitive symptoms (a mean of 6 and 3 years before onset). “Consistent with the results and the broader literature, the AD group scored higher on neuroticism and lower on conscientiousness. Contrary to expectations, the AD group did not increase in neuroticism and decline in conscientiousness.”
The findings may shed some light on the chicken-or-egg question of personality change and dementia, they suggested.
“This research has relevance to the question of reverse causality for the association between personality and risk of incident AD. That is, if personality changed in response to increasing neuropathology in the brain in the preclinical phase, the association between personality and AD could have been due to the disease process rather than personality as an independent risk factor. We did not, however, find any evidence that neuroticism and conscientiousness changed significantly as the onset of disease approached. Thus, rather than an effect of AD neuropathology, these traits appear to confer risk for the development of the disease.”
The Baltimore Longitudinal Study of Aging is supported by the National Institutes of Health. Neither Dr. Terracciano nor his colleagues had financial disclosures.
[email protected]
On Twitter @alz_gal
Personality changes do not presage dementia, at least when examined through the lens of self-report, a large retrospective study has determined.
Dementia patients do show personality characteristics that are different from those of their cognitively normal peers, wrote Antonio Terracciano, PhD (JAMA Psychiatry. 2017 Sep 20. doi: 10.1001/jamapsychiatry.2017.2816). Notably, they tend to be more neurotic and less conscientious, he noted. But among more than 2,000 older adults with up to 36 years of data, no temporal associations were found between these traits and the onset of cognitive difficulty, even within a few years of the onset of dementia symptoms.
“From a clinical perspective, these findings suggest that tracking change in self-rated personality as an early indicator of dementia is unlikely to be fruitful, while a single assessment provides reliable information on the personality traits that increase resilience [e.g., conscientiousness] or vulnerability [e.g., neuroticism] to clinical dementia,” wrote Dr. Terracciano of Florida State University, Tallahassee, and his coauthors.
However, the authors noted, it’s possible that self-reported personality may not be as good a marker of dementia-related personality change as informant report.
“Self-rated personality provides participants’ perspectives of themselves. … Individuals with AD could be anosognosic to change in their psychological traits and functioning. Self-reported personality might remain stable and reflect premorbid functioning more than current traits,” the researchers wrote.
The study tracked 2,046 community-living older adults who were part of the Baltimore Longitudinal Study of Aging, which began in 1958. Healthy individuals of different ages are continuously enrolled in the study and assessed with regular follow-up visits. These visits include cognitive and neuropsychiatric assessments, from which data for this study were extracted. The mean follow-up time was about 12 years, but some subjects had up to 36 years. From 1980 to 2016, the group completed more than 8,000 assessments and accumulated 24,569 person-years of follow-up.
Dr. Terracciano examined the cohort’s Revised NEO Personality Inventory results, a 240-item questionnaire that assesses 30 facets of personality in the dimensions of neuroticism, extraversion, openness, agreeableness, and conscientiousness. Cognitive decline was assessed by results on the Clinical Dementia Rating Scale and the older Dementia Questionnaire.
At the end of the follow-up period, 104 subjects (5%) had developed mild cognitive impairment, and 255 (12.5%) all-cause dementia; of those, 194 (9.5%) were later diagnosed with Alzheimer’s disease. In an unadjusted analysis, the authors found that the group that eventually developed AD scored higher on neuroticism, and lower on extraversion, openness, and conscientiousness than did the nonaffected subjects.
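As a quick arithmetic check (the counts and the cohort size of 2,046 are taken from the paragraph above; this is only an illustration, not from the paper), the reported rates follow directly from the raw counts. Note the MCI count works out to 5.1%, which the article rounds to 5%:

```python
# Sanity check: incidence percentages follow from the raw counts
# and the cohort size reported in the article.
cohort = 2046
counts = {"MCI": 104, "all-cause dementia": 255, "Alzheimer's disease": 194}
for label, n in counts.items():
    print(f"{label}: {n}/{cohort} = {100 * n / cohort:.1f}%")
```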
Over time, the authors found some changes in the reference group, including small declines in neuroticism and extraversion, and small increases in agreeableness and conscientiousness. However, when they looked at the trajectory of change, they found no significant differences in the rate of change compared with the AD group – although the AD group maintained its baseline differences in neuroticism and conscientiousness.
“Although the trajectories were similar, there were significant ... differences on the intercept,” they wrote. “The AD cohort scored higher on neuroticism and lower on conscientiousness and extraversion than the nonimpaired group.”
The team ran several temporal analyses on the data, and none found any significant temporal association with accelerated personality change in the AD group, the MCI group, or the all-cause dementia groups compared with the reference group, with one exception: Subjects with MCI showed a steeper decline in openness than did nonaffected subjects.
Those results were consistent even when they examined the two assessments performed just before the onset of cognitive symptoms (a mean of 6 and 3 years). “Consistent with the results and the broader literature, the AD group scored higher on neuroticism and lower on conscientiousness. Contrary to expectations, the AD group did not increase in neuroticism and decline in conscientiousness.”
The findings may shed some light on the chicken-or-egg question of personality change and dementia, they suggested.
“This research has relevance to the question of reverse causality for the association between personality and risk of incident AD. That is, if personality changed in response to increasing neuropathology in the brain in the preclinical phase, the association between personality and AD could have been due to the disease process rather than personality as an independent risk factor. We did not, however, find any evidence that neuroticism and conscientiousness changed significantly as the onset of disease approached. Thus, rather than an effect of AD neuropathology, these traits appear to confer risk for the development of the disease.”
The Baltimore Longitudinal Study of Aging is supported by the National Institutes of Health. Neither Dr. Terracciano nor his colleagues had financial disclosures.
FROM JAMA PSYCHIATRY
Key clinical point: Self-rated personality does not change measurably as dementia approaches, but a single assessment identifies traits that confer risk (neuroticism) or resilience (conscientiousness).
Major finding: Although patients with AD scored higher on neuroticism and lower on conscientiousness, those traits did not change any faster than personality traits in the nonaffected subjects.
Data source: The study comprised 2,046 subjects with up to 36 years’ follow-up.
Disclosures: The Baltimore Longitudinal Study of Aging is funded by the National Institutes of Health. Neither Dr. Terracciano nor his coauthors had financial disclosures.
Postsurgical antibiotics cut infection in obese women after C-section
A 48-hour course of postoperative cephalexin and metronidazole, plus typical preoperative antibiotics, cut surgical site infections by 59% in obese women who had a cesarean delivery.
The benefit of the additional postoperative treatment was driven by a significant, 69% risk reduction among women who had ruptured membranes, Amy M. Valent, DO, and her colleagues reported (JAMA. 2017;318[11]:1026-34). However, the authors noted, “tests for interaction between the intact membranes and [ruptured] subgroups and postpartum cephalexin-metronidazole were not statistically different and should not be interpreted as showing a difference in significance or effect size among the subgroups with and without [rupture].”
The trial comprised 403 obese women who had a cesarean delivery. They were a mean of 28 years old. The mean body mass index was 40 kg/m2, and the mean subcutaneous adipose tissue thickness was about 3.4 cm. About a third of each treatment group was positive for Group B streptococcus; 31% had ruptured membranes at the onset of labor. More than 60% of women in both groups had a scheduled cesarean delivery.
All women had standard preoperative care, including skin prep with a chlorhexidine or povidone-iodine cleansing and an intravenous infusion of 2 g cefazolin. After delivery, they were randomized to placebo or to oral cephalexin 500 mg plus metronidazole 500 mg every 8 hours for 48 hours. The primary outcome was surgical site infection incidence within 30 days.
The overall rate of surgical site infection was 10.9% (44 women). Infections developed in 13 women in the active group and 31 in the placebo group (6.4% vs. 15.4%) – a significant difference, translating to a 59% risk reduction (relative risk, 0.41). Cellulitis was the only secondary outcome that was significantly reduced by prophylactic antibiotics, with infections occurring in 5.9% of the metronidazole-cephalexin group vs. 13.4% of the placebo group (RR, 0.44). The antibiotic regimen didn’t affect the other secondary endpoints, which included rates of incisional morbidity, endometritis, fever of unknown etiology, and wound separation.
The authors conducted a post-hoc analysis to examine the antibiotics’ effects on women who had ruptured and intact membranes at the time of delivery. The benefit was greatest among those with ruptured membranes. There were six infections among the active group and 19 among the placebo group (9.5% vs. 30.2%). This difference translated to a relative risk of 0.31 – a 69% risk reduction.
Among women with intact membranes, there were seven infections in the active group and 12 in the placebo group (5% vs. 8.7%). This translated to a 0.58 relative risk, which was not statistically significant.
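For readers unfamiliar with the statistic, the relative risk is simply the ratio of the two event rates, and the relative risk reduction is its complement. A minimal sketch using the published rates (the paper's exact denominators yield the slightly lower RR of 0.41, hence the 59% figure reported above; rounding from the percentages alone gives 0.42):

```python
# Relative risk (RR) and relative risk reduction from two event rates.
# Rates (6.4% vs. 15.4%) are as published; exact patient denominators
# would reproduce the paper's RR of 0.41.
def relative_risk(rate_treated, rate_control):
    rr = rate_treated / rate_control
    return rr, 1 - rr  # RR and relative risk reduction

rr, rrr = relative_risk(0.064, 0.154)
print(f"RR ≈ {rr:.2f}, risk reduction ≈ {rrr:.0%}")  # prints: RR ≈ 0.42, risk reduction ≈ 58%
```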
“Interaction testing was performed between study groups (cephalexin-metronidazole vs. placebo) and by membrane status (intact vs. ruptured),” the authors noted. “The rate of surgical site infection was highest in those with [ruptured membranes] who received placebo (30.2%) and lowest in those with intact membranes who received antibiotics (5.0%), but the test for interaction did not show statistical significance at P = .30.”
There were no serious adverse events or allergic reactions reported for cephalexin or metronidazole. The authors noted that both drugs are excreted into breast milk in small amounts, but that no study has ever linked them with neonatal harm through breast milk exposure. However, they added, “Long-term childhood or adverse neonatal outcomes specific to cephalexin-metronidazole exposure cannot be determined, as outcome measures were not evaluated for this study protocol. Recognizing the maternal and neonatal benefit of breastfeeding, the lack of known neonatal adverse effects, and maternal reduction in [surgical site infection], the benefit of this antibiotic regimen likely outweighs the theoretical risks of breast milk exposure in the obese population.”
The University of Cincinnati Department of Obstetrics and Gynecology sponsored the trial. None of the authors reported any financial conflicts.
Despite the positive outcomes of this trial, it’s not yet time to tack on yet more antibiotics for every obese woman who undergoes a cesarean delivery, David P. Calfee, MD, and Amos Grünebaum, MD, wrote in an accompanying editorial (JAMA. 2017;318[11]:1012-3).
“When determining if and how the results of this study should alter current clinical practice, it is important to recognize that the results of this study are quite different from those of several previous studies conducted in other surgical patient populations in which no benefit from postoperative antimicrobial prophylaxis was found and on which current clinical guidelines for antimicrobial prophylaxis are based,” they wrote. “The explanation for this difference may be as simple as the identification in the current study of a very specific, high-risk group of patients for which the intervention is effective. However, several questions are worthy of additional consideration and study.”
For instance, the study was conducted over 5 years and may not reflect current practices for managing these patients, such as glycemic control and maintaining normothermia. Additionally, there may be additional risks to women that were not identified in the study, such as infection from antimicrobial-resistant pathogens.
Dr. Calfee and Dr. Grünebaum are at Weill Cornell Medical Center in New York. Dr. Calfee reported receiving grants from Merck, Sharp, and Dohme.
FROM JAMA
Key clinical point: A 48-hour postoperative course of oral cephalexin and metronidazole reduced surgical site infections in obese women after cesarean delivery.
Major finding: Infections developed in 13 women in the active group and 31 in the placebo group (6.4% vs. 15.4%) – a significant difference, translating to a 59% risk reduction (relative risk, 0.41).
Data source: The randomized, placebo-controlled study comprised 403 women.
Disclosures: The University of Cincinnati Department of Obstetrics and Gynecology sponsored the study. None of the authors reported any financial conflicts.
Bedside imaging allowed for individualized PEEP adjustments
A noninvasive bedside imaging technique can individually calibrate positive end-expiratory pressure settings in patients on extracorporeal membrane oxygenation (ECMO) for severe acute respiratory distress syndrome (ARDS), a study showed.
The step-down PEEP (positive end-expiratory pressure) trial could not identify a single PEEP setting that optimally balanced lung overdistension and lung collapse for all 15 patients. But electrical impedance tomography (EIT) allowed investigators to individually titrate PEEP settings for each patient, Guillaume Franchineau, MD, wrote (Am J Respir Crit Care Med. 2017;196[4]:447-57. doi: 10.1164/rccm.201605-1055OC).
The 4-month study involved 15 patients (aged 18-79 years) who had acute respiratory distress syndrome for a variety of reasons, including influenza (7 patients), pneumonia (3), leukemia (2), and 1 case each of Pneumocystis, antisynthetase syndrome, and trauma. All patients were receiving ECMO with a constant driving pressure of 14 cm H2O. After verifying that the inspiratory flow was 0 at the end of inspiration, PEEP was increased to 20 cm H2O (PEEP 20) with a peak inspiratory pressure of 34 cm H2O. PEEP 20 was held for 20 minutes and then lowered in 5-cm H2O decrements, potentially reaching PEEP 0.
The EIT device, consisting of a silicone belt with 16 surface electrodes, was placed around the thorax aligning with the sixth intercostal parasternal space and connected to a monitor. By measuring conductivity and impeditivity in the underlying tissues, the device generates a low-resolution, two-dimensional image. The image was sufficient to show lung distension and collapse as the PEEP settings changed. Investigators looked for the best compromise between overdistension and collapsed zones, which they defined as the lowest pressure able to limit EIT-assessed collapse to no more than 15% with the least overdistension.
There was no one-size-fits-all PEEP setting, the authors found. The setting that minimized both overdistension and collapse was PEEP 15 in seven patients, PEEP 10 in six patients, and PEEP 5 in two patients.
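The selection rule described above – the lowest pressure limiting EIT-assessed collapse to no more than 15%, with the least overdistension – can be sketched as a simple filter over the step-down levels. This is an illustration only; the per-level collapse and overdistension percentages below are invented, not patient data:

```python
# Hedged sketch of the study's "best compromise" rule: among PEEP levels
# where collapsed lung stays <= 15%, pick the one with the least
# overdistension (ties broken in favor of the lower pressure).
# The percentages are illustrative, not measured values.
steps = {  # PEEP (cm H2O) -> (collapse %, overdistension %)
    20: (2, 30), 15: (5, 18), 10: (12, 9), 5: (22, 4), 0: (40, 1),
}

def best_compromise_peep(steps, max_collapse=15):
    eligible = [(peep, od) for peep, (col, od) in steps.items() if col <= max_collapse]
    return min(eligible, key=lambda x: (x[1], x[0]))[0]

print(best_compromise_peep(steps))  # -> 10 for these illustrative numbers
```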
At each patient’s optimal PEEP setting, the median tidal volume was similar: 3.8 mL/kg ideal body weight for PEEP 15, 3.9 mL/kg ideal body weight for PEEP 10, and 4.3 mL/kg ideal body weight for PEEP 5.
Respiratory system compliance was also similar among the groups, at 20 mL/cm H2O, 18 mL/cm H2O, and 21 mL/cm H2O, respectively. However, arterial partial pressure of oxygen decreased as the PEEP setting decreased, dropping from 148 mm Hg to 128 mm Hg to 100 mm Hg, respectively. Conversely, arterial partial pressure of CO2 increased, from 32 mm Hg to 41 mm Hg.
EIT also allowed clinicians to pinpoint areas of distension or collapse. As PEEP decreased, there was steady ventilation loss in the medial-dorsal and dorsal regions, which shifted to the medial-ventral and ventral regions.
“Most end-expiratory lung impedances were located in medial-dorsal and medial-ventral regions, whereas the dorsal region constantly contributed less than 10% of total end-expiratory lung impedance,” the authors noted.
“The broad variability of EIT-based best compromise PEEPs in these patients with severe ARDS reinforces the need to provide ventilation settings individually tailored to the regional ARDS-lesion distribution,” they concluded. “To achieve that goal, EIT seems to be an interesting bedside noninvasive tool to provide real-time monitoring of the PEEP effect and ventilation distribution on ECMO.”
Dr. Franchineau reported receiving speaker fees from Maquet.
This first study to examine electrical impedance tomography (EIT) in patients under extracorporeal membrane oxygenation shows important clinical potential, but also raises important questions, Claude Guerin, MD, wrote in an accompanying editorial. (Am J Respir Crit Care Med. doi: 10.1164/rccm.201701-0167ed).
The ability to titrate PEEP settings to a patient’s individual needs could substantially reduce the risk of lung derecruitment or damage by overdistension.
The current study, however, has limitations that must be addressed in the next phase of research before this technique can be adopted into clinical practice, Dr. Guerin said: The 5-cm H2O PEEP steps may be too large to detect relevant changes.
In several other studies, PEEP was reduced more gradually, in 2- to 3-cm H2O steps. “Surprisingly, PEEP was reduced to 0 cm H2O in this study, with this step maintained for 20 minutes, raising the risk of derecruitment and further stretching once higher PEEP levels were resumed.”
The investigators did not perform any recruitment maneuvers before proceeding with PEEP adjustment. This is contrary to what has been done in prior animal and human studies.
The computation of driving pressure was done without taking total PEEP into account. “As total PEEP is frequently greater than PEEP in patients with [acute respiratory distress syndrome], driving pressure can be overestimated with the common computation.”
The optimal PEEP that the investigators aimed for was determined retrospectively from an offline analysis of the data; this technique would not be suitable for bedside management. “When ‘optimal’ PEEP was defined from [EIT criteria], from a higher PaO2 [arterial partial pressure of oxygen] or from a higher compliance of the respiratory system during the decremental PEEP trial, these three criteria were observed together in only four patients with [acute respiratory distress syndrome].”
The study was done only once and cannot comply with the need for regular PEEP-level assessments over time, as could be done with some other strategies.
“Further studies should also consider taking into account the role of chest wall mechanics,” Dr. Guerin said.
Nevertheless, he concluded, EIT-based PEEP titration for each individual patient represents a promising tool for assisting with the treatment of acute respiratory distress syndrome, and should be fully investigated in a large, prospective trial.
Dr. Guerin is a pulmonologist at the Hospital de la Croix Rousse, Lyon, France. He had no relevant financial disclosures.
FROM THE AMERICAN JOURNAL OF RESPIRATORY AND CRITICAL CARE MEDICINE
Key clinical point: Electrical impedance tomography can individually tailor PEEP settings in patients on ECMO for severe ARDS; no single PEEP setting suited all patients.
Major finding: The PEEP settings that minimized both overdistension and collapse were PEEP 15 in seven patients, PEEP 10 in six patients, and PEEP 5 in two patients.
Data source: A prospective study of 15 patients.
Disclosures: Dr. Franchineau reported receiving speaker fees from Maquet. Dr. Guerin had no relevant financial disclosures.
Exenatide improved motor function in Parkinson’s patients with off-medication symptoms
In a phase 2 trial, an antidiabetes drug significantly improved motor function in patients with Parkinson’s disease who had off-medication symptoms despite dopaminergic therapy.
Patients taking exenatide (Byetta), an agonist of the GLP-1 receptor, experienced a mean 2.5-point improvement in the part 3 motor score on the Movement Disorders Society Unified Parkinson’s Disease Rating Scale (MDS-UPDRS) over 48 weeks, compared with a 1-point decline in patients taking placebo, Dilan Athauda, MBBS, and his colleagues reported (Lancet. 2017 Aug 3. doi: 10.1016/S0140-6736[17]31585-4).
The mechanism of action is unclear, the investigators noted. Dopamine transporter scanning with [123I]FP-CIT single photon emission CT (DaTscan) revealed a tantalizing hint of neuroprotection, as the rate of decline in dopaminergic neurons seemed to be slightly reduced among those taking the medication. However, it’s also possible that exenatide somehow altered the pharmacokinetics of levodopa and other dopaminergic drugs, making them more effective, Dr. Athauda and his associates said.
Still, the double-blinded study’s positive results are encouraging, and they replicate those of the team’s 2013 open-label trial (J Clin Invest. 2013 Jun 3;123[6]:2730-6), they asserted.
“Whether this drug acts as a novel symptomatic agent, influences compensatory responses or behaviors, or has neuroprotective effects on underlying pathology is unclear, but there is a strong indication that GLP-1 receptor agonists may have a useful role in future treatment of Parkinson’s disease,” the investigators wrote.
The study randomized 62 patients who had Parkinson’s with off-medication motor symptoms to weekly injections of either placebo or 2 mg subcutaneous exenatide for 48 weeks. A 12-week washout period followed. Despite randomization, there were some important baseline differences between the groups. Those taking exenatide were older (62 vs. 58 years) and had a higher score on the part 3 motor score of the MDS-UPDRS, the study’s primary endpoint (32.8 vs. 27.1). Exenatide users were also taking a lower mean dopaminergic drug dose (mean 774 mg vs. 826 mg levodopa equivalent).
Patients were assessed in clinic every 12 weeks, not only for the primary endpoint of off-medication motor function, but also for cognition, quality of life, mood, and nonmotor symptoms. All assessments were done in the morning, after at least 8 hours off levodopa or 36 hours off long-acting dopaminergic drugs.
Exenatide’s benefit in off-medication motor function was apparent after the first 12 weeks of treatment, Dr. Athauda and his coauthors noted. The MDS-UPDRS motor score had decreased from 32.8 to 30.2 in the active group, and increased from 27.1 to 27.6 in the placebo group. Those taking exenatide held that improvement for the entire 48 weeks, ending at 30.3 (2.5 points below baseline). Those taking placebo continued to worsen, ending at 28.8 (1.7 points above baseline). The adjusted between-group difference was 4.3 points, in favor of exenatide (P = .0026).
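For orientation, the within-group changes can be tallied directly from the reported group means. Note that the trial's 4.3-point figure is model-adjusted, so this raw arithmetic only approximates it:

```python
# Raw arithmetic on the reported MDS-UPDRS part 3 means (lower = better).
# The trial's -4.3-point figure is model-adjusted; this unadjusted tally is
# shown only to make the direction and rough size of the effect concrete.

exenatide_change = 30.3 - 32.8   # end minus baseline: about -2.5 (improved)
placebo_change = 28.8 - 27.1     # about +1.7 (worsened)
raw_difference = exenatide_change - placebo_change  # about -4.2, favoring exenatide

print(round(exenatide_change, 1), round(placebo_change, 1), round(raw_difference, 1))
# -2.5 1.7 -4.2
```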
At 60 weeks, after the 12-week washout period, patients who had taken exenatide were still doing better, with an adjusted between-group difference of –3.5 points (P = .0318).
However, off-medication motor function was the only improvement noted in the trial. Exenatide did not affect any secondary endpoints, including any sections of the on-medication MDS-UPDRS.
The investigators noted that, during the 60 weeks, mean levodopa equivalent dosage increased more in the active group than in the placebo group (132 vs. 112 mg). This brought the active group up much closer to the placebo group’s dose than had been observed at baseline (906 vs. 942 mg).
Exenatide was generally well tolerated, with the exception of a mean 2.6-kg weight loss among those taking it. This was likely related to an increased incidence of gastrointestinal side effects. Weight returned to normal during the washout period.
There were three dropouts: two in the placebo arm, because of worsening anxiety and worsening dyskinesia, and one in the exenatide arm, because of asymptomatic hyperamylasemia.
The investigators also measured dopamine transporter availability via DaTscan to assess exenatide’s potential impact on dopaminergic neurons. Although areas of decreased binding declined in both groups, the exenatide group showed a signal of reduced rate of decline in the right and left putamen.
“However,” the authors noted, “because this signal was detectable only at uncorrected height thresholds of P = .0034 or less, these data would benefit from larger confirmatory studies or studies of patients at an earlier disease stage when the rate of change of DaTscan uptake is greater, making group differences more readily detectable.”
It won’t be easy to discover how exenatide exerts its benefit, the authors said. They pointed to a robust compendium of preclinical data suggesting that the drug reduces inflammation, promotes mitochondrial biogenesis, exerts neurotrophic effects, stimulates neurogenesis, and restores neuronal insulin signaling.
“Whether some or all of these mechanisms contributed to the clinical effects in our study cannot be definitively established, but one or several of these mechanisms could have acted in synergy to promote cell survival, preserve compensatory responses, and prevent maladaptive responses.”
The Michael J. Fox Foundation for Parkinson’s Research funded the study. Dr. Athauda had no financial disclosures but several of his coauthors disclosed relationships with pharmaceutical companies.
The EXENATIDE-PD trial is an exciting peek into a potential new mechanism in treating Parkinson’s, but it must be viewed cautiously.
The baseline between-group differences are concerning, and although the authors tried to adjust for this discrepancy, a confounding effect for differences in concomitant dopaminergic therapy during the trial cannot be excluded.
It is also puzzling that only off-medication dyskinesias improved without any on-medication improvements or other benefits. The 12-week washout period also might have been too short to eliminate potentially long-lasting symptomatic effects of exenatide.
The DaTscan results are not completely reliable in this analysis because it has previously been shown that GLP-1 receptor stimulation in rodents inhibits the ability of cocaine to increase extracellular dopamine concentrations, which is associated with increased DAT surface expression in the forebrain lateral septum. If present in human beings, such a pharmacological mechanism could potentially account for the symptomatic motor effects of exenatide in Parkinson’s disease.
Nevertheless, the MDS-UPDRS part 3 improvements at 12 weeks do suggest that exenatide has symptomatic motor effects. It’s just not clear how the drug exerts those effects. Other potential symptomatic pharmacological mechanisms of exenatide could include improved functioning in surviving dopaminergic neurons or modified pharmacokinetics of dopaminergic treatments.
Whether exenatide acts as a novel symptomatic agent or has neuroprotective effects on the underlying Parkinson’s disease pathology remains unclear, but this study opens up a new therapeutic avenue in treatment of Parkinson’s disease.
Werner Poewe, MD, is professor of neurology and director of the department of neurology at Innsbruck (Austria) Medical University. Klaus Seppi, MD, is assistant professor of neurology there. Both reported a variety of financial relationships with companies that make drugs for Parkinson’s. Their comments are taken from an editorial accompanying the EXENATIDE-PD trial report (Lancet. 2017 Aug 3. doi: 10.1016/S0140-6736[17]32101-3).
An anti-diabetes drug significantly improved motor function in patients with Parkinson’s disease who had off-medication symptoms despite dopaminergic therapy in a phase 2 trial.
Patients taking exenatide (Byetta), an agonist of the GLP-1 receptor, experienced a mean 2.5-point improvement in the part 3 motor score on the Movement Disorders Society Unified Parkinson’s Disease Rating Scale (MDS-UPDRS) over 48 weeks, compared with a 1-point decline in patients taking placebo, Dilan Athauda, MBBS, and his colleagues reported (Lancet. 2017 Aug 3. doi: 10.1016/S0140-6736[17]31585-4).
The mechanism of action is unclear, the investigators noted. Dopamine transporter scanning with [123I]FP-CIT single photon emission CT (DaTscan) revealed a tantalizing hint of neuroprotection, as the rate of decline in dopaminergic neurons seemed to be slightly reduced among those taking the medication. However, it’s also possible that exenatide somehow altered the pharmacokinetics of levodopa and other dopaminergic drugs, making them more effective, Dr. Athauda and his associates said.
Still, the double-blinded study’s positive results are encouraging, and they replicate those of the team’s 2013 open-label trial (J Clin Invest. 2013 Jun 3;123[6]:2730-6), they asserted.
“Whether this drug acts as a novel symptomatic agent, influences compensatory responses or behaviors, or has neuroprotective effects on underlying pathology is unclear, but there is a strong indication that GLP-1 receptor agonists may have a useful role in future treatment of Parkinson’s disease,” the investigators wrote.
The study randomized 62 patients who had Parkinson’s with off-medication motor symptoms to weekly injections of either placebo or 2 mg subcutaneous exenatide for 48 weeks. A 12-week washout period followed. Despite randomization, there were some important baseline differences between the groups. Those taking exenatide were older (62 vs. 58 years) and had a higher score on the part 3 motor score of the MDS-UPDRS, the study’s primary endpoint (32.8 vs. 27.1). Exenatide users were also taking a lower mean dopaminergic drug dose (mean 774 mg vs. 826 mg levodopa equivalent).
Patients were assessed in clinic every 12 weeks, not only for the primary endpoint of dyskinesia off-medication, but for cognition, quality of life, mood, and nonmotor symptoms. All assessments were done in the morning, after at least 8 hours off levodopa or 36 hours off long-acting dopaminergic drugs.
Exenatide’s benefit in off-medication dyskinesias was apparent after the first 12 weeks of treatment, Dr. Athauda and his coauthors noted. The MDS-UPDRS score had decreased from 32.8 to 30.2 in the active group, and increased from 27.1 to 27.6 in the placebo group. Those taking exenatide held steady at that improvement for the entire 48 weeks, ending at 30.3 (2.3 points below baseline). Those taking placebo continued to decline, ending at 28.8 (1.7 points above baseline). The adjusted between-group difference was 4.3 points, in favor of exenatide (P = .0026).
At 60 weeks, after the 12-week washout period, patients who took exenatide were still doing better, reaching an adjusted between-group difference of –3.5 (P-= .0318).
However, off-medication dyskinesia was the only improvement noted in the trial. Exenatide did not affect any secondary endpoints, including any sections of the on-medication MDS-UPDRS.
The investigators noted that, during the 60 weeks, mean levodopa equivalent dosage increased more in the active group than in the placebo group (132 vs. 112 mg). This brought the active group up much closer to the placebo group’s dose than had been observed at baseline (906 vs. 942 mg).
Exenatide was generally well tolerated, with the exception of a mean 2.6-kg weight loss among those taking it. This was likely related to an increased incidence of gastrointestinal side effects. Weight returned to normal during the washout period.
There were three drop-outs, two in the placebo arm because of worsening anxiety and worsening dyskinesia and one in the exenatide arm because of asymptomatic hyperamylasemia.
The investigators also measured dopamine transporter availability via DaTscan to assess exenatide’s potential impact on dopaminergic neurons. Although areas of decreased binding declined in both groups, the exenatide group showed a signal of reduced rate of decline in the right and left putamen.
“However,” the authors noted, “because this signal was detectable only at uncorrected height thresholds of P = .0034 or less, these data would benefit from larger confirmatory studies or studies of patients at an earlier disease stage when the rate of change of DaTscan uptake is greater, making group differences more readily detectable.”
It won’t be easy to discover how exenatide exerts its benefit, the authors said. They pointed to a robust compendium of preclinical data suggesting that the drug reduces inflammation, promotes mitochondrial biogenesis, exerts neurotrophic effects, stimulates neurogenesis, and restores neuronal insulin signaling.
“Whether some or all of these mechanisms contributed to the clinical effects in our study cannot be definitively established, but one or several of these mechanisms could have acted in synergy to promote cell survival, preserve compensatory responses, and prevent maladaptive responses.”
The Michael J. Fox Foundation for Parkinson’s Research funded the study. Dr. Athauda had no financial disclosures but several of his coauthors disclosed relationships with pharmaceutical companies.
[email protected]
On Twitter @alz_gal
An anti-diabetes drug significantly improved motor function in patients with Parkinson’s disease who had off-medication symptoms despite dopaminergic therapy in a phase 2 trial.
Patients taking exenatide (Byetta), an agonist of the GLP-1 receptor, experienced a mean 2.5-point improvement in the part 3 motor score on the Movement Disorders Society Unified Parkinson’s Disease Rating Scale (MDS-UPDRS) over 48 weeks, compared with a 1-point decline in patients taking placebo, Dilan Athauda, MBBS, and his colleagues reported (Lancet. 2017 Aug 3. doi: 10.1016/S0140-6736[17]31585-4).
The mechanism of action is unclear, the investigators noted. Dopamine transporter scanning with [123I]FP-CIT single photon emission CT (DaTscan) revealed a tantalizing hint of neuroprotection, as the rate of decline in dopaminergic neurons seemed to be slightly reduced among those taking the medication. However, it’s also possible that exenatide somehow altered the pharmacokinetics of levodopa and other dopaminergic drugs, making them more effective, Dr. Athauda and his associates said.
Still, the double-blinded study’s positive results are encouraging, and they replicate those of the team’s 2013 open-label trial (J Clin Invest. 2013 Jun 3;123[6]:2730-6), they asserted.
“Whether this drug acts as a novel symptomatic agent, influences compensatory responses or behaviors, or has neuroprotective effects on underlying pathology is unclear, but there is a strong indication that GLP-1 receptor agonists may have a useful role in future treatment of Parkinson’s disease,” the investigators wrote.
The study randomized 62 patients who had Parkinson’s with off-medication motor symptoms to weekly injections of either placebo or 2 mg subcutaneous exenatide for 48 weeks. A 12-week washout period followed. Despite randomization, there were some important baseline differences between the groups. Those taking exenatide were older (62 vs. 58 years) and had a higher score on the part 3 motor score of the MDS-UPDRS, the study’s primary endpoint (32.8 vs. 27.1). Exenatide users were also taking a lower mean dopaminergic drug dose (mean 774 mg vs. 826 mg levodopa equivalent).
Patients were assessed in clinic every 12 weeks, not only for the primary endpoint of off-medication motor function, but also for cognition, quality of life, mood, and nonmotor symptoms. All assessments were done in the morning, after at least 8 hours off levodopa or 36 hours off long-acting dopaminergic drugs.
Exenatide’s benefit on off-medication motor function was apparent after the first 12 weeks of treatment, Dr. Athauda and his coauthors noted. The MDS-UPDRS motor score had decreased from 32.8 to 30.2 in the active group and increased from 27.1 to 27.6 in the placebo group. Those taking exenatide held steady at that improvement for the entire 48 weeks, ending at 30.3 (2.3 points below baseline). Those taking placebo continued to worsen, ending at 28.8 (1.7 points above baseline). The adjusted between-group difference was 4.3 points in favor of exenatide (P = .0026).
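As a back-of-the-envelope check, the raw group means quoted above can be turned into change scores directly. Note that the article's parenthetical change scores and the 4.3-point figure come from an adjusted statistical model, so this unadjusted arithmetic differs slightly; the sketch below (variable names are ours, for illustration only) simply shows where a roughly 4-point between-group difference comes from:

```python
# Unadjusted difference in MDS-UPDRS part 3 change scores, computed
# from the group means reported in the article (not the trial's
# adjusted statistical model).
exenatide_baseline, exenatide_week48 = 32.8, 30.3
placebo_baseline, placebo_week48 = 27.1, 28.8

exenatide_change = exenatide_week48 - exenatide_baseline  # negative = improvement
placebo_change = placebo_week48 - placebo_baseline        # positive = worsening

# Difference in changes; the adjusted analysis reported 4.3 points.
unadjusted_diff = exenatide_change - placebo_change
print(round(unadjusted_diff, 1))  # -4.2, i.e. ~4.2 points favoring exenatide
```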
At 60 weeks, after the 12-week washout period, patients who had taken exenatide were still doing better, with an adjusted between-group difference of –3.5 points (P = .0318).
However, off-medication motor function was the only improvement noted in the trial. Exenatide did not affect any secondary endpoints, including any sections of the on-medication MDS-UPDRS.
The investigators noted that, during the 60 weeks, mean levodopa equivalent dosage increased more in the active group than in the placebo group (132 vs. 112 mg). This brought the active group up much closer to the placebo group’s dose than had been observed at baseline (906 vs. 942 mg).
Exenatide was generally well tolerated; the main adverse effect was a mean 2.6-kg weight loss among those taking it, likely related to an increased incidence of gastrointestinal side effects. Weight returned to baseline during the washout period.
There were three dropouts: two in the placebo arm (worsening anxiety and worsening dyskinesia) and one in the exenatide arm (asymptomatic hyperamylasemia).
The investigators also measured dopamine transporter availability via DaTscan to assess exenatide’s potential impact on dopaminergic neurons. Although areas of decreased binding declined in both groups, the exenatide group showed a signal of reduced rate of decline in the right and left putamen.
“However,” the authors noted, “because this signal was detectable only at uncorrected height thresholds of P = .0034 or less, these data would benefit from larger confirmatory studies or studies of patients at an earlier disease stage when the rate of change of DaTscan uptake is greater, making group differences more readily detectable.”
It won’t be easy to discover how exenatide exerts its benefit, the authors said. They pointed to a robust compendium of preclinical data suggesting that the drug reduces inflammation, promotes mitochondrial biogenesis, exerts neurotrophic effects, stimulates neurogenesis, and restores neuronal insulin signaling.
“Whether some or all of these mechanisms contributed to the clinical effects in our study cannot be definitively established, but one or several of these mechanisms could have acted in synergy to promote cell survival, preserve compensatory responses, and prevent maladaptive responses.”
The Michael J. Fox Foundation for Parkinson’s Research funded the study. Dr. Athauda had no financial disclosures but several of his coauthors disclosed relationships with pharmaceutical companies.
[email protected]
On Twitter @alz_gal
FROM THE LANCET
Key clinical point: Exenatide, a GLP-1 receptor agonist, significantly improved off-medication motor function in patients with Parkinson’s disease who were already on dopaminergic therapy.
Major finding: After 48 weeks, those taking the drug had a 4.3-point advantage over those taking placebo on the Movement Disorders Society Unified Parkinson’s Disease Rating Scale part 3 motor score.
Data source: The phase 2, double-blind, randomized, placebo-controlled study comprised 62 patients with moderate Parkinson’s.
Disclosures: The Michael J. Fox Foundation for Parkinson’s Research funded the study. Dr. Athauda had no financial disclosures; several of his coauthors disclosed relationships with pharmaceutical companies.
What’s in a name?
The quest for earlier diagnosis and treatment of polycystic ovarian syndrome may be branding too many young women with an unnecessary – and emotionally burdensome – tag, experts fear.
There’s little doubt that the classic phenotypes of PCOS, driven by androgen excess, can impair fertility and increase the long-term risks of cardiovascular complications and type 2 diabetes mellitus. But the recent expansion of those phenotypes to include categories that are not androgen driven has vastly increased the number of diagnosable cases, especially in teens. Recent analyses suggest that up to 21% of teenage girls now could potentially fit one of the phenotypes – a considerable increase from the 4%-6% prevalence associated with the original National Institutes of Health criteria of 20 years ago.
Some of these newly established phenotypes include signs and symptoms that may be driven by genetics or lifestyle instead of hormones, like hirsutism, acne, and obesity. Other problems may resolve spontaneously as a girl matures or loses weight, leaving her with a perfectly normal physiology, but a lifelong PCOS label.
Tessa Copp, a PhD student at the University of Sydney, is particularly interested in this issue. She and her mentor, psychologist Jesse Janssen, PhD, also of the university, recently published their analysis of the potential harms of these ever-proliferating PCOS diagnostic categories (BMJ. 2017;358:j3694).
“Women with a diagnosis of PCOS tend to have higher rates of depression and anxiety, a negative body image, and reduced relationship and sexual satisfaction,” Ms. Copp said in an interview. “But it’s unclear if those are because of the condition or the impact of getting a diagnosis associated with infertility and poor long-term health outcomes.”
“This label can induce fear and anxiety about the future. And young women may feel pressured to make altered life decisions about their future fertility at times when they may not be prepared to do so.”
Evolving diagnostic criteria
Three sets of diagnostic criteria have been proposed over the past 3 decades, said Ricardo Azziz, MD, chief officer of academic health and hospital affairs for the State University of New York system, and a renowned expert on PCOS. Dr. Azziz has had a hand in constructing several of the current diagnostic criteria.
In the 1990s, the key diagnostic features of PCOS were clinical or biochemical hyperandrogenism and chronic oligoanovulation. But in 2003, members of the European Society for Human Reproduction and Embryology and the American Society for Reproductive Medicine met in Rotterdam, the Netherlands, to review the data and refine these criteria. For the first time, ultrasound entered the picture; polycystic ovarian morphology became part of the diagnostic criteria.
A diagnosis using the new Rotterdam criteria required two of three characteristics: hyperandrogenicity, chronic ovulatory dysfunction, and polycystic ovarian morphology. These changes substantially expanded the number of diagnosable patients, Dr. Azziz said in an interview. Many have since criticized the inclusion of polycystic ovaries, because they are often present in women who don’t have any other PCOS symptom, especially younger women.
In 2006, the Androgen Excess & PCOS Society took a crack at the issue. They conducted a large data review and concluded that PCOS diagnosis should be based on the presence of clinical or biochemical hyperandrogenism in combination with ovarian dysfunction, thus taking the ovaries completely out of the picture.
This definition, however, resulted in some confusion in clinical practice, Dr. Azziz said. So in 2012, the National Institutes of Health gathered an international panel of PCOS experts, who reviewed the pros and cons of the diagnostic system. The panel endorsed the broader Rotterdam criteria, which included ovarian morphology, but issued a detailed description of four phenotypes. These are now the ones most often used in clinical practice:
• A. Hyperandrogenicity (clinical or biochemical) with ovarian dysfunction and polycystic ovarian morphology
• B. Hyperandrogenicity plus ovarian dysfunction
• C. Hyperandrogenicity plus polycystic ovarian morphology
• D. Ovarian dysfunction plus polycystic ovarian morphology
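The "two of three" logic behind these phenotypes can be made concrete. The sketch below (function and parameter names are ours; the inputs stand in for clinical judgments, not computable values) encodes the four-phenotype scheme described above:

```python
# Illustrative encoding of the 2012 NIH phenotype scheme built on the
# Rotterdam criteria: a diagnosis requires at least two of the three
# features, and the combination present determines the phenotype.
def pcos_phenotype(hyperandrogenism: bool,
                   ovulatory_dysfunction: bool,
                   polycystic_morphology: bool):
    """Return the phenotype letter (A-D), or None if fewer than two
    of the three Rotterdam criteria are met."""
    if hyperandrogenism and ovulatory_dysfunction and polycystic_morphology:
        return "A"  # all three features
    if hyperandrogenism and ovulatory_dysfunction:
        return "B"
    if hyperandrogenism and polycystic_morphology:
        return "C"
    if ovulatory_dysfunction and polycystic_morphology:
        return "D"
    return None  # fewer than two criteria: no Rotterdam diagnosis
```

Phenotypes C and D are the post-2003 additions the experts question: C lacks ovulatory dysfunction, and D lacks the androgen excess that drove the original definition.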
When is PCOS not PCOS?
Although Dr. Azziz has been a leader in this effort to impose diagnostic order, he also recognizes the system’s potential problems, especially when it comes to teenagers.
“I have been involved in each of these successive expansions, and I understand the concern of people who worry that we are getting further away from classic PCOS, especially by adding phenotypes with normal ovulation. Are these actually the same disorder? Do they imply the same risks? Using the Rotterdam criteria does capture the greatest number of patients, but we have to be very careful of these phenotypes.”
Phenotypes A and B have accrued the most long-term data and clearly carry associated long-term cardiovascular and metabolic risks. For these women, Dr. Azziz said, early diagnosis leads to early treatment and a jump start on modifying those risks.
The picture is much different for phenotypes C and D. “As we get more data, it becomes increasingly clear that types C and D don’t behave in the same way or carry the same risks.”
Part of the problem is that some of the secondary definitions of signs and symptoms are themselves not well defined and don’t account for other possible etiologies, said Lubna Pal, MD, another PCOS expert. Hirsutism and acne are good examples. “What about the young woman who is overweight, and complains about acne and being hairy? You might think of these as PCOS symptoms, but in her history, find out that her mother or sisters also have a lot of hair and had acne. How do you treat that information then?”
None of the classic signs and symptoms of PCOS have been validated in teens, said Dr. Pal, director of the Polycystic Ovary Syndrome (PCOS) Program at the Yale Reproductive Endocrinology center, New Haven, Conn. There are no validated cutoffs for abnormal androgen in teen females, and no one really knows whether the adult values are meaningful in younger women.
Nor is there an age-specific cut-off for “polycystic-appearing ovaries,” Ms. Copp said. “Right now, the Rotterdam criteria define this as 12 or more follicles on ultrasound, but that was based on the ultrasound technology available at the time – and that count might be normal in early adulthood.” She noted that in 2014, an expert panel recommended increasing the threshold to more than 24 follicles per ovary, in accordance with findings using advanced imaging techniques. This recommendation has not been adopted.
Some classic symptoms, like menstrual irregularity, acne, and high body weight, can also be part of a transitory developmental phase as a girl moves through puberty into a more adult physiology. Others, like insulin resistance, can resolve if the patient loses weight, Dr. Pal said. So are those things really indicators of a true PCOS state?
These questions all need to be answered, said Ms. Copp. In the meantime, young women are being tagged as having a chronic disorder that might not be there, or if it is, might spontaneously resolve.
“There are at least three studies in different populations that have found that the prevalence of PCOS falls rapidly after 25 years of age,” she said. “So these signs and symptoms of PCOS might really be transitory for many.”
Women with “true,” androgen-driven PCOS benefit from early identification, treatment, and metabolic and cardiovascular risk management. But everyone interviewed for this article agreed that a PCOS label for every young woman who is overweight, hirsute, acne-prone, and irregular in her periods is inappropriate and potentially harmful. All three mentioned that while a diagnostic label may bring an element of relief (“I finally know what’s going on”), it carries attendant anxiety about a future with an incurable disorder that may impart little long-term risk, or even daily bother. There is also a risk of potentially unnecessary medical screenings and interventions, Ms. Copp noted.
Treat the syndrome – or the patient?
A better way to proceed, Dr. Pal suggested, is to drop the labels and embrace the patient’s experience.
“I ask them, ‘What is your bother?’ Is it the irregular periods? The acne? The hair? The weight? It may be that a more mature woman wants to start a family, and that is the issue we address. Or for younger women, it may be the other issues, and for them, that should be our primary concern.”
Dr. Azziz agreed.
“We need to confirm the diagnosis, and then confirm any related disorders, and make sure our patients are healthy in other ways. PCOS alone is not so much a concern. We simply cannot treat all our patients the same. Instead, we need to get very clear with them about their own objectives. Are they trying to lose weight or get pregnant? We are talking about basic personalized medicine here, not labeling just for the sake of giving something a name.”
Ms. Copp’s thoughts were in the same vein.
“Do all these women really need to be ‘diagnosed,’ or can we monitor their symptoms and treat what is bothersome without a label? A diagnostic label doesn’t change the treatment, especially for young women whose PCOS symptoms might be transient and not require treatment at all.”
None of those interviewed for this article had any relevant financial disclosures.
[email protected]
On Twitter @Alz_Gal
The quest for earlier diagnosis and treatment of polycystic ovarian syndrome may be branding too many young women with an unnecessary – and emotionally burdensome – tag, experts fear.
There’s little doubt that the classic phenotypes of PCOS, driven by androgen excess, can impair fertility and increase the long-term risks of cardiovascular complications and type 2 diabetes mellitus. But the recent expansion of those phenotypes to include categories that are not androgen driven has vastly increased the number of diagnosable cases, especially in teens. Recent analyses suggest that up to 21% of teenage girls now could potentially fit one of the phenotypes – a considerable increase from the 4%-6% prevalence associated with the original National Institutes of Health criteria of 20 years ago.
Some of these newly established phenotypes include signs and symptoms that may be driven by genetics or lifestyle instead of hormones, like hirsutism, acne, and obesity. Other problems may resolve spontaneously as a girl matures or loses weight, leaving her with a perfectly normal physiology, but a lifelong PCOS label.
Tessa Copp, a PhD student at the University of Sydney, is particularly interested in this issue. She and her mentor, psychologist Jesse Janssen, PhD, also of the university, recently published their analysis of the potential harms of these ever-proliferating PCOS diagnostic categories (BMJ. 2017;358:j3694).
“Women with a diagnosis of PCOS tend to have higher rates of depression and anxiety, a negative body image, and reduced relationship and sexual satisfaction,” Ms. Copp said in an interview. “But it’s unclear if those are because of the condition or the impact of getting a diagnosis associated with infertility and poor long-term health outcomes.”
“This label can induce fear and anxiety about the future. And young women may feel pressured to make altered life decisions about their future fertility at times when they may not be prepared to do so.”
Evolving diagnostic criteria
Three sets of diagnostic criteria have been proposed over the past 3 decades, said Ricardo Azziz, MD, chief officer of academic health and hospital affairs for the State University of New York system, and a renowned expert on PCOS. Dr. Azziz has had a hand in constructing several of the current diagnostic criteria.
In the 1990s, the key diagnostic features of PCOS were clinical or biochemical hyperandrogenism and chronic oligoanovulation. But in 2003, members of the European Society for Human Reproduction and Embryology and the American Society for Reproductive Medicine met in Rotterdam, the Netherlands, to review the data and refine these criteria. For the first time, ultrasound entered the picture; polycystic ovarian morphology became part of the diagnostic criteria.
A diagnosis using the new Rotterdam criteria required two of three characteristics: hyperandrogenicity, chronic ovulatory dysfunction, and polycystic ovarian morphology. These changes substantially expanded the number of diagnosable patients, Dr. Azziz said in an interview. Many have since criticized the inclusion of polycystic ovaries, because they are often present in women who don’t have any other PCOS symptom, especially younger women.
In 2006, the Androgen Excess & PCOS Society took a crack at the issue. They conducted a large data review and concluded that PCOS diagnosis should be based on the presence of clinical or biochemical hyperandrogenism in combination with ovarian dysfunction, thus taking the ovaries completely out of the picture.
This definition, however, resulted in some confusion in clinical practice, Dr. Azziz said. So in 2012, the National Institutes of Health gathered an international panel of PCOS experts, who reviewed the pros and cons of the diagnostic system. The panel endorsed the broader Rotterdam criteria, which included ovarian morphology, but issued a detailed description of four phenotypes. These are now the ones most often used in clinical practice:
• A. Hyperandrogenicity (clinical or biochemical) with ovarian dysfunction and polycystic ovarian morphology
• B. Hyperandrogenicity plus ovarian dysfunction
• C. Hyperandrogenicity plus polycystic ovarian morphology
• D. Ovarian dysfunction plus polycystic ovarian morphology
When is PCOS not PCOS?
Although Dr. Azziz has been a leader in this effort to impose diagnostic order, he also gets the system’s potential problems, especially when it comes to teenagers.
“I have been involved in each of these successive expansions, I understand the concern of people who worry that we are getting further away from classic PCOS, especially by adding phenotypes with normal ovulation. Are these actually the same disorder? Do they imply the same risks? Using the Rotterdam criteria does capture the greatest number of patients, but we have to be very careful of these phenotypes.”
Phenotypes A and B have accrued the most long-term data and clearly carry associated long-term cardiovascular and metabolic risks. For these women, Dr. Azziz said, early diagnosis leads to early treatment and a jump start on modifying those risks.
The picture is much different for phenotypes C and D. “As we get more data, it becomes increasingly clear that types C and D don’t behave in the same way or carry the same risks.”
Part of the problem is that some of the secondary definitions of signs and symptoms are themselves not well defined and don’t account for other possible etiologies, said Lubna Pal, MD, another PCOS expert. Hirsutism and acne are good examples. “What about the young woman who is overweight, and complains about acne and being hairy? You might think of these as PCOS symptoms, but in her history, find out that her mother or sisters also have a lot of hair and had acne. How do you treat that information then?”
None of the classic signs and symptoms of PCOS have been validated in teens, said Dr. Pal, director of the Polycystic Ovary Syndrome (PCOS) Program at the Yale Reproductive Endocrinology center, New Haven, Conn. There are no validated cutoffs for abnormal androgen in teen females, and no one really knows whether the adult values are meaningful in younger women.
Nor is there an age-specific cut-off for “polycystic-appearing ovaries,” Ms. Copp said. “Right now, the Rotterdam criteria define this as 12 or more follicles on ultrasound, but that was based on the ultrasound technology available at the time – and that count might be normal in early adulthood.” She noted that in 2014, an expert panel recommended increasing the threshold to more than 24 follicles per ovary, in accordance with findings using advanced imaging techniques. This recommendation has not been adopted.
Some classic symptoms, like menstrual irregularity, acne, and high body weight, can also be part of a transitory developmental phase as a girl moves through puberty into a more adult physiology. Others, like insulin resistance, can resolve if the patient loses weight, Dr. Pal said. So are those things really indicators of a true PCOS state?
These questions all need to be answered, said Ms. Copp. In the meantime, young women are being tagged as having a chronic disorder that might not be there, or if it is, might spontaneously resolve.
“There are at least three studies in different populations that have found that the prevalence of PCOS falls rapidly after 25 years of age,” she said. “So these signs and symptoms of PCOS might really be transitory for many.”
Women with “true,” androgen-driven PCOS benefit from early identification, treatment, and metabolic and cardiovascular risk management. But everyone interviewed for this article agreed that a PCOS label for every young woman who is overweight, hirsute, acne-prone, and irregular in her periods is inappropriate and potentially harmful. All three mentioned that while a diagnostic label may bring an element of relief, as in “I finally know what’s going on,” it carries attendant anxiety about a future with an incurable disorder that may not even impart much long-term risk, or even daily bother. There is also a risk of potentially unnecessary medical screenings and interventions, Ms. Copp noted.
Treat the syndrome – or the patient?
A better way to proceed, Dr. Pal suggested, is to drop the labels and embrace the patient’s experience.
“I ask them, ‘What is your bother?’ Is it the irregular periods? The acne? The hair? The weight? It may be that a more mature woman wants to start a family, and that is the issue we address. Or for younger women, it may be the other issues, and for them, that should be our primary concern.”
Dr. Azziz agreed.
“We need to confirm the diagnosis, and then confirm any related disorders, and make sure our patients are healthy in other ways. PCOS alone is not so much a concern. We simply cannot treat all our patients the same. Instead, we need to get very clear with them about their own objectives. Are they trying to lose weight or get pregnant? We are talking about basic personalized medicine here, not labeling just for the sake of giving something a name.”
Ms. Copp’s thoughts were in the same vein as well.
“Do all these women really need to be ‘diagnosed,’ or can we monitor their symptoms and treat what is bothersome without a label? A diagnostic label doesn’t change the treatment, especially for young women whose PCOS symptoms might be transient and not require treatment at all.”
None of those interviewed for this article had any relevant financial disclosures.
[email protected]
On Twitter @Alz_Gal
Fueling the Alzheimer’s brain with fat
LONDON – A 3-month diet consisting of 70% fat improved cognition in Alzheimer’s disease patients better than any anti-amyloid drug that has ever been tested.
In a small pilot study, Alzheimer’s patients who followed the University of Kansas’s ketogenic diet program improved an average of 4 points on one of the most important cognitive assessments in dementia care, the Alzheimer’s Disease Assessment Scale–cognitive domain (ADAS-cog). Not only was this gain statistically significant, but it reached a level that clinical trialists believe to be clinically meaningful, and it was similar to the gains that won Food and Drug Administration approval for donepezil in 1996, according to Russell Swerdlow, MD, director of the University of Kansas Alzheimer’s Disease Center in Fairway.
“This is the most robust improvement in the ADAS-cog scale that I am aware of for an Alzheimer’s interventional trial,” said Dr. Swerdlow, who presented the study at the Alzheimer’s Association International Conference. “In some studies, patients decline along the lines of 5 points or so per year on this measure, so an improvement of 4 points is quite something.”
To put the results in perspective, donepezil was approved on a 4-point spread between the active and placebo arm over 3 months, said Dr. Swerdlow, who is also the Gene and Marge Sweeney Professor of Neurology at the university. Part of this difference was driven by a 2-point decline in the placebo group. Relative to its baseline, the treatment group improved, on average, by about 2 points.
But in the Ketogenic Diet Retention and Feasibility Trial (KDRAFT), also 3 months long, patients’ ADAS-cog scores didn’t decline at all. Everyone who stayed with the diet and kept on their baseline medications improved, although to varying degrees.
KDRAFT was very small, with just 10 patients completing the intervention, and lacked a comparator group, so the results should be interpreted extremely cautiously, Dr. Swerdlow said in an interview. “We have to be very careful about overinterpreting these findings. It’s a pilot study, and a small group, so we don’t know how genuine the finding is. But if it is true, it’s a big deal.”
Diet and dementia
Emerging evidence suggests that modifying diet can help prevent Alzheimer’s and may even help AD patients think and function better. But this research has largely focused on the heart-healthy diets already proven successful in preventing and treating hypertension, diabetes, and cardiovascular disease. Most notably, the Mediterranean-DASH Intervention for Neurodegenerative Delay (MIND) diet cut the risk of AD by up to 53% (Alzheimers Dement. 2015 Sep;11[9]:1007-14) and also slowed aging-related cognitive decline (Alzheimers Dement. 2015 Sep; 11[9]:1015-22).
MIND is a combination of the low-salt, plant-focused DASH diet, and the heart-healthy Mediterranean diet. It is a moderate-fat plan, with a ratio of 33% fat, 38% carbohydrates, and 26% protein. Ideally, only 3% of the fat should be saturated, so MIND draws on olive oil, nuts, and other foods with monounsaturated fats, largely eschewing animal fats. It’s generally considered fairly easy to follow, since it allows a wide variety of whole grains, beans, nuts, fruits, vegetables, salads, fish, and poultry. Butter, red meat, fried foods, full-fat dairy, and fast foods are strict no-nos.
A ketogenic diet, however, turns MIND on its head. With a 70% fat, 20% protein, 10% carbohydrate ratio, a typical ketogenic diet nearly eliminates most fruits, and virtually all starchy vegetables, beans, and grains. It does, however, incorporate a large amount of fat from many sources, including olive oil, butter, cream, eggs, nuts, all kinds of meat, and fish. For a ketogenic diet, Dr. Swerdlow said, the ratio of fat to protein and carbs is more critical than the source of the fat.
MIND was designed to prevent the cardiovascular and endocrine disorders that predispose to dementia over the long term. But a ketogenic diet for patients with Alzheimer’s acutely manipulates the brain’s energy metabolism system, forcing it to use ketone bodies instead of glucose for fuel.
In normal energy metabolism, carbohydrates provide a ready supply of glucose, the brain’s primary fuel. When carbs are limited or absent, serum insulin decreases and glucagon increases. This promotes lipolysis. Ketones (primarily beta-hydroxybutyrate and acetoacetate) are formed in the liver from the newly released fatty acids, and released into the circulation, including into the brain during times of decreased glucose availability – a state characteristic of Alzheimer’s disease.
Induced ketogenesis trial
Inducing ketosis through diet seems to help correct the normal, age-related decline in the brain’s ability to use glucose, said Stephen Cunnane, PhD, who also presented ketogenic intervention results at AAIC. “Cognitively normal, healthy older adults experience a 10% reduction in the brain’s ability to metabolize glucose compared to healthy young people,” he said in an interview. But this decline accelerates as Alzheimer’s hits. Those with early AD have a 20% decrement in glucose utilization, compared with healthy elders.
What’s more, Dr. Cunnane said, these decrements are region-specific. Deficits in glucose metabolism hit the thalamus, and temporal and parietal cortices – all pathologically important in AD – particularly hard. The brain glucose deficit isn’t unique to the elderly, or even to patients with AD – it also occurs in those who have a family history of the disease, who carry the APOE4 allele, those with presenilin-1 mutations, and those with insulin resistance and diabetes.
Changes in brain glucose metabolism can develop years before any cognitive symptoms manifest and seem to increase the risk of Alzheimer’s, said Dr. Cunnane of the Université de Sherbrooke, Que.
“We propose that this vicious cycle of presymptomatic glucose hypometabolism causes chronic brain energy deprivation, and might contribute to deteriorating neuronal function. That could cause a further decrease in the demand for glucose, leading to cognitive decline.”
“What doesn’t change, though, is the brain’s ability to take up ketone bodies,” he said. If anything, the brain appears to use ketones more efficiently as AD becomes established. “It’s almost like the brain is trying to rescue itself. If those cells were dead, they would not be able to take up ketones. Because they do, we think they are instead starving because of their inability to use glucose and that maybe we can rescue them with ketones before they die.”
At AAIC, Dr. Cunnane reported interim results of an investigation of induced ketogenesis in patients with mild cognitive impairment (MCI). The 6-month BENEFIC trial comprises 50 patients, randomized to either a daily nutritional supplement with 30 g medium chain triglycerides (MCT) in an unflavored, nondairy emulsion, or a fat-equivalent placebo drink. When consumed, the liver very quickly converts MCT fatty acids into ketone bodies, which then circulate throughout the body, including passing the blood-brain barrier.
All of the participants in the BENEFIC trial underwent brain PET scanning for both glucose and ketone uptake. Early results clearly showed that the MCI brains took up just as much acetoacetate as did the brains of cognitively normal young adults. And although the study wasn’t powered for a full cognitive assessment, Dr. Cunnane did present 6-month data on three measures in the MCI group: trail making time, verbal fluency, and the Boston Naming Test. In the active group on MCT, scores on all three measures improved “modestly” in direct correlation with brain ketone uptake. In the placebo group, scores remained unchanged.
“We don’t have enough people in the study to make any definitive statement about cognition, but it’s nice to see the trend going in the right direction,” Dr. Cunnane said. “I really think of this as a dose-finding study and a chance to demonstrate the safety and tolerability of a liquid MCT supplement in people with MCI. Our next study will use a 45 g per day supplement of MCT.”
Details of the KDRAFT study
The BENEFIC study looked only at the effects of an MCT supplement, which may not deliver all the metabolic benefits of a ketogenic diet. KDRAFT, however, employed both, and assessed not only cognitive outcomes and adverse effects, but the practical matter of whether AD patients and their caregivers could implement the diet and stick to it.
Couples recruited into the trial met with a dietitian who explained the importance of sticking with the strict fat:carb:protein ratio. It’s not easy to stay in that zone, Dr. Swerdlow said, and the MCT supplement really helps there.
“Adding the MCT, which is typically done for the ketogenic diet in epilepsy, increases the fat intake so you can tolerate a bit more carbohydrate and still remain in ketosis. MCT therefore makes it easier to successfully do the diet, if we define success by time in ketosis. Ultimately, it is an iterative diet. You check your urine, and if you are in ketosis, you are doing well. If you are not in ketosis, you have to increase your fat intake, decrease your carb intake, or both.”
The study comprised 15 patients (7 with very mild AD, 4 with mild, and 4 with moderate disease). All patients were instructed to remain on their current medications for Alzheimer’s disease for the duration of the study if they were taking any. All of the patients with moderate AD and one with very mild AD dropped out of the study within the first month, citing caregiver burden. The supplement was in the form of an oil, not an emulsion like the BENEFIC supplement, and it caused diarrhea and nausea in five subjects, although none discontinued because of that.
“We found that a slow titration of the oil could deal with the GI issues. Rather, the primary deal-breaker seemed to be the stress of planning the menus and preparing the meals.”
One patient discontinued his cholinesterase inhibitor during the study, for unknown reasons. His cognitive scores declined, but he was still included in the diet-compliant analysis.
The diet didn’t affect weight, blood pressure, insulin sensitivity or resistance, or glucose level, but the intervention was short-lived. Nor were there any significant changes in high-density, low-density, or total cholesterol. Liver enzymes were stable, too.
“The only thing that changed was that they really did increase their fat and decrease their carb intake,” Dr. Swerdlow said. Daily fat jumped from 91 g to 167 g, and carbs dropped from 201 g to 46 g.
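Those gram counts line up with what a 70/20/10 fat/protein/carbohydrate calorie split predicts. As a back-of-envelope check (not from the study itself; the 2,000 kcal daily intake is an assumed round figure), converting the target ratio into grams uses the standard 9 kcal/g for fat and 4 kcal/g for protein and carbohydrate:

```python
# Hypothetical arithmetic sketch: translate a 70/20/10 ketogenic calorie
# ratio into daily gram targets. The 2,000 kcal intake is an assumption
# for illustration, not a figure reported by KDRAFT.
KCAL_PER_GRAM = {"fat": 9, "protein": 4, "carb": 4}
TARGET_RATIO = {"fat": 0.70, "protein": 0.20, "carb": 0.10}

def grams_for(total_kcal: float) -> dict:
    """Grams of each macronutrient needed to hit the target calorie ratio."""
    return {m: round(total_kcal * TARGET_RATIO[m] / KCAL_PER_GRAM[m])
            for m in KCAL_PER_GRAM}

print(grams_for(2000))  # fat: 156 g, protein: 100 g, carb: 50 g
```

On that assumed intake, the targets of roughly 156 g fat and 50 g carbohydrate sit close to the 167 g and 46 g the participants actually reached.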
Almost everyone who stuck with the diet achieved and maintained ketosis during the study, although with varying degrees of success. “Many only had a trace amount of urinary ketones,” Dr. Swerdlow said. The investigators tracked serum beta hydroxybutyrate levels every month as well, and those measures also confirmed ketosis in the group as a whole, although some patients fluctuated in and out of the state.
The cognitive changes were striking, he said. In the 10-patient analysis, ADAS-cog scores improved by an average of 4.1 points. The results were better when Dr. Swerdlow excluded the patient who stopped his cholinesterase inhibitor medication. In that nine-patient group, the ADAS-cog improved an average of 5.3 points.
While urging caution over the small sample size and lack of a control comparator, Dr. Swerdlow expressed deep satisfaction over the outcomes. A clinician as well as a researcher, he is accustomed to the slow but inexorable decline of AD patients.
“I’m going to try to relate the impression you get in the clinic with these scores,” he said. “Very rarely, but sometimes, with a cholinesterase inhibitor in patients, we’ll see something like a 7-point change. That’s a fantastic response, an improvement you can see across the room. A change of 2 points really doesn’t look that much different, although caregivers will tell you there is a subtle change, maybe a little more focus. The average we got in our 10 subjects was a 4-point improvement. That’s impressive. And a 5-point change is like rolling the clock back by a year.”
The improvements didn’t last, though. A 1-month washout period followed the intervention. By the end, both ADAS-cog and Mini-Mental State Examination scores had returned to their baseline levels. At the end of the study, a few of the patients and their partners expressed their intent to resume the diet, but the investigators do not know whether this indeed happened. Still, the results are encouraging enough that, like Dr. Cunnane, Dr. Swerdlow hopes to conduct a larger, longer study – one that would include a control group.
Future investigations of the ketogenic diet in AD might do well to also include an exercise component, both researchers mentioned. In addition to starvation, ketogenic dieting, and MCT supplementation, exercise is an effective way to induce ketogenesis.
“Exercise produces ketones, but most importantly, it increases the capacity of the brain to use ketones,” Dr. Cunnane said. The connection may help explain some of the cognitive benefits seen in exercise trials in patients with MCI and AD.
“This raises the possibility that if in fact exercise benefits the brain, ketone bodies may mediate some of that effect,” Dr. Swerdlow said. “Could exercise potentiate the ketosis from the diet? That is possible, and maybe using these interventions in conjunction would be synergistic. At this point, we are just happy to show the diet is feasible, if even for a limited period.”
Implementing KDRAFT: Research team dishes the skinny on fats
The KDRAFT study diet is surprisingly flexible despite its strict ratio of fat to protein and carbohydrate, according to the University of Kansas research team that implemented it. It only took a few counseling sessions to get most study participants enthusiastically embracing the new eating plan, even one so radically different from the way they were accustomed to eating.
“We focused mainly on the macronutrient makeup,” said Matthew Taylor, PhD, who supervised the diet study on a day-to-day basis. Instead of distributing a rigid diet plan, with prespecified meals and snacks, “We talked more in general about foods they could have and foods they couldn’t have.”
“When people think ‘ketogenic,’ they think bacon, eggs, oil, butter and cream, and may have an automatic negative connotation that this is unhealthy eating,” Dr. Taylor said in an interview. “But yes, eggs were in there and, because a lot of people really like bacon, there was bacon, too!”
The educational sessions did include teaching about healthy and unhealthy fats, and Dr. Taylor “tried to steer people toward the healthier ones, like olive oil, avocados, and nuts. But I didn’t say, ‘Eat this one and not that one.’ If it took melting butter on vegetables to get to that fat ratio, I was not as concerned about where the fat came from as about getting there and maintaining ketosis.”
KDRAFT also had a twist that’s becoming more common among ketogenic eating plans: lots of vegetables. Dr. Taylor asked participants to concentrate on nonstarchy vegetables and forgo potatoes, corn, beans, and lima beans, although some people did enjoy peas occasionally.
“We used to think we had to restrict vegetables or people would go out of ketosis more easily. But that doesn’t seem to be true. We focused a lot on eating vegetables, and everyone increased their vegetable intake dramatically. We actually tried to use vegetables as a vehicle for fat. For example, people would roast Brussels sprouts or broccoli in olive oil and then put melted butter on it. It was pretty much, ‘Eat all the vegetables you can and put fat on them.’”
Fruits are full of sugar, so they are not liberally used in most ketogenic diets, but KDRAFT did allow one type: berries, and blueberries in particular. “We had people eating a couple of small handfuls of berries throughout the day and still being able to maintain ketosis. We did severely cut back on the amount and type of fruit people could have, but berries seemed to work well.”
Whipping cream had a place, too. “It fit really well in the diet, because it’s basically all fat,” Dr. Taylor said. “It’s used more often in pediatric ketogenic diets as a milk substitute. One thing our subjects liked to do was use it to make a sweet snack. All it takes is a packet of [stevia] sweetener and some vanilla. Then you whip and freeze it and it’s like an ice cream dessert.”
After the initial drop-outs, the remaining study pairs embraced the intervention enthusiastically.
“When the study partner took the diet on too, we had our best success. One of our last pairs had an entire family join in – children, grandchildren, everyone decided to follow the diet. That is a very helpful piece to this. It’s difficult to always say, ‘Here’s our normal food and here’s the keto food over here.’”
The dropouts occurred very early. These study pairs, all of whom included patients with moderate Alzheimer’s, never embraced the plan at all, and this is a telling point, Dr. Taylor noted.
“When you get to a level of dementia there are so many other things in the caregiving process that taking on big behavioral changes is very difficult.”
Although the study showed that the diet wasn’t practical for sicker patients at home, it still might be beneficial in other settings, said Debra Sullivan, PhD, RD. Dr. Sullivan chairs the department of dietetics and nutrition at the University of Kansas Medical Center and holds the Midwest Dairy Council Endowed Professorship in Clinical Nutrition.
“I think that we might be able to create a version of the diet that could be used in an institutional setting for our more advanced patients,” she said. “But there’s no denying that this can be challenging. It’s a big change from the way the typical American eats.”
None of the KDRAFT participants experienced any lipid changes, for better or worse. The 3-month intervention was long enough to have picked up such changes if they were in the offing, said principal investigator Russell Swerdlow, MD. While there are mixed data on ketogenic diets’ atherogenic effects, many people respond positively, with improved cholesterol.
“Much of what it comes down to is, are you in a catabolic or anabolic state? Are you building up or tearing down? Excessive cholesterol is a sign of being overfed and laying down energy supplies. You take in carbon and turn it into cholesterol. But if you can trick your body into a catabolic state – essentially make it think it’s starving, which a ketogenic diet does – then you have consistently low insulin levels, and you don’t turn on the cholesterol synthesis pathway. You may increase your cholesterol intake through diet, but you’re not synthesizing it in your body, and that synthesis is what really drives your cholesterol level. If you’re not overeating, your body’s production goes down.”
Brain Energy and Memory (BEAM) study
Dr. Swerdlow isn’t the only clinician researcher looking at how a ketogenic diet might influence cognition. Suzanne Craft, PhD, well known for her investigations of the role of insulin signaling and therapy in AD, is running a ketogenic diet trial as well.
As noted on clinicaltrials.gov, the 24-week Brain Energy and Memory (BEAM) study aims to recruit 25 subjects in two cohorts: adults with mild memory complaints, and cognitively normal adults with prediabetes. A comparator group of healthy controls will contribute cognitive assessments, blood and stool sample collection, neuroimaging, and lumbar puncture at baseline.
Both active groups will be randomized to 6 weeks of either a low-fat, high-carbohydrate diet, with carbs making up 50%-60% of daily caloric intake, or a modified ketogenic-Mediterranean Diet with carbs comprising less than 10% of daily caloric intake.
BEAM’s primary outcome will be changes in the AD cerebrospinal fluid biomarkers beta-amyloid and tau. Secondary endpoints include cognitive assessments, brain ketone uptake on PET scanning, and insulin sensitivity.
Dr. Cunnane has no financial interest in the MCT emulsion, which was supplied by Abitec. He reported conference travel support from Abitec, Nisshin OilliO, and Pruvit. He also reported receiving research project funding from Nestlé and Bulletproof.
Dr. Swerdlow had no financial disclosures.
[email protected]
On Twitter @alz_gal
In Alzheimer’s disease (AD), there are early significant deficits in glucose utilization that become increasingly severe as disease progresses.
Most reports from early-onset AD animal models find that these energy deficits are largely due to defects in mitochondrial complex IV and V, and possibly related to mitochondrial fusion and fission regulators. Animal models of tauopathy demonstrate Complex I deficits.
In AD-vulnerable brain regions with early glucose utilization deficits, surviving neurons show large reductions in mitochondrial complex I, IV, and V gene expression and proteins. These changes appear sufficient to contribute to cognitive deficits. These are not shared by nondemented individuals, even in the presence of AD pathology.
The precise causes of reduced glucose utilization in AD are unknown, but may reflect these mitochondrial deficits, as well as defective insulin signaling. These changes lead to adenosine triphosphate deficits and disruptions in the balance of NAD+/NADH, both of which are already altered by normal aging.
However, because metabolism is coupled to synaptic activity, it is difficult to ascertain whether these “bioenergetic” deficits are simply secondary to progressive neuron and synapse loss or a contributing factor to neuron and synapse loss and cognitive deficits.
One of the best ways to discern the contribution of bioenergetic deficits is to treat them. Many animal models and some small trials appear to show possible benefits from supplements directed at improving energy metabolism.
In the context of these known deficits in Alzheimer’s, the new positive results with the ketogenic diet reported by Dr. Swerdlow should not be ignored despite the small sample size and open-label design. The impressive 4- to 5-point improvement on the ADAS-cog that they saw is not easily achieved, and the rapid loss with washout suggests a real benefit with a large effect size.
Similarly, despite the study’s limitations with dose and size, Dr. Cunnane’s imaging of ketone body uptake and its correlation with cognitive improvement suggests that ameliorating energy deficits can be a real target capable of producing substantial short-term benefits for patients with Alzheimer’s.
Given the rapid results and large effect size, this is an area that needs to see more trials.
Gregory Cole, PhD, is a professor of neurology at the University of California, Los Angeles, and interim director of the Mary S. Easton Alzheimer Center. He had no relevant financial disclosures.
LONDON – A 3-month diet comprised of 70% fat improved cognition in Alzheimer’s disease patients better than any anti-amyloid drug that has ever been tested.
In a small pilot study, Alzheimer’s patients who followed the University of Kansas’s ketogenic diet program improved an average of 4 points on one of the most important cognitive assessments in dementia care, the Alzheimer’s Disease Assessment Scale–cognitive domain (ADAS-cog). Not only was this gain statistically significant, but it reached a level that clinical trialists believe to be clinically meaningful, and it was similar to the gains that won Food and Drug Administration approval for donepezil in 1996, according to Russell Swerdlow, MD, director of the University of Kansas Alzheimer’s Disease Center in Fairway.
“This is the most robust improvement in the ADAS-cog scale that I am aware of for an Alzheimer’s interventional trial,” said Dr. Swerdlow, who presented the study at the Alzheimer’s Association International Conference. “In some studies, patients decline along the lines of 5 points or so per year on this measure, so an improvement of 4 points is quite something.”
To put the results in perspective, donepezil was approved on a 4-point spread between the active and placebo arm over 3 months, said Dr. Swerdlow, who is also the Gene and Marge Sweeney Professor of Neurology at the university. Part of this difference was driven by a 2-point decline in the placebo group. Relative to its baseline, the treatment group improved, on average, by about 2 points.
But in the Ketogenic Diet Retention and Feasibility Trail (KDRAFT), also 3 months long, patients’ ADAS-cog scores didn’t decline at all. Everyone who stayed with the diet and kept on their baseline medications improved, although to varying degrees.
KDRAFT was very small, with just 10 patients completing the intervention, and lacked a comparator group, so the results should be interpreted extremely cautiously, Dr. Swerdlow said in an interview. “We have to be very careful about overinterpreting these findings. It’s a pilot study, and a small group, so we don’t know how genuine the finding is. But if it is true, it’s a big deal.”
Diet and dementia
Emerging evidence suggests that modifying diet can help prevent Alzheimer’s and may even help AD patients think and function better. But this research has largely focused on the heart-healthy diets already proven successful in preventing and treating hypertension, diabetes, and cardiovascular disease. Most notably, the Mediterranean-DASH Intervention for Neurodegenerative Delay (MIND) diet cut the risk of AD by up to 53% (Alzheimers Dement. 2015 Sep;11[9]:1007-14) and also slowed aging-related cognitive decline (Alzheimers Dement. 2015 Sep; 11[9]:1015-22).
MIND is a combination of the low-salt, plant-focused DASH diet, and the heart-healthy Mediterranean diet. It is a moderate-fat plan, with a ratio of 33% fat, 38% carbohydrates, and 26% protein. Ideally, only 3% of the fat should be saturated, so MIND draws on olive oil, nuts, and other foods with monounsaturated fats, largely eschewing animal fats. It’s generally considered fairly easy to follow, since it allows a wide variety of whole grains, beans, nuts, fruits, vegetables, salads, fish, and poultry. Butter, red meat, fried foods, full-fat dairy, and fast foods are strict no-nos.
A ketogenic diet, however, turns MIND on its head. With a 70% fat, 20% protein, 10% carbohydrate ratio, a typical ketogenic diet nearly eliminates most fruits, and virtually all starchy vegetables, beans, and grains. It does, however, incorporate a large amount of fat from many sources, including olive oil, butter, cream, eggs, nuts, all kinds of meat, and fish. For a ketogenic diet, Dr. Swerdlow said, the ratio of fat to protein and carbs is more critical than the source of the fat.
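To make that 70/20/10 calorie split concrete, the following sketch converts it into daily grams. The 2,000 kcal target is an assumed example figure, not one from the study; 9 and 4 kcal/g are the standard Atwater energy factors for fat and for protein/carbohydrate.

```python
# Illustrative arithmetic only: convert the 70% fat / 20% protein /
# 10% carbohydrate calorie split into daily grams. The 2,000 kcal
# intake is an assumed example value, not a study figure.
KCAL_PER_GRAM = {"fat": 9, "protein": 4, "carbohydrate": 4}
RATIO = {"fat": 0.70, "protein": 0.20, "carbohydrate": 0.10}

def grams_per_day(total_kcal):
    # grams = (calories allotted to the macronutrient) / (kcal per gram)
    return {m: round(total_kcal * share / KCAL_PER_GRAM[m], 1)
            for m, share in RATIO.items()}

print(grams_per_day(2000))
# fat: 1400 kcal / 9 ≈ 155.6 g; protein: 400 / 4 = 100 g; carbs: 200 / 4 = 50 g
```

The asymmetry is stark: at 9 kcal/g, fat supplies 70% of the calories from only about half the food mass, while carbohydrate is held to roughly 50 g a day.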
MIND was designed to prevent the cardiovascular and endocrine disorders that predispose to dementia over the long term. But a ketogenic diet for patients with Alzheimer’s acutely manipulates the brain’s energy metabolism system, forcing it to use ketone bodies instead of glucose for fuel.
In normal energy metabolism, carbohydrates provide a ready supply of glucose, the brain’s primary fuel. When carbs are limited or absent, serum insulin decreases and glucagon increases. This promotes lipolysis. Ketones (primarily beta-hydroxybutyrate and acetoacetate) are formed in the liver from the newly released fatty acids, and released into the circulation, including into the brain during times of decreased glucose availability – a state characteristic of Alzheimer’s disease.
Induced ketogenesis trial
Inducing ketosis through diet seems to help correct the normal, age-related decline in the brain’s ability to use glucose, said Stephen Cunnane, PhD, who also presented ketogenic intervention results at AAIC. “Cognitively normal, healthy older adults experience a 10% reduction in the brain’s ability to metabolize glucose compared to healthy young people,” he said in an interview. But this decline accelerates as Alzheimer’s hits. Those with early AD have a 20% decrement in glucose utilization, compared with healthy elders.
What’s more, Dr. Cunnane said, these decrements are region-specific. Deficits in glucose metabolism hit the thalamus, and temporal and parietal cortices – all pathologically important in AD – particularly hard. The brain glucose deficit isn’t unique to the elderly, or even to patients with AD – it also occurs in those who have a family history of the disease, who carry the APOE4 allele, those with presenilin-1 mutations, and those with insulin resistance and diabetes.
Changes in brain glucose metabolism can develop years before any cognitive symptoms manifest and seem to increase the risk of Alzheimer’s, said Dr. Cunnane of Sherbrooke University, Que.
“We propose that this vicious cycle of presymptomatic glucose hypometabolism causes chronic brain energy deprivation, and might contribute to deteriorating neuronal function. That could cause a further decrease in the demand for glucose, leading to cognitive decline.”
“What doesn’t change, though, is the brain’s ability to take up ketone bodies,” he said. If anything, the brain appears to use ketones more efficiently as AD becomes established. “It’s almost like the brain is trying to rescue itself. If those cells were dead, they would not be able to take up ketones. Because they do, we think they are instead starving because of their inability to use glucose and that maybe we can rescue them with ketones before they die.”
At AAIC, Dr. Cunnane reported interim results of an investigation of induced ketogenesis in patients with mild cognitive impairment (MCI). The 6-month BENEFIC trial comprises 50 patients, randomized to either a daily nutritional supplement with 30 g medium chain triglycerides (MCT) in an unflavored, nondairy emulsion, or a fat-equivalent placebo drink. When consumed, the liver very quickly converts MCT fatty acids into ketone bodies, which then circulate throughout the body, including passing the blood-brain barrier.
All of the participants in the BENEFIC trial underwent brain PET scanning for both glucose and ketone uptake. Early results clearly showed that the MCI brains took up just as much acetoacetate as did the brains of cognitively normal young adults. And although the study wasn’t powered for a full cognitive assessment, Dr. Cunnane did present 6-month data on three measures in the MCI group: trail making time, verbal fluency, and the Boston Naming Test. In the active group on MCT, scores on all three measures improved “modestly” in direct correlation with brain ketone uptake. In the placebo group, scores remained unchanged.
“We don’t have enough people in the study to make any definitive statement about cognition, but it’s nice to see the trend going in the right direction,” Dr. Cunnane said. “I really think of this as a dose-finding study and a chance to demonstrate the safety and tolerability of a liquid MCT supplement in people with MCI. Our next study will use a 45 g per day supplement of MCT.”
Details of the KDRAFT study
The BENEFIC study looked only at the effects of an MCT supplement, which may not deliver all the metabolic benefits of a ketogenic diet. KDRAFT, however, employed both, and assessed not only cognitive outcomes and adverse effects, but the practical matter of whether AD patients and their caregivers could implement the diet and stick to it.
Couples recruited into the trial met with a dietitian who explained the importance of sticking with the strict fat:carb:protein ratio. It’s not easy to stay in that zone, Dr. Swerdlow said, and the MCT supplement really helps there.
“Adding the MCT, which is typically done for the ketogenic diet in epilepsy, increases the fat intake so you can tolerate a bit more carbohydrate and still remain in ketosis. MCT therefore makes it easier to successfully do the diet, if we define success by time in ketosis. Ultimately, it is an iterative diet. You check your urine, and if you are in ketosis, you are doing well. If you are not in ketosis, you have to increase your fat intake, decrease your carb intake, or both.”
The study comprised 15 patients (7 with very mild AD, 4 with mild, and 4 with moderate disease). All patients were instructed to remain on their current medications for Alzheimer’s disease for the duration of the study if they were taking any. All of the patients with moderate AD and one with very mild AD dropped out of the study within the first month, citing caregiver burden. The supplement was in the form of an oil, not an emulsion like the BENEFIC supplement, and it caused diarrhea and nausea in five subjects, although none discontinued because of that.
“We found that a slow titration of the oil could deal with the GI issues. Rather, the primary deal-breaker seemed to be the stress of planning the menus and preparing the meals.”
One patient discontinued his cholinesterase inhibitor during the study, for unknown reasons. His cognitive scores declined, but he was still included in the diet-compliant analysis.
The diet didn’t affect weight, blood pressure, insulin sensitivity or resistance, or glucose level, although the intervention period was short. Nor were there any significant changes in high-density, low-density, or total cholesterol. Liver enzymes were stable, too.
“The only thing that changed was that they really did increase their fat and decrease their carb intake,” Dr. Swerdlow said. Daily fat jumped from 91 g to 167 g, and carbs dropped from 201 g to 46 g.
Almost everyone who stuck with the diet achieved and maintained ketosis during the study, although with varying degrees of success. “Many only had a trace amount of urinary ketones,” Dr. Swerdlow said. The investigators tracked serum beta hydroxybutyrate levels every month as well, and those measures also confirmed ketosis in the group as a whole, although some patients fluctuated in and out of the state.
The cognitive changes were striking, he said. In the 10-patient analysis, ADAS-cog scores improved by an average of 4.1 points. The results were better when Dr. Swerdlow excluded the patient who stopped his cholinesterase inhibitor medication. In that nine-patient group, the ADAS-cog improved an average of 5.3 points.
While urging caution over the small sample size and lack of a control comparator, Dr. Swerdlow expressed deep satisfaction over the outcomes. A clinician as well as a researcher, he is accustomed to the slow but inexorable decline of AD patients.
“I’m going to try to relate the impression you get in the clinic with these scores,” he said. “Very rarely, but sometimes, with a cholinesterase inhibitor in patients, we’ll see something like a 7-point change. That’s a fantastic response, an improvement you can see across the room. A change of 2 points really doesn’t look that much different, although caregivers will tell you there is a subtle change, maybe a little more focus. The average we got in our 10 subjects was a 4-point improvement. That’s impressive. And a 5-point change is like rolling the clock back by a year.”
The improvements didn’t last, though. A 1-month washout period followed the intervention. By the end, both ADAS-cog and Mini-Mental State Examination scores had returned to their baseline levels. At the end of the study, a few of the patients and their partners expressed their intent to resume the diet, but the investigators do not know whether this indeed happened. Still, the results are encouraging enough that, like Dr. Cunnane, Dr. Swerdlow hopes to conduct a larger, longer study – one that would include a control group.
Future investigations of the ketogenic diet in AD might do well to also include an exercise component, both researchers mentioned. In addition to starvation, ketogenic dieting, and MCT supplementation, exercise is an effective way to induce ketogenesis.
“Exercise produces ketones, but most importantly, it increases the capacity of the brain to use ketones,” Dr. Cunnane said. The connection may help explain some of the cognitive benefits seen in exercise trials in patients with MCI and AD.
“This raises the possibility that if in fact exercise benefits the brain, ketone bodies may mediate some of that effect,” Dr. Swerdlow said. “Could exercise potentiate the ketosis from the diet? That is possible, and maybe using these interventions in conjunction would be synergistic. At this point, we are just happy to show the diet is feasible, if even for a limited period.”
Implementing KDRAFT: Research team dishes the skinny on fats
The KDRAFT study diet is surprisingly flexible despite its strict ratio of fat to protein and carbohydrate, according to the University of Kansas research team that implemented it. It only took a few counseling sessions to get most study participants enthusiastically embracing the new eating plan, even one so radically different from the way they were accustomed to eating.
“We focused mainly on the macronutrient makeup,” said Matthew Taylor, PhD, who supervised the diet study on a day-to-day basis. Instead of distributing a rigid diet plan, with prespecified meals and snacks, “We talked more in general about foods they could have and foods they couldn’t have.”
“When people think ‘ketogenic,’ they think bacon, eggs, oil, butter and cream, and may have an automatic negative connotation that this is unhealthy eating,” Dr. Taylor said in an interview. “But yes, eggs were in there and, because a lot of people really like bacon, there was bacon, too!”
The educational sessions did include teaching about healthy and unhealthy fats, and Dr. Taylor “tried to steer people toward the healthier ones, like olive oil, avocados, and nuts. But I didn’t say, ‘Eat this one and not that one.’ If it took melting butter on vegetables to get to that fat ratio, I was not as concerned about where the fat came from as about getting there and maintaining ketosis.”
KDRAFT also had a twist that’s becoming more common among ketogenic eating plans: lots of vegetables. Dr. Taylor asked participants to concentrate on nonstarchy vegetables and forgo potatoes, corn, beans, and lima beans, although some people did enjoy peas occasionally.
“We used to think we had to restrict vegetables or people would go out of ketosis more easily. But that doesn’t seem to be true. We focused a lot on eating vegetables, and everyone increased their vegetable intake dramatically. We actually tried to use vegetables as a vehicle for fat. For example, people would roast Brussels sprouts or broccoli in olive oil and then put melted butter on it. It was pretty much, ‘Eat all the vegetables you can and put fat on them.’”
Fruits are full of sugar, so they are not liberally used in most ketogenic diets, but KDRAFT did allow one type: berries, and blueberries in particular. “We had people eating a couple of small handfuls of berries throughout the day and still being able to maintain ketosis. We did severely cut back on the amount and type of fruit people could have, but berries seemed to work well.”
Whipping cream had a place, too. “It fit really well in the diet, because it’s basically all fat,” Dr. Taylor said. “It’s used more often in pediatric ketogenic diets as a milk substitute. One thing our subjects liked to do was use it to make a sweet snack. All it takes is a packet of [stevia] sweetener and some vanilla. Then you whip and freeze it and it’s like an ice cream dessert.”
After the initial drop-outs, the remaining study pairs embraced the intervention enthusiastically.
“When the study partner took the diet on too, we had our best success. One of our last pairs had an entire family join in – children, grandchildren, everyone decided to follow the diet. That is a very helpful piece to this. It’s difficult to always say, ‘Here’s our normal food and here’s the keto food over here.’”
The dropouts occurred very early. These study pairs, all of whom included patients with moderate Alzheimer’s, never embraced the plan at all, and this is a telling point, Dr. Taylor noted.
“When you get to that level of dementia, there are so many other things in the caregiving process that taking on big behavioral changes is very difficult.”
Although the study showed that the diet wasn’t practical for sicker patients at home, it still might be beneficial in other settings, said Debra Sullivan, PhD, RD. Dr. Sullivan chairs the department of dietetics and nutrition at the University of Kansas Medical Center and holds the Midwest Dairy Council Endowed Professorship in Clinical Nutrition.
“I think that we might be able to create a version of the diet that could be used in an institutional setting for our more advanced patients,” she said. “But there’s no denying that this can be challenging. It’s a big change from the way the typical American eats.”
None of the KDRAFT participants experienced any lipid changes, for either better or worse. The 3-month intervention was long enough to have picked up such changes if they were in the offing, said principal investigator Russell Swerdlow, MD. While there are mixed data on ketogenic diets’ atherogenic effects, many people respond positively, with improved cholesterol.
“Much of what it comes down to is, are you in a catabolic or anabolic state? Are you building up or tearing down? Excessive cholesterol is a sign of being overfed and laying down energy supplies. You take in carbon and turn it into cholesterol. But if you can trick your body into a catabolic state – essentially make it think it’s starving, which a ketogenic diet does – then you have consistently low insulin levels, and you don’t turn on the cholesterol synthesis pathway. You may increase your cholesterol intake through diet, but you’re not synthesizing it in your body, and that synthesis is what really drives your cholesterol level. If you’re not overeating, your body’s production goes down.”
Brain Energy and Memory (BEAM) study
Dr. Swerdlow isn’t the only clinician researcher looking at how a ketogenic diet might influence cognition. Suzanne Craft, PhD, well known for her investigations of the role of insulin signaling and therapy in AD, is running a ketogenic diet trial as well.
As noted on clinicaltrials.gov, the 24-week Brain Energy and Memory (BEAM) study aimed to recruit 25 subjects in two cohorts: adults with mild memory complaints, and cognitively normal adults with prediabetes. A comparator group of healthy controls will contribute cognitive assessments, blood and stool sample collection, neuroimaging, and lumbar puncture at baseline.
Both active groups will be randomized to 6 weeks of either a low-fat, high-carbohydrate diet, with carbs making up 50%-60% of daily caloric intake, or a modified ketogenic-Mediterranean Diet with carbs comprising less than 10% of daily caloric intake.
BEAM’s primary outcome will be changes in the AD cerebrospinal fluid biomarkers beta-amyloid and tau. Secondary endpoints include cognitive assessments, brain ketone uptake on PET scanning, and insulin sensitivity.
Dr. Cunnane has no financial interest in the MCT emulsion, which was supplied by Abitec. He reported conference travel support from Abitec, Nisshin OilliO, and Pruvit. He also reported receiving research project funding from Nestlé and Bulletproof.
Dr. Swerdlow had no financial disclosures.
[email protected]
On Twitter @alz_gal
LONDON – A 3-month diet comprised of 70% fat improved cognition in Alzheimer’s disease patients better than any anti-amyloid drug that has ever been tested.
In a small pilot study, Alzheimer’s patients who followed the University of Kansas’s ketogenic diet program improved an average of 4 points on one of the most important cognitive assessments in dementia care, the Alzheimer’s Disease Assessment Scale–cognitive domain (ADAS-cog). Not only was this gain statistically significant, but it reached a level that clinical trialists believe to be clinically meaningful, and it was similar to the gains that won Food and Drug Administration approval for donepezil in 1996, according to Russell Swerdlow, MD, director of the University of Kansas Alzheimer’s Disease Center in Fairway.
“This is the most robust improvement in the ADAS-cog scale that I am aware of for an Alzheimer’s interventional trial,” said Dr. Swerdlow, who presented the study at the Alzheimer’s Association International Conference. “In some studies, patients decline along the lines of 5 points or so per year on this measure, so an improvement of 4 points is quite something.”
To put the results in perspective, donepezil was approved on a 4-point spread between the active and placebo arm over 3 months, said Dr. Swerdlow, who is also the Gene and Marge Sweeney Professor of Neurology at the university. Part of this difference was driven by a 2-point decline in the placebo group. Relative to its baseline, the treatment group improved, on average, by about 2 points.
But in the Ketogenic Diet Retention and Feasibility Trail (KDRAFT), also 3 months long, patients’ ADAS-cog scores didn’t decline at all. Everyone who stayed with the diet and kept on their baseline medications improved, although to varying degrees.
KDRAFT was very small, with just 10 patients completing the intervention, and lacked a comparator group, so the results should be interpreted extremely cautiously, Dr. Swerdlow said in an interview. “We have to very careful about overinterpreting these findings. It’s a pilot study, and a small group, so we don’t know how genuine the finding is. But if it is true, it’s a big deal.”
Diet and dementia
Emerging evidence suggests that modifying diet can help prevent Alzheimer’s and may even help AD patients think and function better. But this research has largely focused on the heart-healthy diets already proven successful in preventing and treating hypertension, diabetes, and cardiovascular disease. Most notably, the Mediterranean-DASH Intervention for Neurodegenerative Delay (MIND) diet cut the risk of AD by up to 53% (Alzheimers Dement. 2015 Sep;11[9]:1007-14) and also slowed aging-related cognitive decline (Alzheimers Dement. 2015 Sep; 11[9]:1015-22).
MIND is a combination of the low-salt, plant-focused DASH diet, and the heart-healthy Mediterranean diet. It is a moderate-fat plan, with a ratio of 33% fat, 38% carbohydrates, and 26% protein. Ideally, only 3% of the fat should be saturated, so MIND draws on olive oil, nuts, and other foods with monounsaturated fats, largely eschewing animal fats. It’s generally considered fairly easy to follow, since it allows a wide variety of whole grains, beans, nuts, fruits, vegetables, salads, fish, and poultry. Butter, red meat, fried foods, full-fat dairy, and fast foods are strict no-nos.
A ketogenic diet, however, turns MIND on its head. With a 70% fat, 20% protein, 10% carbohydrate ratio, a typical ketogenic diet nearly eliminates most fruits, and virtually all starchy vegetables, beans, and grains. It does, however, incorporate a large amount of fat from many sources, including olive oil, butter, cream, eggs, nuts, all kinds of meat, and fish. For a ketogenic diet, Dr. Swerdlow said, the ratio of fat to protein and carbs is more critical than the source of the fat.
MIND was designed to prevent the cardiovascular and endocrine disorders than predispose to dementia over the long term. But a ketogenic diet for patients with Alzheimer’s acutely manipulates the brain’s energy metabolism system, forcing it to use ketone bodies instead of glucose for fuel.
In normal energy metabolism, carbohydrates provide a ready supply of glucose, the brain’s primary fuel. When carbs are limited or absent, serum insulin decreases and glucagon increases. This promotes lipolysis. Ketones (primarily beta-hydroxybutyrate and acetoacetate) are formed in the liver from the newly released fatty acids, and released into the circulation, including into the brain during times of decreased glucose availability – a state characteristic of Alzheimer’s disease.
Induced ketogenesis trial
Inducing ketosis through diet seems to help correct the normal, age-related decline in the brain’s ability to use glucose, said Stephen Cunnane, PhD, who also presented ketogenic intervention results at AAIC. “Cognitively normal, healthy older adults experience a 10% reduction in the brain’s ability to metabolize glucose compared to healthy young people,” he said in an interview. But this decline accelerates as Alzheimer’s hits. Those with early AD have a 20% decrement in glucose utilization, compared with healthy elders.
What’s more, Dr. Cunnane said, these decrements are region-specific. Deficits in glucose metabolism hit the thalamus, and temporal and parietal cortices – all pathologically important in AD – particularly hard. The brain glucose deficit isn’t unique to the elderly, or even to patients with AD – it also occurs in those who have a family history of the disease, who carry the APOE4 allele, those with presenilin-1 mutations, and those with insulin resistance and diabetes.
Changes in brain glucose metabolism can develop years before any cognitive symptoms manifest and seem to increase the risk of Alzheimer’s, said Dr. Cunnane of Sherbrooke University, Que.
“We propose that this vicious cycle of presymptomatic glucose hypometabolism causes chronic brain energy deprivation, and might contribute to deteriorating neuronal function. That could cause a further decrease in the demand for glucose, leading to cognitive decline.”
“What doesn’t change, though, is the brain’s ability to take up ketone bodies,” he said. If anything, the brain appears to use ketones more efficiently as AD becomes established. “It’s almost like the brain is trying to rescue itself. If those cells were dead, they would not be able to take up ketones. Because they do, we think they are instead starving because of their inability to use glucose and that maybe we can rescue them with ketones before they die.”
At AAIC, Dr. Cunnane reported interim results of an investigation of induced ketogenesis in patients with mild cognitive impairment (MCI). The 6-month BENEFIC trial comprises 50 patients, randomized to either a daily nutritional supplement with 30 g medium chain triglycerides (MCT) in a unflavored, nondairy emulsion, or a fat-equivalent placebo drink. When consumed, the liver very quickly converts MCT fatty acids into ketone bodies, which then circulate throughout the body, including passing the blood-brain barrier.
All of the participants in the BENEFIC trial underwent brain PET scanning for both glucose and ketone uptake. Early results clearly showed that the MCI brains took up just as much acetoacetate as did the brains of cognitively normal young adults. And although the study wasn’t powered for a full cognitive assessment, Dr. Cunnane did present 6-month data on three measures in the MCI group: trail making time, verbal fluency, and the Boston Naming Test. In the active group on MCT, scores on all three measures improved “modestly” in direct correlation with brain ketone uptake. In the placebo group, scores remained unchanged.
“We don’t have enough people in the study to make any definitive statement about cognition, but it’s nice to see the trend going in the right direction, Dr. Cunnane said. “I really think of this as a dose-finding study and a chance to demonstrate the safety and tolerability of a liquid MCT supplement in people with MCI. Our next study will use a 45 g per day supplement of MCT.”
Details of the KDRAFT study
The BENEFIC study looked only at the effects of an MCT supplement, which may not deliver all the metabolic benefits of a ketogenic diet. KDRAFT, however, employed both, and assessed not only cognitive outcomes and adverse effects, but the practical matter of whether AD patients and their caregivers could implement the diet and stick to it.
Couples recruited into the trial met with a dietitian who explained the importance of sticking with the strict fat:carb:protein ratio. It’s not easy to stay in that zone, Dr. Swerdlow said, and the MCT supplement really helps there.
“Adding the MCT, which is typically done for the ketogenic diet in epilepsy, increases the fat intake so you can tolerate a bit more carbohydrate and still remain in ketosis. MCT therefore makes it easier to successfully do the diet, if we define success by time in ketosis. Ultimately, it is an iterative diet. You check your urine, and if you are in ketosis, you are doing well. If you are not in ketosis, you have to increase your fat intake, decrease your carb intake, or both.”
The study comprised 15 patients (7 with very mild AD, 4 with mild, and 4 with moderate disease). All patients were instructed to remain on their current medications for Alzheimer’s disease for the duration of the study if they were taking any. All of the patients with moderate AD and one with very mild AD dropped out of the study within the first month, citing caregiver burden. The supplement was in the form of an oil, not an emulsion like the BENEFIC supplement, and it caused diarrhea and nausea in five subjects, although none discontinued because of that.
“We found that a slow titration of the oil could deal with the GI issues. Rather, the primary deal-breaker seemed to be the stress of planning the menus and preparing the meals.”
One patient discontinued his cholinesterase inhibitor during the study, for unknown reasons. His cognitive scores declined, but was still included in the diet-compliant analysis.
The diet didn’t affect weight, blood pressure, insulin sensitivity or resistance, or glucose level, but the intervention was short-lived. Nor were there any significant changes in high-density, low-density, or total cholesterol. Liver enzymes were stable, too.
“The only thing that changed was that they really did increase their fat and decrease their carb intake,” Dr. Swerdlow said. Daily fat jumped from 91 g to 167 g, and carbs dropped from 201 g to 46 g.
Almost everyone who stuck with the diet achieved and maintained ketosis during the study, although with varying degrees of success. “Many only had a trace amount of urinary ketones,” Dr. Swerdlow said. The investigators tracked serum beta hydroxybutyrate levels every month as well, and those measures also confirmed ketosis in the group as a whole, although some patients fluctuated in and out of the state.
The cognitive changes were striking, he said. In the 10-patient analysis, ADAS-cog scores improved by an average of 4.1 points. The results were better when Dr. Swerdlow excluded the patient who stopped his cholinesterase inhibitor medication. In that nine-patient group, the ADAS-cog improved an average of 5.3 points.
While urging caution over the small sample size and lack of a control comparator, Dr. Swerdlow expressed deep satisfaction over the outcomes. A clinician as well as a researcher, he is accustomed to the slow but inexorable decline of AD patients.
“I’m going to try to relate the impression you get in the clinic with these scores,” he said. “Very rarely, but sometimes, with a cholinesterase inhibitor in patients, we’ll see something like a 7-point change. That’s a fantastic response, an improvement you can see across the room. A change of 2 points really doesn’t look that much different, although caregivers will tell you there is a subtle change, maybe a little more focus. The average we got in our 10 subjects was a 4-point improvement. That’s impressive. And a 5-point change is like rolling the clock back by a year.”
The improvements didn’t last, though. A 1-month washout period followed the intervention. By the end, both ADAS-cog and Mini-Mental State Examination scores had returned to their baseline levels. At the end of the study, a few of the patients and their partners expressed their intent to resume the diet, but the investigators do not know whether this indeed happened. Still, the results are encouraging enough that, like Dr. Cunnane, Dr. Swerdlow hopes to conduct a larger, longer study – one that would include a control group.
Future investigations of the ketogenic diet in AD might do well to also include an exercise component, both researchers mentioned. In addition to starvation, ketogenic dieting, and MCT supplementation, exercise is an effective way to induce ketogenesis.
“Exercise produces ketones, but most importantly, it increases the capacity of the brain to use ketones,” Dr. Cunnane said. The connection may help explain some of the cognitive benefits seen in exercise trials in patients with MCI and AD.
“This raises the possibility that if in fact exercise benefits the brain, ketone bodies may mediate some of that effect,” Dr. Swerdlow said. “Could exercise potentiate the ketosis from the diet? That is possible, and maybe using these interventions in conjunction would be synergistic. At this point, we are just happy to show the diet is feasible, if even for a limited period.”
Implementing KDRAFT: Research team dishes the skinny on fats
The KDRAFT study diet is surprisingly flexible despite its strict ratio of fat to protein and carbohydrate, according to the University of Kansas research team that implemented it. It only took a few counseling sessions to get most study participants enthusiastically embracing the new eating plan, even one so radically different from the way they were accustomed to eating.
“We focused mainly on the macronutrient makeup,” said Matthew Taylor, PhD, who supervised the diet study on a day-to-day basis. Instead of distributing a rigid diet plan, with prespecified meals and snacks, “We talked more in general about foods they could have and foods they couldn’t have.”
“When people think ‘ketogenic,’ they think bacon, eggs, oil, butter and cream, and may have an automatic negative connotation that this is unhealthy eating,” Dr. Taylor said in an interview. “But yes, eggs were in there and, because a lot of people really like bacon, there was bacon, too!”
The educational sessions did include teaching about healthy and unhealthy fats, and Dr. Taylor “tried to steer people toward the healthier ones, like olive oil, avocados, and nuts. But I didn’t say, ‘Eat this one and not that one.’ If it took melting butter on vegetables to get to that fat ratio, I was not as concerned about where the fat came from as about getting there and maintaining ketosis.”
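The arithmetic behind a fat ratio like this is straightforward. As an illustrative sketch (the ratio values below are examples, not KDRAFT’s exact prescription), the classic therapeutic ketogenic ratio counts grams of fat against grams of protein plus carbohydrate combined; with standard calorie densities, that ratio fixes the share of calories that must come from fat:

```python
# Illustrative arithmetic only, not the KDRAFT protocol itself.
# Atwater factors: fat ~9 kcal/g; protein and carbohydrate ~4 kcal/g.
FAT_KCAL_PER_G = 9
NONFAT_KCAL_PER_G = 4  # protein plus carbohydrate

def fat_calorie_share(ratio):
    """Fraction of calories from fat for a given fat:(protein+carb) gram ratio."""
    fat_kcal = ratio * FAT_KCAL_PER_G
    return fat_kcal / (fat_kcal + NONFAT_KCAL_PER_G)

# A classic 4:1 ratio puts 90% of calories into fat; even a looser
# 1:1 ratio still requires roughly 69% of calories from fat.
print(round(fat_calorie_share(4), 2))  # 0.9
print(round(fat_calorie_share(1), 2))  # 0.69
```

This is why butter on vegetables mattered less than hitting the ratio: at these fat shares, nearly every food has to carry added fat to keep the diet in range.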
KDRAFT also had a twist that’s becoming more common among ketogenic eating plans: lots of vegetables. Dr. Taylor asked participants to concentrate on nonstarchy vegetables and forgo potatoes, corn, beans, and lima beans, although some people did enjoy peas occasionally.
“We used to think we had to restrict vegetables or people would go out of ketosis more easily. But that doesn’t seem to be true. We focused a lot on eating vegetables, and everyone increased their vegetable intake dramatically. We actually tried to use vegetables as a vehicle for fat. For example, people would roast Brussels sprouts or broccoli in olive oil and then put melted butter on it. It was pretty much, ‘Eat all the vegetables you can and put fat on them.’”
Fruits are full of sugar, so they are not liberally used in most ketogenic diets, but KDRAFT did allow one type: berries, and blueberries in particular. “We had people eating a couple of small handfuls of berries throughout the day and still being able to maintain ketosis. We did severely cut back on the amount and type of fruit people could have, but berries seemed to work well.”
Whipping cream had a place, too. “It fit really well in the diet, because it’s basically all fat,” Dr. Taylor said. “It’s used more often in pediatric ketogenic diets as a milk substitute. One thing our subjects liked to do was use it to make a sweet snack. All it takes is a packet of [stevia] sweetener and some vanilla. Then you whip and freeze it and it’s like an ice cream dessert.”
After the initial drop-outs, the remaining study pairs embraced the intervention enthusiastically.
“When the study partner took the diet on too, we had our best success. One of our last pairs had an entire family join in – children, grandchildren, everyone decided to follow the diet. That is a very helpful piece to this. It’s difficult to always say, ‘Here’s our normal food and here’s the keto food over here.’”
The dropouts occurred very early. These study pairs, each of which included a patient with moderate Alzheimer’s, never embraced the plan at all, and that is a telling point, Dr. Taylor noted.
“When you get to a level of dementia there are so many other things in the caregiving process that taking on big behavioral changes is very difficult.”
Although the study showed that the diet wasn’t practical for sicker patients at home, it still might be beneficial in other settings, said Debra Sullivan, PhD, RD. Dr. Sullivan chairs the department of dietetics and nutrition at the University of Kansas Medical Center and holds the Midwest Dairy Council Endowed Professorship in Clinical Nutrition.
“I think that we might be able to create a version of the diet that could be used in an institutional setting for our more advanced patients,” she said. “But there’s no denying that this can be challenging. It’s a big change from the way the typical American eats.”
None of the KDRAFT participants experienced lipid changes, for better or for worse. The 3-month intervention was long enough to have picked up such changes if they were in the offing, said principal investigator Russell Swerdlow, MD. While there are mixed data on ketogenic diets’ atherogenic effects, many people respond positively, with improved cholesterol.
“Much of what it comes down to is, are you in a catabolic or an anabolic state? Are you building up or tearing down? Excessive cholesterol is a sign of being overfed and laying down energy supplies. You take in carbon and turn it into cholesterol. But if you can trick your body into a catabolic state – essentially make it think it’s starving, which a ketogenic diet does – then you have consistently low insulin levels, and you don’t turn on the cholesterol synthesis pathway. You may increase your cholesterol intake through diet, but you’re not synthesizing it in your body, and that synthesis is what really drives your cholesterol level. If you’re not overeating, your body’s production goes down.”
Brain Energy and Memory (BEAM) study
Dr. Swerdlow isn’t the only clinician researcher looking at how a ketogenic diet might influence cognition. Suzanne Craft, PhD, well known for her investigations of the role of insulin signaling and therapy in AD, is running a ketogenic diet trial as well.
As noted on clinicaltrials.gov, the 24-week Brain Energy and Memory (BEAM) study aims to recruit 25 subjects in two cohorts: adults with mild memory complaints, and cognitively normal adults with prediabetes. A comparator group of healthy controls will undergo cognitive assessments, blood and stool sample collection, neuroimaging, and lumbar puncture at baseline.
Both active groups will be randomized to 6 weeks of either a low-fat, high-carbohydrate diet, with carbs making up 50%-60% of daily caloric intake, or a modified ketogenic-Mediterranean Diet with carbs comprising less than 10% of daily caloric intake.
BEAM’s primary outcome will be changes in the AD cerebrospinal fluid biomarkers beta-amyloid and tau. Secondary endpoints include cognitive assessments, brain ketone uptake on PET scanning, and insulin sensitivity.
Dr. Cunnane has no financial interest in the MCT emulsion, which was supplied by Abitec. He reported conference travel support from Abitec, Nisshin OilliO, and Pruvit. He also reported receiving research project funding from Nestlé and Bulletproof.
Dr. Swerdlow had no financial disclosures.
[email protected]
On Twitter @alz_gal
AT AAIC 2017
Fewer severe hypoglycemia episodes seen over time in patients on tight control
Episodes of severe hypoglycemia became less frequent over a period of 26 years in patients with type 1 diabetes whose glucose was initially intensively managed to a hemoglobin A1c target of 7%, but, conversely, these problems increased in patients who had been managed to a conventional target of 9%.
At the end of the 20-year-long Epidemiology of Diabetes Interventions and Complications (EDIC) study, which used an 8% HbA1c target for everyone, rates of severe hypoglycemia were 37 cases per 100 patient-years in patients managed intensively in the preceding Diabetes Control and Complications Trial (DCCT) – a significant decrease from the 61 cases per 100 patient-years observed when the DCCT concluded. However, rates of severe hypoglycemia also increased in the conventionally managed group, from 19 to 41 cases per 100 patient-years, according to Rose A. Gubitosi-Klug, MD, PhD, and her coauthors (Diab Care. 2017;40[8]:1010-6).
“The equalization of rates between the two original treatment groups is largely attributable to their similar HbA1c levels during EDIC,” wrote Dr. Gubitosi-Klug, who is division chief of pediatric endocrinology at University Hospital, Cleveland Medical Center, and her coauthors. “[There was] a 13%-15% rise in severe hypoglycemia risk for every 10% decrement in HbA1c.”
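That per-decrement figure compounds across successive drops in HbA1c. As a rough sketch (the multiplicative, per-log-decrement form below is an assumption for illustration, not the model the paper fit), the relative risk implied by moving between the two original trial targets can be estimated:

```python
import math

# Hedged illustration of the authors' figure: a 13%-15% rise in severe
# hypoglycemia risk per 10% relative decrement in HbA1c. The compounding
# functional form here is our assumption, not the paper's model.
def risk_multiplier(hba1c_from, hba1c_to, rise_per_decrement=0.14):
    """Relative risk of severe hypoglycemia when HbA1c falls between two levels."""
    # Number of successive 10% relative decrements spanning the two levels
    n_decrements = math.log(hba1c_to / hba1c_from) / math.log(0.9)
    return (1 + rise_per_decrement) ** n_decrements

# Moving from the conventional 9% target to the intensive 7% target
# implies roughly a 37% higher risk under this sketch:
print(round(risk_multiplier(9.0, 7.0), 2))  # 1.37
```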
The DCCT enrolled 1,441 patients with type 1 diabetes from 1983 to 1989. They were either managed intensively, with a target HbA1c of 7%, or conventionally, with a 9% target. Specifically, during the DCCT, participants in the intensive group had lower current and mean HbA1c levels (~2% mean difference; P less than .001) as well as higher insulin doses than those in the conventional group (mean difference, 0.04 units/kg per day; P less than .001). During the DCCT, pump use across all quarterly visits averaged 35.7% in the intensive group versus 0.7% in the conventional group and rose to 41% and 1.6%, respectively, by the DCCT closeout (P less than .001).
Almost everyone in the DCCT then enrolled in the EDIC study, which ran from 1995 to 2013 and used an 8% HbA1c target. During EDIC, diabetes management evolved dramatically, with the introduction of rapid- and long-acting insulins and improved insulin pumps and blood glucose meters. Dr. Gubitosi-Klug and her colleagues examined how rates of severe hypoglycemia changed with those advances.
During the DCCT, intensively managed patients were three times more likely than those conventionally managed to experience an episode of severe hypoglycemia, including seizure or coma. During EDIC, the frequency of severe hypoglycemia increased in patients who had been conventionally managed but decreased among those who had been intensively managed.
When the DCCT ended with an average 6.5 years of follow-up, 65% of the intensive group and 35% of the conventional group had experienced at least one episode of severe hypoglycemia. But when the 20-year EDIC study ended, about half of the patients in each group had experienced at least one episode. Many experienced multiple events. In the DCCT, 54% of the intensive group and 30% of the conventional group had experienced at least four episodes. In EDIC, 37% of the intensive group and 33% of the conventional group had experienced at least four.
But the repeat events seemed to occur in a subset of patients, with 14% in the DCCT experiencing about half of the study’s severe hypoglycemic events, and 7% in EDIC experiencing about a third of them in that study.
The biggest risk factor for severe hypoglycemia was the same in both groups: a prior episode. A first incident doubled the risk for another in the conventional therapy group and tripled it in the intensive therapy group.
“The current data support the clinical perception that a small subset of individuals is more susceptible to severe hypoglycemia,” the authors wrote; the 7% of patients with 11 or more episodes during EDIC accounted for 32% of all the events in that study.
These events impart a risk of serious consequences, the authors said. There were 51 major accidents during the 6.5 years of the DCCT and 143 during the 20 years of EDIC, and these were similar between treatment groups. Most of these were motor vehicle accidents, and hypoglycemia was the possible, probable, or principal cause of 18 of the 28 in the DCCT and 23 of the 54 in EDIC.
Nevertheless, the finding that intensively managed patients did better over the years is encouraging, the authors noted. “Advancements in the tools for diabetes management and additional clinical trials have also demonstrated the importance of educational programs to support intensive diabetes therapy. Thus, with increasing years of experience, participants have likely benefitted from tailored educational efforts provided by treating physicians and certified diabetes educators to minimize hypoglycemia.”
None of the study authors reported any financial conflicts.
[email protected]
On Twitter @Alz_gal
FROM DIABETES CARE
Key clinical point:
Major finding: About half of the patients in the follow-up study, the Epidemiology of Diabetes Interventions and Complications trial, experienced at least one severe hypoglycemia event, regardless of their initial management.
Data source: The Diabetes Control and Complications Trial involving 1,441 patients, 97% of whom enrolled in the Epidemiology of Diabetes Interventions and Complications trial.
Disclosures: The authors had no financial disclosures.
Repeat blood cultures not useful in treating Gram-negative bacteremia
Follow-up blood cultures rarely provide useful clinical information in patients who are being treated for Gram-negative bacteremia, according to a study by Gabriel M. Aisenberg, MD, and his colleagues.
In a review of 140 Gram-negative bacteremia episodes, 17 follow-up blood cultures (FUBC) were required to identify one positive result, wrote Dr. Aisenberg of McGovern Medical School at the University of Texas Health Science Center in Houston. This was in stark contrast to the test’s utility in patients with Gram-positive infections, where one positive result was returned for every five cultures (Clin Infect Dis. 2017 July 26. doi: 10.1093/cid/cix648).
Dr. Aisenberg and his colleagues reviewed 500 bacteremias treated at a single center during 2015. The mean duration of bacteremia was about 3 days, with a mean follow-up time of 4.5 days. Most of the cases (206) were caused by Gram-positive cocci; 140 were due to Gram-negative bacilli, and 30 were polymicrobial.
Most patients (383; 77%) had at least one FUBC. Patients had an average of 2.3 FUBC, but the range was wide: Up to 12 cultures were performed for Gram-positive infections and up to six for Gram-negative infections.
Only 14% of the FUBC were positive, and most of these (78%) were for Gram-positive infections. Only eight cultures (15%) returned positive results for Gram-negative infections.
The mean duration of bacteremia was 3 days, and did not vary between Gram-positive, Gram-negative, or polymicrobial infections. The use of antibiotics wasn’t associated with a positive FUBC, although fever on the day of the test was. Urinary tract and severe skin infections were negatively associated with a positive FUBC, while IV catheter infections increased the risk. There were no associations between positive FUBC and mortality or ICU placement.
There are no guidelines describing the best use of FUBC in Gram-negative bacteremia, which are usually managed clinically, Dr. Aisenberg said.
“Even in Gram-negative bacteremia infections most prone to seeding the bloodstream, the bacteremia usually resolves within a short time after the institution of appropriate antibiotic therapy and/or source control,” he wrote. “Currently the management of [such infections] is determined by clinical judgment, allowing some clinicians to utilize blood cultures in an unrestricted way. Unrestrained use of blood cultures has serious implications for patient safety and health care costs,” driven by the strong likelihood of false positive results, which grows even stronger with repeat tests.
“As many as 90% of all blood cultures grow no organisms,” Dr. Aisenberg said. “Of the 10% that do, almost half are considered contaminants. Assuming a constant rate of contamination, the more FUBC performed, the higher the chance of encountering contamination, which may result in increased costs, longer hospital stays, unnecessary consultations, and inappropriate use of antibiotics.”
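The two quantities behind that argument are the “number needed to culture” implied by the reported yields, and the way the chance of catching at least one contaminant compounds with each repeat draw. A brief sketch (the 2% per-culture contamination rate below is a made-up illustration, not a figure from the study):

```python
# Hedged illustration of FUBC yield and contamination compounding.
def cultures_per_positive(yield_fraction):
    """Average number of follow-up cultures needed to obtain one positive."""
    return 1 / yield_fraction

def prob_any_contaminant(per_culture_rate, n_cultures):
    """Chance that at least one of n independent cultures is a contaminant."""
    return 1 - (1 - per_culture_rate) ** n_cultures

# Gram-negative yield (~1 in 17) versus Gram-positive (~1 in 5):
print(round(cultures_per_positive(1 / 17)))  # 17
print(round(cultures_per_positive(1 / 5)))   # 5

# At a hypothetical 2% contamination rate, six repeat cultures already
# carry about an 11% chance of at least one contaminated result:
print(round(prob_any_contaminant(0.02, 6), 2))  # 0.11
```

With a low true-positive yield and a fixed per-draw contamination risk, each added culture in Gram-negative disease is more likely to mislead than to inform, which is the core of Dr. Aisenberg’s case for restraint.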
Neither Dr. Aisenberg nor his colleagues had any financial disclosures.
FROM CLINICAL INFECTIOUS DISEASES
Key clinical point:
Major finding: Among 140 cases of Gram-negative bacteremia, 17 follow-up blood cultures were necessary to return one positive result.
Data source: A retrospective study comprising 500 infections.
Disclosures: None of the study authors reported financial disclosures.
Depression, PTSD double risk of dementia for older female veterans
LONDON – Women veterans with either depression or post-traumatic stress disorder face a doubling in their risk of dementia – and having both increases the risk even more, Kristine Yaffe, MD, reported at the Alzheimer’s Association International Conference.
The risk ratios for incident dementia that Dr. Yaffe of the University of California, San Francisco, and her colleagues calculated from their analysis of a cohort of 149,000 older female veterans in the national Veterans Health Administration (VHA) database remained unchanged even when they adjusted for age, education, medical comorbidities, and other confounders.
“Our work tells us that older women veterans with depression or PTSD [post-traumatic stress disorder] should perhaps be monitored more closely or screened for dementia. The question now, really, is would treatment for depression or PTSD somehow delay this? I don’t think we have the answer. It’s an important question, though. And of course, we need to understand the underlying mechanism here, which may someday inform treatment.”
Not only are older women veterans a growing group; they are frequently diagnosed with mental health disorders. In 2012, 45% of women veteran patients in the VHA had a mental health condition, Dr. Yaffe noted.
“Over 9% of all veterans in the U.S. are women, accounting for more than 2 million women veterans. And 30% of those are more than 55 years old. Additionally, the number of women utilizing the Veterans Healthcare Administration system has nearly doubled in the last decade.”
The study of the impact of depression and PTSD on incident dementia is the first of its kind, Dr. Yaffe noted. The cohort comprised women without dementia who had at least two VHA visits during 2005-2015. They were followed for a mean of 5 years. A diagnosis of depression or PTSD had to occur during a 2-year baseline period. Confounders considered in the analysis were demographics, medical comorbidities, and health habits, including alcohol and tobacco use. The primary outcome was time to incident dementia.
At baseline, the group was a mean of 67 years old. Most subjects (70%) were white. Hypertension was common (46%), as was diabetes (16%). About 6% had cardiovascular disease. Depression was present in 18% and PTSD in 4%.
When parsed by diagnosis, there were some significant between-group differences at baseline. Women with depression or PTSD were younger than those without (65 and 63 vs. 67 years). Women who had both disorders were the youngest group, at 62 years.
Hypertension was least common in women without depression or PTSD (41%), and most common among those with depression (65%). Diabetes was also markedly more common among women with depression than among those without (24% vs. 14%).
Dr. Yaffe created two regression analyses. Model 1 controlled for age, race, and education. Model 2 controlled for the factors in Model 1, plus diabetes, hypertension, and cardiovascular disease.
By the end of follow-up, 4% of the group had developed dementia. The presence of depression approximately doubled the risk of dementia (hazard ratio, 2.14), compared with women who had neither depression nor PTSD. This risk was virtually unchanged in both Model 1 and Model 2 (HRs, 2.12 and 2.00).
The risk associated with PTSD was quite similar, increasing the risk of dementia twofold (HR, 2.19). Again, this was similar after controlling for the confounders in both Model 1 (HR, 2.20) and Model 2 (HR, 2.16).
Women with both depression and PTSD had almost a tripling of risk for dementia (HR, 2.71). Adjustment for confounders did not significantly alter this risk, either in Model 1 (HR, 2.59) or Model 2 (HR, 2.42).
“A question that often comes up in these types of studies is, ‘Is this a reverse causation?’ ” Dr. Yaffe said. “In other words, are people with dementia somehow getting more depression? We conducted a lag-time analysis that allowed a 2-year lag time for dementia, and also adjusted for the number of clinic visits. The results were almost identical.”
“This consistent doubling of risk is quite high,” Dr. Yaffe said. “In our prior work with male veterans, we didn’t see this robust an association.”
The study was funded by the Department of Defense and the National Institutes of Health. Dr. Yaffe had no financial disclosures.
Correction, 8/7/17: An earlier version of this article misstated Dr. Kristine Yaffe's degree.
[email protected]
On Twitter @alz_gal
AT AAIC 2017
Key clinical point:
Major finding: Depression or PTSD both doubled the risk of dementia; both conditions together increased the risk by almost 2.5 times.
Data source: The retrospective cohort study comprised 149,000 women in the national Veterans Health Administration database.
Disclosures: The Department of Defense and the National Institutes of Health funded the study. The presenter had no financial disclosures.