Wisdom may counter loneliness, burnout in older adults
Wisdom increases with age, and although this personality trait is regarded as nebulous by many, there is evidence that it has biological and neuropsychiatric underpinnings. It could even hold the key to reducing loneliness and burnout among older people.
Those were some of the key messages delivered by Tanya T. Nguyen, PhD, of the department of psychiatry at the University of California, San Diego, who spoke at a virtual meeting presented by Current Psychiatry and the American Academy of Clinical Psychiatrists.
“To many people, wisdom remains a fuzzy concept that’s difficult to operationalize and measure. It’s analogous to the concepts of consciousness, emotions, and cognitions, which at one point were considered nonscientific, but today we accept them as biological and scientific entities,” Dr. Nguyen said during her talk at the meeting presented by MedscapeLive. MedscapeLive and this news organization are owned by the same parent company.
Interest in quantifying and studying wisdom has picked up in recent years, and Dr. Nguyen gave a definition with six elements that includes prosocial behaviors such as empathy and compassion, as well as emotional regulation, self-reflection, decisiveness, and social decision-making. She also included a spirituality component, though she conceded that this is controversial.
She noted that there are cultural variations in the definition of wisdom, but it has changed little over time, suggesting that it may be biological rather than cultural in nature, and therefore may have a neuropsychiatric underpinning.
Loss of some or all characteristics of wisdom occurs in some behaviors and disorders, including most markedly in the neurodegenerative disorder frontotemporal dementia (FTD), which is characterized by damage only in the prefrontal cortex and anterior temporal lobes. It usually occurs before age 60, and patients exhibit poor social awareness, impulsivity, antisocial behavior, and a lack of insight and empathy.
This and other lines of evidence have led to the suggestion that wisdom may be governed by processes in the prefrontal cortex and the limbic striatum. The prefrontal cortex controls executive functions such as planning, predicting, and anticipating events, as well as managing emotional reactions and impulses. “Thus, wisdom involves parts of the brain that balance cold, hard analytical reasoning with primitive desires and drives, which ultimately leads to self-regulation, social insight, theory of mind, and empathy,” said Dr. Nguyen.
Wisdom has long been associated with age, but age is also linked to cognitive decline. A recent discovery that the brain does not stop evolving in older age may help explain this contradiction. Brains develop in a back-to-front order, so the prefrontal cortex is the last region to mature. As we age, neural activity shifts from the occipital lobes to the prefrontal cortex and its executive decision-making power.
“The brain may recruit higher-order networks to the prefrontal cortex that are associated with wisdom development,” said Dr. Nguyen. She also pointed out that asymmetry between the left and right hemisphere is reduced with age, as tasks that relied on circuits from one hemisphere or another more often call upon both. “In order to make up for lost synapses and neurons with aging, active older adults use more neuronal networks from both hemispheres to perform the same mental activity,” Dr. Nguyen said.
Some interventions can improve scores in traits associated with wisdom in older adults, and could be an important contributor to improvements in health and longevity, said Dr. Nguyen. Randomized, controlled trials have demonstrated that psychosocial or behavioral interventions can improve elements of wisdom such as prosocial behaviors and emotional regulation, both in people with mental illness and in the general population, with moderate to large effect sizes. But such studies don’t prove an effect on overall wisdom.
One such intervention achieved positive results among 89 participants in senior housing communities, though the effect sizes were small, possibly because of high baseline resilience. A subanalysis suggested that the reduction in loneliness was mediated by an increase in compassion.
“One of the most striking findings from our research on wisdom is this consistent and very strongly negative correlation between wisdom and loneliness,” Dr. Nguyen said. She highlighted other U.S. nationwide and cross-cultural studies that showed inverse relationships between loneliness and wisdom.
Loneliness is an important topic because it can contribute to burnout and suicide.
“Loneliness has a profound effect on how we show up in the workplace, in school, and in our communities. And that leads to anxiety, depression, depersonalization, and emotional fatigue. All are key features of burnout. And together loneliness and burnout have contributed to increased rates of suicide by 30%, and opioid-related deaths almost sixfold since the late 1990s,” Dr. Nguyen said.
Loneliness also is associated with worse physical health, and it may be linked to wisdom. “Loneliness can be conceptualized as being caused and maintained by objective circumstances, such as physical or social distancing, and by thoughts, behaviors, and feelings surrounding those experiences, including biased perceptions of social relations, and a negative assessment of one’s social skills, which then results in a discrepancy between one’s desired and perceived social relationships, which then can contribute to social withdrawal,” Dr. Nguyen said.
Dr. Nguyen highlighted the AARP Foundation’s Experience Corps program, which recruits older adults to act as mentors and tutors for children in kindergarten through third grade. It involves 15 hours per week over an entire school year, with a focus on child literacy, development, and behavioral management skills. A study revealed a significant impact. “It showed improvements in children’s grades and happiness, as well as seniors’ mental and physical health,” Dr. Nguyen said.
Dr. Nguyen concluded that wisdom “may be a vaccine against compassion fatigue and burnout that drive today’s behavioral epidemics of loneliness, opioid abuse, and suicide. It’s a tool for our times. It’s nuanced, flexible, pragmatic, compassionate, and it presents a reasonable framework for getting along in the often messy world that we all share.”
Implications for psychiatrists
Henry A. Nasrallah, MD, who organized the conference, suggested that the benefits of wisdom may not be limited to patients. He pointed out that surgeons often retire at age 60 or 65 because of declining physical skills, while psychiatrists continue to practice.
“We develop more wisdom and better skills, and we can practice into our 60s and 70s. I know psychiatrists who practice sometimes into their 80s. It’s really a wonderful thing to know that what you do in life develops or enhances the neuroplasticity of certain brain regions. In our case, in psychiatry, it is the brain regions involved in wisdom,” commented Dr. Nasrallah, who is a professor of psychiatry, neurology, and neuroscience at the University of Cincinnati.
Dr. Nguyen has no financial disclosures. Dr. Nasrallah has received grants from Abbott, AstraZeneca, Forest, Janssen, Lilly, Pfizer, and Shire, and advises Abbott, AstraZeneca, and Shire.
REPORTING FROM FOCUS ON NEUROPSYCHIATRY 2021
Western diet promoted skin, joint inflammation in preclinical study
A short-term Western diet facilitated the development of interleukin (IL)-23-mediated psoriasis-like skin and joint inflammation and caused shifts in the intestinal microbiota in a murine model, say the investigators and other experts who reviewed the findings.
The mice did not become obese during the short duration of the multilayered study, which suggests that a Western diet (high sugar, moderate fat) can be impactful independent of obesity, Samuel T. Hwang, MD, PhD, professor and chair of dermatology at the University of California, Davis, and senior author of the study, said in an interview. The study was published in the Journal of Investigative Dermatology.
In an accompanying commentary, Renuka R. Nayak, MD, PhD, of the department of rheumatology at the University of California, San Francisco, wrote that the findings “add to the mounting evidence suggesting that diet has a prominent role in the treatment of psoriasis and [psoriatic arthritis] and raise the possibility that the microbiome may contribute to disease severity.”
Mice were fed a Western diet (WD) or conventional chow diet for 6 weeks and then injected with IL-23 minicircle (MC) DNA to induce systemic IL-23 overexpression – or a control minicircle DNA injection – and continued on these diets for another 4 weeks.
The mice in the WD/IL-23 MC DNA group developed erythema, scaling, and increased epidermal thickness in the ears; such changes were “remarkably milder” or nonexistent in the other groups. Skin and joint immune cell populations, such as gamma delta T cells and neutrophils, as well as T helper type 17 cytokines, were elevated in WD-fed mice, as were other markers of IL-23-mediated joint inflammation.
Recent research has suggested that the gut microbiota is dysbiotic in patients with psoriasis, and this new study found that WD-fed mice had less microbial diversity than mice fed a conventional diet. After IL-23 MC delivery, WD-fed mice showed reduced microbial diversity and pronounced dysbiosis.
“When we combined the Western diet and IL-23, we saw some very different microbes in abundance. The whole landscape changed,” Dr. Hwang said in the interview.
The data “suggest that WD and overexpression of IL-23 may contribute to gut microbiota dysbiosis in a synergistic and complex manner,” he and his coinvestigators wrote.
Treatment with broad-spectrum antibiotics suppressed IL-23-mediated skin and joint inflammation in the WD-fed mice – and moderately affected skin inflammation in conventionally fed mice as well – which affirmed the role of dysbiosis.
And “notably,” in another layer of the study, mice that switched diets from a WD to a conventional diet had reduced skin and joint inflammation and increased diversity of gut microbiota. (Mice that were fed a WD for 6 weeks and given the IL-23 MC DNA were randomized to continue this diet for another 4 weeks or switch to a conventional diet.)
Commenting on the new research, Wilson Liao, MD, professor and vice chair of research in the department of dermatology at the University of California, San Francisco, said it “provides evidence” that diet can affect not only psoriasis, but psoriatic arthritis (PsA) as well, “through altering the ratio of good to bad bacteria in the gut.”
Going forward, better understanding “which specific gut bacteria and bacterial products lead to increased psoriatic inflammation, and the immunologic mechanism by which this occurs” will be important and could lead to novel treatments for psoriasis and PsA, said Dr. Liao, director of the UCSF Psoriasis and Skin Treatment Center.
Next on his research agenda, Dr. Hwang said, is the question of “how microbiota in the gut are actually able to influence inflammation at very distant sites in the joints and the skin.
“We want to understand the metabolic mechanisms,” he said, noting that “we invariably talk about cytokines, but there are other substances, like certain bile acids that are metabolized through the gut microbiome,” which may play a role.
The findings also offer a basis for treatment experiments in humans – of diet, probiotic therapy, or selective antibiotic modulation, for instance, Dr. Hwang said.
And in the meantime, the findings should encourage patients who are interested in making dietary changes, such as reducing sugar intake. “There’s wide interest – patients will ask, is there something I can change to make this better?” Dr. Hwang said. “Before, we could say it might be logical, but now we have some evidence. The message now is [high-sugar, moderate-fat] diets, apart from their ability to stimulate obesity, probably have some effects.”
Dietary change may not replace the need for other psoriasis treatments, he said, “but I think there’s good reason to believe that if you do change your diet, your treatment will be better than it would be without that dietary change.”
In their discussion, Dr. Hwang and coauthors note that WD with IL-23 overexpression also decreased the mRNA expression of barrier-forming tight junction proteins, thus increasing intestinal permeability. This finding may be relevant, they wrote, because “leaky gut has been proposed as a pathogenic link between unhealthy diet, gut dysbiosis, and enhanced immune response,” and has been observed in a number of autoimmune diseases, including psoriasis.
Dr. Hwang, lead author Zhenrui Shi, MD, PhD, and coauthors reported no conflicts of interest. Their study was supported by the National Psoriasis Foundation, as well as the National Institutes of Health/National Institute of Arthritis and Musculoskeletal and Skin Diseases, and the National Cancer Institute.
A short-term Western diet facilitated the development of interleukin (IL)-23-mediated psoriasis-like skin and joint inflammation and caused shifts in the intestinal microbiota in a murine model –
, say the investigators and other experts who reviewed the findings.The mice did not become obese during the short duration of the multilayered study, which suggests that a Western diet (high sugar, moderate fat) can be impactful independent of obesity, Samuel T. Hwang, MD, PhD, professor and chair of dermatology at the University of California, Davis, and senior author of the study, said in an interview. The study was published in the Journal of Investigative Dermatology.
In an accompanying commentary, Renuka R. Nayak, MD, PhD, of the department of rheumatology at the University of California, San Francisco, wrote that the findings “add to the mounting evidence suggesting that diet has a prominent role in the treatment of psoriasis and [psoriatic arthritis] and raise the possibility that the microbiome may contribute to disease severity”.
Mice were fed a Western diet (WD) or conventional chow diet for 6 weeks and then injected with IL-23 minicircle (MC) DNA to induce systemic IL-23 overexpression – or a control minicircle DNA injection – and continued on these diets for another 4 weeks.
The mice in the WD/IL-23 MC DNA group developed erythema and scaling and increased epidermal thickness in the ears; such changes were “remarkably milder” or nonexistent in the other groups. Skin and joint immune cell populations, such as gamma delta T cells, neutrophils, and T helper type 17 cytokines were elevated in WD-fed mice, as were other markers of IL-23-mediated joint inflammation.
Recent research has suggested that the gut microbiota is dysbiotic in patients with psoriasis, and this new study found that WD-fed mice had less microbial diversity than that of mice fed a conventional diet. After IL-23 MC delivery, WD-fed reduced microbial diversity and pronounced dysbiosis.
“When we combined the Western diet and IL-23, we saw some very different microbes in abundance. The whole landscape changed,” Dr. Hwang said in the interview.
The data “suggest that WD and overexpression of IL-23 may contribute to gut microbiota dysbiosis in a synergistic and complex manner,” he and his coinvestigators wrote.
Treatment with broad-spectrum antibiotics suppressed IL-23-mediated skin and joint inflammation in the WD-fed mice – and moderately affected skin inflammation in conventionally-fed mice as well – which affirmed the role of dysbiosis.
A short-term Western diet facilitated the development of interleukin (IL)-23-mediated psoriasis-like skin and joint inflammation and caused shifts in the intestinal microbiota in a murine model, say the investigators and other experts who reviewed the findings.
The mice did not become obese during the short duration of the multilayered study, which suggests that a Western diet (high in sugar, moderate in fat) can have an effect independent of obesity, Samuel T. Hwang, MD, PhD, professor and chair of dermatology at the University of California, Davis, and senior author of the study, said in an interview. The study was published in the Journal of Investigative Dermatology.
In an accompanying commentary, Renuka R. Nayak, MD, PhD, of the department of rheumatology at the University of California, San Francisco, wrote that the findings “add to the mounting evidence suggesting that diet has a prominent role in the treatment of psoriasis and [psoriatic arthritis] and raise the possibility that the microbiome may contribute to disease severity.”
Mice were fed a Western diet (WD) or conventional chow diet for 6 weeks and then injected with IL-23 minicircle (MC) DNA to induce systemic IL-23 overexpression – or a control minicircle DNA injection – and continued on these diets for another 4 weeks.
The mice in the WD/IL-23 MC DNA group developed erythema, scaling, and increased epidermal thickness in the ears; such changes were “remarkably milder” or nonexistent in the other groups. Immune cell populations such as gamma delta T cells and neutrophils, along with T helper type 17 cytokines, were elevated in the skin and joints of WD-fed mice, as were other markers of IL-23-mediated joint inflammation.
Recent research has suggested that the gut microbiota is dysbiotic in patients with psoriasis, and this new study found that WD-fed mice had less microbial diversity than mice fed a conventional diet. After IL-23 MC delivery, WD-fed mice showed further reduced microbial diversity and pronounced dysbiosis.
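The “reduced microbial diversity” reported in studies like this is typically quantified with an index such as Shannon diversity computed over taxon abundances. As a minimal sketch of the idea (the abundance profiles below are made-up illustrations, not data from the study):

```python
import math

def shannon_diversity(counts):
    """Shannon diversity H = -sum(p_i * ln(p_i)) over taxon proportions.

    Higher H means abundance is spread more evenly across taxa;
    dysbiosis, with a few dominant taxa, drives H down.
    """
    total = sum(counts)
    props = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in props)

# Hypothetical taxon counts: an even community vs. one dominated
# by a few taxa (as described for the WD-fed, dysbiotic mice).
even_community = [25, 25, 25, 25]
dominated_community = [85, 10, 3, 2]

print(shannon_diversity(even_community))      # maximal for 4 taxa: ln(4)
print(shannon_diversity(dominated_community)) # lower: uneven abundances
```

A perfectly even four-taxon community reaches the maximum of ln(4) ≈ 1.386, while the skewed profile scores well below it, which is the pattern described for the WD-fed mice.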
“When we combined the Western diet and IL-23, we saw some very different microbes in abundance. The whole landscape changed,” Dr. Hwang said in the interview.
The data “suggest that WD and overexpression of IL-23 may contribute to gut microbiota dysbiosis in a synergistic and complex manner,” he and his coinvestigators wrote.
Treatment with broad-spectrum antibiotics suppressed IL-23-mediated skin and joint inflammation in the WD-fed mice – and moderately affected skin inflammation in conventionally-fed mice as well – which affirmed the role of dysbiosis.
And “notably,” in another layer of the study, mice that switched diets from a WD to a conventional diet had reduced skin and joint inflammation and increased diversity of gut microbiota. (Mice that were fed a WD for 6 weeks and given the IL-23 MC DNA were randomized to continue this diet for another 4 weeks or switch to a conventional diet.)
Commenting on the new research, Wilson Liao, MD, professor and vice chair of research in the department of dermatology at the University of California, San Francisco, said it “provides evidence” that diet can affect not only psoriasis, but psoriatic arthritis (PsA) as well, “through altering the ratio of good to bad bacteria in the gut.”
Going forward, better understanding “which specific gut bacteria and bacterial products lead to increased psoriatic inflammation, and the immunologic mechanism by which this occurs” will be important and could lead to novel treatments for psoriasis and PsA, said Dr. Liao, director of the UCSF Psoriasis and Skin Treatment Center.
Next on his research agenda, Dr. Hwang said, is the question of “how microbiota in the gut are actually able to influence inflammation at very distant sites in the joints and the skin.
“We want to understand the metabolic mechanisms,” he said, noting that “we invariably talk about cytokines, but there are other substances, like certain bile acids that are metabolized through the gut microbiome,” which may play a role.
The findings also offer a basis for treatment experiments in humans – of diet, probiotic therapy, or selective antibiotic modulation, for instance, Dr. Hwang said.
And in the meantime, the findings should encourage patients who are interested in making dietary changes, such as reducing sugar intake. “There’s wide interest – patients will ask, is there something I can change to make this better?” Dr. Hwang said. “Before, we could say it might be logical, but now we have some evidence. The message now is [high-sugar, moderate-fat] diets, apart from their ability to stimulate obesity, probably have some effects.”
Dietary change may not replace the need for other psoriasis treatments, he said, “but I think there’s good reason to believe that if you do change your diet, your treatment will be better than it would be without that dietary change,” he said.
In their discussion, Dr. Hwang and coauthors note that WD with IL-23 overexpression also decreased the mRNA expression of barrier-forming tight junction proteins, thus increasing intestinal permeability. This finding may be relevant, they wrote, because “leaky gut has been proposed as a pathogenic link between unhealthy diet, gut dysbiosis, and enhanced immune response,” and has been observed in a number of autoimmune diseases, including psoriasis.
Dr. Hwang, lead author Zhenrui Shi, MD, PhD, and coauthors reported no conflicts of interest. Their study was supported by the National Psoriasis Foundation, as well as the National Institutes of Health/National Institute of Arthritis and Musculoskeletal and Skin Diseases, and the National Cancer Institute.
FROM THE JOURNAL OF INVESTIGATIVE DERMATOLOGY
CDC officially endorses third dose of mRNA vaccines for immunocompromised
Centers for Disease Control and Prevention Director Rochelle Walensky, MD, has officially signed off on a recommendation by an independent panel of 11 experts to allow people with weakened immune function to get a third dose of certain COVID-19 vaccines.
The decision follows a unanimous vote by the CDC’s Advisory Committee on Immunization Practices (ACIP), which in turn came hours after the U.S. Food and Drug Administration updated its Emergency Use Authorization (EUA) for the Pfizer and Moderna mRNA vaccines.
About 7 million adults in the United States have moderately to severely impaired immune function because of a medical condition they live with or a medication they take to manage a health condition.
People who fall into this category are at higher risk of being hospitalized or dying if they get COVID-19. They are also more likely to transmit the infection. About 40% of vaccinated patients who are hospitalized with breakthrough cases are immunocompromised.
Recent studies have shown that between one-third and one-half of immunocompromised people who didn’t develop antibodies after two doses of a vaccine do get some level of protection after a third dose.
Even then, however, the protection immunocompromised people get from vaccines is not as robust as someone who has healthy immune function, and some panel members were concerned that a third dose might come with a false sense of security.
“My only concern with adding a third dose for the immunocompromised is the impression that our immunocompromised population [will] then be safe,” said ACIP member Helen Talbot, MD, MPH, an associate professor of medicine at Vanderbilt University Medical Center in Nashville, Tenn.
“I think the reality is they’ll be safer but still at incredibly high risk for severe disease and death,” she said.
In updating its EUA, the FDA stressed that, even after a third dose, people who are immunocompromised will still need to wear a mask indoors, socially distance, and avoid large crowds. In addition, family members and other close contacts should be fully vaccinated to protect these vulnerable individuals.
Johnson & Johnson not in the mix
The boosters will be available to children as young as 12 years of age who’ve had a Pfizer vaccine or those ages 18 and older who’ve gotten the Moderna vaccine.
For now, people who’ve had the one-dose Johnson & Johnson vaccine have not been cleared to get a second dose of any vaccine.
FDA experts acknowledged the gap but said that people who had received the Johnson & Johnson vaccine represented a small slice of vaccinated Americans, and said they couldn’t act before the FDA had updated its authorization for that vaccine, which the agency is actively exploring.
“We had to do what we’re doing based on the data we have in hand,” said Peter Marks, MD, director of the Center for Biologics Evaluation and Research at the FDA, the division of the agency that regulates vaccines.
“We think at least there is a solution here for the very large majority of immunocompromised individuals, and we believe we will probably have a solution for the remainder in the not-too-distant future,” Dr. Marks said.
In its updated EUA, the FDA said that the third shots were intended for people who had undergone solid organ transplants or have an “equivalent level of immunocompromise.”
The details
Clinical experts on the CDC panel spent a good deal of time trying to suss out exactly what conditions might fall under the FDA’s umbrella for a third dose.
In a presentation to the committee, Neela Goswami, MD, PhD, an assistant professor of infectious diseases at Emory University School of Medicine and of epidemiology at the Emory Rollins School of Public Health, Atlanta, stressed that the shots are intended for patients who are moderately or severely immunocompromised, in close consultation with their doctors, but that people who should qualify would include those:
- Receiving treatment for solid tumors or blood cancers
- Taking immunosuppressing medications after a solid organ transplant
- Within 2 years of receiving CAR-T therapy or a stem cell transplant
- Who have primary immunodeficiencies – rare genetic disorders that prevent the immune system from working properly
- With advanced or untreated HIV infection
- Taking high-dose corticosteroids (more than 20 milligrams of prednisone or its equivalent daily), alkylating agents, antimetabolites, chemotherapy, TNF blockers, or other immunomodulating or immunosuppressing biologics
- With certain chronic medical conditions, such as asplenia – living without a spleen
- Receiving dialysis
In discussion, CDC experts clarified that these third doses were not intended for people whose immune function had waned with age, such as elderly residents of long-term care facilities or people with chronic diseases like diabetes.
The idea is to try to get a third dose of the vaccine they’ve already had – Moderna or Pfizer – but if that’s not feasible, it’s fine for the third dose to be different from what someone has had before. The third dose should be given at least 28 days after a second dose, and, ideally, before the initiation of immunosuppressive therapy.
Participants in the meeting said that the CDC would post updated materials on its website to help guide physicians on exactly who should receive third doses.
Ultimately, however, the extra doses will be given on an honor system; no prescriptions or other kinds of clinical documentation will be required for people to get a third dose of these shots.
Tests to measure neutralizing antibodies are also not recommended before the shots are given because of differences in the types of tests used to measure these antibodies and the difficulty in interpreting them. It’s unclear right now what level of neutralizing antibodies is needed for protection.
‘Peace of mind’
In public testimony, Heather Braaten, a 44-year-old being treated for ovarian cancer, said she was grateful to have gotten two shots of the Pfizer vaccine last winter, in between rounds of chemotherapy, but she knew she was probably not well protected. She said she’d become obsessive over the past few months reading medical studies and trying to understand her risk.
“I have felt distraught over the situation. My prognosis is poor. I most likely have about two to three years left to live, so everything counts,” Ms. Braaten said.
She said her life ambitions were humble. She wants to visit with friends and family and not have to worry that she’ll be a breakthrough case. She wants to go grocery shopping again and “not panic and leave the store after five minutes.” She’d love to feel free to travel, she said.
“While I understand I still need to be cautious, I am hopeful for the peace of mind and greater freedom a third shot can provide,” Ms. Braaten said.
More boosters on the way?
In the second half of the meeting, the CDC also signaled that it was considering the use of boosters for people whose immunity might have waned in the months since they had completed their vaccine series, particularly seniors. About 75% of people hospitalized with vaccine breakthrough cases are over age 65, according to CDC data.
Those considerations are becoming more urgent as the Delta variant continues to pummel less vaccinated states and counties.
In its presentation to the ACIP, Heather Scobie, PhD, MPH, a member of the CDC’s COVID Response Team, highlighted data from Canada, Israel, Qatar, and the United Kingdom showing that, while the Pfizer vaccine was still highly effective at preventing hospitalizations and death, it is far less effective against the Delta variant at preventing infections that cause symptoms.
In Israel, Pfizer’s vaccine prevented symptoms an average of 41% of the time. In Qatar, which is also using the Moderna vaccine, Pfizer’s prevented symptomatic infections with Delta about 54% of the time compared with 85% with Moderna’s.
Dr. Scobie noted that Pfizer’s waning efficacy may have something to do with the fact that it uses a lower dosage than Moderna’s. Pfizer’s recommended dosing interval is also shorter – 3 weeks compared with 4 weeks for Moderna’s. Stretching the time between shots has been shown to boost vaccine effectiveness, she said.
New data from the Mayo Clinic, published ahead of peer review, also suggest that Pfizer’s protection may be fading more quickly than Moderna’s.
In February, both shots were nearly 100% effective at preventing SARS-CoV-2 infection, but by July, against Delta, Pfizer’s efficacy had dropped to somewhere between 13% and 62%, while Moderna’s was still effective at preventing infection between 58% and 87% of the time.
In July, Pfizer’s was between 24% and 94% effective at preventing hospitalization with a COVID-19 infection and Moderna’s was between 33% and 96% effective at preventing hospitalization.
While that may sound like cause for concern, Dr. Scobie noted that, as of August 2, severe COVID-19 outcomes after vaccination are still very rare. Among 164 million fully vaccinated people in the United States there have been about 7,000 hospitalizations and 1,500 deaths; nearly three out of four of these have been in people over the age of 65.
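Putting those figures on a per-capita basis makes “very rare” concrete. A quick back-of-envelope calculation from the numbers in the text:

```python
# Figures cited as of August 2, per the CDC.
fully_vaccinated = 164_000_000
hospitalizations = 7_000
deaths = 1_500

# Rates per 100,000 fully vaccinated people.
hosp_per_100k = hospitalizations / fully_vaccinated * 100_000
deaths_per_100k = deaths / fully_vaccinated * 100_000

print(f"{hosp_per_100k:.1f} hospitalizations per 100,000 fully vaccinated")
print(f"{deaths_per_100k:.2f} deaths per 100,000 fully vaccinated")
```

That works out to roughly 4 hospitalizations and fewer than 1 death per 100,000 fully vaccinated people over the period described.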
The ACIP will next meet on August 24 to focus solely on the COVID-19 vaccines.
A version of this article first appeared on Medscape.com.
Centers for Disease Control and Prevention Director Rochelle Walensky, MD, has officially signed off on a recommendation by an independent panel of 11 experts to allow people with weakened immune function to get a third dose of certain COVID-19 vaccines.
The decision follows a unanimous vote by the CDC’s Advisory Committee on Immunization Practices (ACIP), which in turn came hours after the U.S. Food and Drug Administration updated its Emergency Use Authorization (EUA) for the Pfizer and Moderna mRNA vaccines.
About 7 million adults in the United States have moderately to severely impaired immune function because of a medical condition they live with or a medication they take to manage a health condition.
People who fall into this category are at higher risk of being hospitalized or dying if they get COVID-19. They are also more likely to transmit the infection. About 40% of vaccinated patients who are hospitalized with breakthrough cases are immunocompromised.
Recent studies have shown that between one-third and one-half of immunocompromised people who didn’t develop antibodies after two doses of a vaccine do get some level of protection after a third dose.
Even then, however, the protection immunocompromised people get from vaccines is not as robust as someone who has healthy immune function, and some panel members were concerned that a third dose might come with a false sense of security.
“My only concern with adding a third dose for the immunocompromised is the impression that our immunocompromised population [will] then be safe,” said ACIP member Helen Talbot, MD, MPH, an associate professor of medicine at Vanderbilt University Medical Center in Nashville, Tenn.
“I think the reality is they’ll be safer but still at incredibly high risk for severe disease and death,” she said.
In updating its EUA, the FDA stressed that, even after a third dose, people who are immunocompromised will still need to wear a mask indoors, socially distance, and avoid large crowds. In addition, family members and other close contacts should be fully vaccinated to protect these vulnerable individuals.
Johnson & Johnson not in the mix
The boosters will be available to children as young as 12 years of age who’ve had a Pfizer vaccine or those ages 18 and older who’ve gotten the Moderna vaccine.
For now, people who’ve had the one-dose Johnson & Johnson vaccine have not been cleared to get a second dose of any vaccine.
FDA experts acknowledged the gap but said that people who had received the Johnson & Johnson vaccine represented a small slice of vaccinated Americans, and said they couldn’t act before the FDA had updated its authorization for that vaccine, which the agency is actively exploring.
“We had to do what we’re doing based on the data we have in hand,” said Peter Marks, MD, director of the Center for Biologics Evaluation and Research at the FDA, the division of the agency that regulates vaccines.
“We think at least there is a solution here for the very large majority of immunocompromised individuals, and we believe we will probably have a solution for the remainder in the not-too-distant future,” Dr. Marks said.
In its updated EUA, the FDA said that the third shots were intended for people who had undergone solid organ transplants or have an “equivalent level of immunocompromise.”
The details
Clinical experts on the CDC panel spent a good deal of time trying to suss out exactly what conditions might fall under the FDA’s umbrella for a third dose.
In a presentation to the committee, Neela Goswami, MD, PhD, an assistant professor of infectious diseases at Emory University School of Medicine and of epidemiology at the Emory Rollins School of Public Health, Atlanta, stressed that the shots are intended for patients who are moderately or severely immunocompromised, in close consultation with their doctors, but that people who should qualify would include those:
- Receiving treatment for solid tumors or blood cancers
- Taking immunosuppressing medications after a solid organ transplant
- Within 2 years of receiving CAR-T therapy or a stem cell transplant
- Who have primary immunodeficiencies – rare genetic disorders that prevent the immune system from working properly
- With advanced or untreated
- Taking high-dose corticosteroids (more than 20 milligrams of or its equivalent daily), alkylating agents, antimetabolites, chemotherapy, TNF blockers, or other immunomodulating or immunosuppressing biologics
- With certain chronic medical conditions, such as or asplenia – living without a spleen
- Receiving dialysis
In discussion, CDC experts clarified that these third doses were not intended for people whose immune function had waned with age, such as elderly residents of long-term care facilities or people with chronic diseases like diabetes.
The idea is to try to get a third dose of the vaccine they’ve already had – Moderna or Pfizer – but if that’s not feasible, it’s fine for the third dose to be different from what someone has had before. The third dose should be given at least 28 days after a second dose, and, ideally, before the initiation of immunosuppressive therapy.
Participants in the meeting said that the CDC would post updated materials on its website to help guide physicians on exactly who should receive third doses.
Ultimately, however, the extra doses will be given on an honor system; no prescriptions or other kinds of clinical documentation will be required for people to get a third dose of these shots.
Tests to measure neutralizing antibodies are also not recommended before the shots are given because of differences in the types of tests used to measure these antibodies and the difficulty in interpreting them. It’s unclear right now what level of neutralizing antibodies is needed for protection.
‘Peace of mind’
In public testimony, Heather Braaten, a 44-year-old being treated for ovarian cancer, said she was grateful to have gotten two shots of the Pfizer vaccine last winter, in between rounds of chemotherapy, but she knew she was probably not well protected. She said she’d become obsessive over the past few months reading medical studies and trying to understand her risk.
“I have felt distraught over the situation. My prognosis is poor. I most likely have about two to three years left to live, so everything counts,” Ms. Braaten said.
She said her life ambitions were humble. She wants to visit with friends and family and not have to worry that she’ll be a breakthrough case. She wants to go grocery shopping again and “not panic and leave the store after five minutes.” She’d love to feel free to travel, she said.
“While I understand I still need to be cautious, I am hopeful for the peace of mind and greater freedom a third shot can provide,” Ms. Braaten said.
More boosters on the way?
In the second half of the meeting, the CDC also signaled that it was considering the use of boosters for people whose immunity might have waned in the months since they had completed their vaccine series, particularly seniors. About 75% of people hospitalized with vaccine breakthrough cases are over age 65, according to CDC data.
Those considerations are becoming more urgent as the Delta variant continues to pummel less vaccinated states and counties.
In its presentation to the ACIP, Heather Scobie, PhD, MPH, a member of the CDC’s COVID Response Team, highlighted data from Canada, Israel, Qatar, and the United Kingdom showing that, while the Pfizer vaccine was still highly effective at preventing hospitalizations and death, it’s far less likely when faced with Delta to prevent an infection that causes symptoms.
In Israel, Pfizer’s vaccine prevented symptoms an average of 41% of the time. In Qatar, which is also using the Moderna vaccine, Pfizer’s prevented symptomatic infections with Delta about 54% of the time compared with 85% with Moderna’s.
Dr. Scobie noted that Pfizer’s waning efficacy may have something to do with the fact that it uses a lower dosage than Moderna’s. Pfizer’s recommended dosing interval is also shorter – 3 weeks compared with 4 weeks for Moderna’s. Stretching the time between shots has been shown to boost vaccine effectiveness, she said.
New data from the Mayo clinic, published ahead of peer review, also suggest that Pfizer’s protection may be fading more quickly than Moderna’s.
In February, both shots were nearly 100% effective at preventing the SARS-CoV-2 infection, but by July, against Delta, Pfizer’s efficacy had dropped to somewhere between 13% and 62%, while Moderna’s was still effective at preventing infection between 58% and 87% of the time.
In July, Pfizer’s was between 24% and 94% effective at preventing hospitalization with a COVID-19 infection and Moderna’s was between 33% and 96% effective at preventing hospitalization.
While that may sound like cause for concern, Dr. Scobie noted that, as of August 2, severe COVD-19 outcomes after vaccination are still very rare. Among 164 million fully vaccinated people in the United States there have been about 7,000 hospitalizations and 1,500 deaths; nearly three out of four of these have been in people over the age of 65.
The ACIP will next meet on August 24 to focus solely on the COVID-19 vaccines.
A version of this article first appeared on Medscape.com.
Centers for Disease Control and Prevention Director Rochelle Walensky, MD, has officially signed off on a recommendation by an independent panel of 11 experts to allow people with weakened immune function to get a third dose of certain COVID-19 vaccines.
The decision follows a unanimous vote by the CDC’s Advisory Committee on Immunization Practices (ACIP), which in turn came hours after the U.S. Food and Drug Administration updated its Emergency Use Authorization (EUA) for the Pfizer and Moderna mRNA vaccines.
About 7 million adults in the United States have moderately to severely impaired immune function because of a medical condition they live with or a medication they take to manage a health condition.
People who fall into this category are at higher risk of being hospitalized or dying if they get COVID-19. They are also more likely to transmit the infection. About 40% of vaccinated patients who are hospitalized with breakthrough cases are immunocompromised.
Recent studies have shown that between one-third and one-half of immunocompromised people who didn’t develop antibodies after two doses of a vaccine do get some level of protection after a third dose.
Even then, however, the protection immunocompromised people get from vaccines is not as robust as someone who has healthy immune function, and some panel members were concerned that a third dose might come with a false sense of security.
“My only concern with adding a third dose for the immunocompromised is the impression that our immunocompromised population [will] then be safe,” said ACIP member Helen Talbot, MD, MPH, an associate professor of medicine at Vanderbilt University Medical Center in Nashville, Tenn.
“I think the reality is they’ll be safer but still at incredibly high risk for severe disease and death,” she said.
In updating its EUA, the FDA stressed that, even after a third dose, people who are immunocompromised will still need to wear a mask indoors, socially distance, and avoid large crowds. In addition, family members and other close contacts should be fully vaccinated to protect these vulnerable individuals.
Johnson & Johnson not in the mix
The boosters will be available to children as young as 12 years of age who’ve had a Pfizer vaccine or those ages 18 and older who’ve gotten the Moderna vaccine.
For now, people who’ve had the one-dose Johnson & Johnson vaccine have not been cleared to get a second dose of any vaccine.
FDA experts acknowledged the gap but said that people who had received the Johnson & Johnson vaccine represented a small slice of vaccinated Americans, and said they couldn’t act before the FDA had updated its authorization for that vaccine, which the agency is actively exploring.
“We had to do what we’re doing based on the data we have in hand,” said Peter Marks, MD, director of the Center for Biologics Evaluation and Research at the FDA, the division of the agency that regulates vaccines.
“We think at least there is a solution here for the very large majority of immunocompromised individuals, and we believe we will probably have a solution for the remainder in the not-too-distant future,” Dr. Marks said.
In its updated EUA, the FDA said that the third shots were intended for people who had undergone solid organ transplants or have an “equivalent level of immunocompromise.”
The details
Clinical experts on the CDC panel spent a good deal of time trying to suss out exactly what conditions might fall under the FDA’s umbrella for a third dose.
In a presentation to the committee, Neela Goswami, MD, PhD, an assistant professor of infectious diseases at Emory University School of Medicine and of epidemiology at the Emory Rollins School of Public Health, Atlanta, stressed that the shots are intended for patients who are moderately or severely immunocompromised, in close consultation with their doctors, but that people who should qualify would include those:
- Receiving treatment for solid tumors or blood cancers
- Taking immunosuppressing medications after a solid organ transplant
- Within 2 years of receiving CAR-T therapy or a stem cell transplant
- Who have primary immunodeficiencies – rare genetic disorders that prevent the immune system from working properly
- With advanced or untreated
- Taking high-dose corticosteroids (more than 20 milligrams of or its equivalent daily), alkylating agents, antimetabolites, chemotherapy, TNF blockers, or other immunomodulating or immunosuppressing biologics
- With certain chronic medical conditions, such as or asplenia – living without a spleen
- Receiving dialysis
In discussion, CDC experts clarified that these third doses were not intended for people whose immune function had waned with age, such as elderly residents of long-term care facilities or people with chronic diseases like diabetes.
The idea is to try to get a third dose of the vaccine they’ve already had – Moderna or Pfizer – but if that’s not feasible, it’s fine for the third dose to be different from what someone has had before. The third dose should be given at least 28 days after a second dose, and, ideally, before the initiation of immunosuppressive therapy.
Participants in the meeting said that the CDC would post updated materials on its website to help guide physicians on exactly who should receive third doses.
Ultimately, however, the extra doses will be given on an honor system; no prescriptions or other kinds of clinical documentation will be required for people to get a third dose of these shots.
Tests to measure neutralizing antibodies are also not recommended before the shots are given because of differences in the types of tests used to measure these antibodies and the difficulty in interpreting them. It’s unclear right now what level of neutralizing antibodies is needed for protection.
‘Peace of mind’
In public testimony, Heather Braaten, a 44-year-old being treated for ovarian cancer, said she was grateful to have gotten two shots of the Pfizer vaccine last winter, in between rounds of chemotherapy, but she knew she was probably not well protected. She said she’d become obsessive over the past few months reading medical studies and trying to understand her risk.
“I have felt distraught over the situation. My prognosis is poor. I most likely have about two to three years left to live, so everything counts,” Ms. Braaten said.
She said her life ambitions were humble. She wants to visit with friends and family and not have to worry that she’ll be a breakthrough case. She wants to go grocery shopping again and “not panic and leave the store after five minutes.” She’d love to feel free to travel, she said.
“While I understand I still need to be cautious, I am hopeful for the peace of mind and greater freedom a third shot can provide,” Ms. Braaten said.
More boosters on the way?
In the second half of the meeting, the CDC also signaled that it was considering the use of boosters for people whose immunity might have waned in the months since they had completed their vaccine series, particularly seniors. About 75% of people hospitalized with vaccine breakthrough cases are over age 65, according to CDC data.
Those considerations are becoming more urgent as the Delta variant continues to pummel less vaccinated states and counties.
In its presentation to the ACIP, Heather Scobie, PhD, MPH, a member of the CDC’s COVID Response Team, highlighted data from Canada, Israel, Qatar, and the United Kingdom showing that, while the Pfizer vaccine remained highly effective at preventing hospitalization and death, it was far less effective against Delta at preventing symptomatic infection.
In Israel, Pfizer’s vaccine prevented symptoms an average of 41% of the time. In Qatar, which is also using the Moderna vaccine, Pfizer’s prevented symptomatic infections with Delta about 54% of the time compared with 85% with Moderna’s.
Dr. Scobie noted that Pfizer’s waning efficacy may have something to do with the fact that it uses a lower dosage than Moderna’s. Pfizer’s recommended dosing interval is also shorter – 3 weeks compared with 4 weeks for Moderna’s. Stretching the time between shots has been shown to boost vaccine effectiveness, she said.
New data from the Mayo Clinic, published ahead of peer review, also suggest that Pfizer’s protection may be fading more quickly than Moderna’s.
In February, both shots were nearly 100% effective at preventing SARS-CoV-2 infection, but by July, against Delta, Pfizer’s efficacy had dropped to somewhere between 13% and 62%, while Moderna’s still prevented infection between 58% and 87% of the time.
In July, Pfizer’s was between 24% and 94% effective at preventing hospitalization with a COVID-19 infection and Moderna’s was between 33% and 96% effective at preventing hospitalization.
While that may sound like cause for concern, Dr. Scobie noted that, as of August 2, severe COVID-19 outcomes after vaccination remained very rare. Among 164 million fully vaccinated people in the United States, there have been about 7,000 hospitalizations and 1,500 deaths; nearly three out of four of these were in people over the age of 65.
The ACIP will next meet on August 24 to focus solely on the COVID-19 vaccines.
A version of this article first appeared on Medscape.com.
Use of point-of-care ultrasound (POCUS) for heart failure
Case
A 65-year-old woman presents to the emergency department with a chief complaint of shortness of breath for 3 days. Her medical history is notable for moderate chronic obstructive pulmonary disease, systolic heart failure with a last known ejection fraction (EF) of 35%, and type 2 diabetes complicated by hyperglycemia when she is on steroids. You discuss the case with colleagues, who suggest that point-of-care ultrasound (POCUS) would be useful in her case.
Brief overview of the issue
Once mainly used by ED and critical care physicians, POCUS is now a tool that many hospitalists are using at the bedside. POCUS differs from traditional comprehensive ultrasounds in the following ways: POCUS is designed to answer a specific clinical question (as opposed to evaluating all organs in a specific region), POCUS exams are performed by the clinician who is formulating the clinical question (as opposed to by a consultative service such as cardiology and radiology), and POCUS can evaluate multiple organ systems (such as by evaluating a patient’s heart, lungs, and inferior vena cava to determine the etiology of hypoxia).
Hospitalist use of POCUS may include guiding procedures, aiding in diagnosis, and assessing effectiveness of treatment. Many high-quality studies support the use of POCUS and have shown that it can decrease medical errors, speed diagnosis, and complement or replace more advanced imaging.
A challenge of POCUS is that it is user dependent and there are no established standards for hospitalists in POCUS training. As the Society of Hospital Medicine position statement on POCUS points out, there is a significant difference between the skill levels required to obtain a certificate of completion for POCUS training and a certificate of competency in POCUS. Therefore, it is recommended that hospitalists work with local credentialing committees to delineate the requirements for POCUS use.
Overview of the data
POCUS for initial assessment and diagnosis of heart failure (HF)
Use of POCUS in cases of suspected HF includes examination of the heart, lungs, and inferior vena cava (IVC). Cardiac ultrasound provides an estimated ejection fraction. Lung ultrasound (LUS) functions to examine for B lines and pleural effusions. The presence of more than three B lines per thoracic zone bilaterally suggests cardiogenic pulmonary edema. Scanning the IVC provides a noninvasive way to assess volume status and is especially helpful when body habitus prevents accurate assessment of jugular venous pressure.
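The bilateral B-line criterion above is a simple threshold rule. As a minimal illustrative sketch (the function name and the per-zone data structure are hypothetical, not from any cited protocol, and published B-line cutoffs vary):

```python
def suggests_cardiogenic_edema(b_lines_by_zone):
    """Apply the bilateral B-line rule described in the text.

    b_lines_by_zone maps (side, zone) -> B-line count, e.g. ('left', 1) -> 4.
    Returns True when more than three B lines are seen in a thoracic zone
    on BOTH the left and the right side.
    """
    def side_positive(side):
        return any(count > 3
                   for (s, _zone), count in b_lines_by_zone.items()
                   if s == side)
    return side_positive('left') and side_positive('right')

# Bilateral B lines -> consistent with cardiogenic pulmonary edema
print(suggests_cardiogenic_edema({('left', 1): 4, ('right', 2): 5}))
# Unilateral B lines only -> criterion not met
print(suggests_cardiogenic_edema({('left', 1): 4, ('right', 2): 2}))
```

A rule this simple is only a screening aid; the clinical studies cited below pair B-line findings with cardiac and IVC views before calling the diagnosis.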
Several studies have addressed the utility of bedside ultrasound in the initial assessment or diagnosis of acute decompensated heart failure (ADHF) in patients presenting with dyspnea in emergency or inpatient settings. Positive B lines are a useful finding, with high sensitivities, high specificities, and positive likelihood ratios. One large multicenter prospective study found LUS to have a sensitivity of 90.5%, specificity of 93.5%, and positive and negative LRs of 14.0 and 0.10, respectively.1 Another large multicenter prospective cohort study showed that LUS was more sensitive and more specific than chest x-ray (CXR) and brain natriuretic peptide in detecting ADHF.2 Additional POCUS findings that have shown relatively high sensitivities and specificities in the initial diagnosis of ADHF include pleural effusion, reduced left ventricular ejection fraction (LVEF), increased left ventricular end-diastolic dimension, and jugular venous distention.
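The likelihood ratios quoted above follow directly from the reported sensitivity and specificity; a quick arithmetic check:

```python
# Likelihood ratios from the reported LUS operating characteristics
sens, spec = 0.905, 0.935

lr_pos = sens / (1 - spec)   # LR+ = sensitivity / (1 - specificity) ≈ 13.9
lr_neg = (1 - sens) / spec   # LR- = (1 - sensitivity) / specificity ≈ 0.10

print(round(lr_pos, 1), round(lr_neg, 2))
```

The computed LR+ of about 13.9 matches the reported 14.0 to rounding, and the LR- of about 0.10 matches exactly; an LR+ above 10 and an LR- near 0.1 are conventionally considered strong evidence for ruling a diagnosis in and out, respectively.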
Data also exist on assessment of ADHF using combinations of POCUS findings; for example, lung and cardiac ultrasound (LuCUS) protocols include an evaluation for B lines, assessment of IVC size and collapsibility, and determination of LVEF, although this has mainly been examined in ED patients. For patients who presented to the ED with undifferentiated dyspnea, one such study showed a specificity of 100% when a LuCUS protocol was used to diagnose ADHF, while another study showed that use of a LuCUS protocol changed management in 47% of patients.3,4 Of note, although each LuCUS protocol integrated lung findings, IVC collapsibility, and LVEF, the exact protocols varied by institution. Finally, multiple studies have established that LUS used in addition to the standard workup – history and physical, labs, and electrocardiogram – increases diagnostic accuracy.2,5
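Because the cited LuCUS protocols varied by institution, the exact decision logic differed, but each combined the same three inputs. A hedged sketch of one plausible composite rule (the class, field names, and threshold logic here are illustrative assumptions, not any published protocol):

```python
from dataclasses import dataclass

@dataclass
class LucusFindings:
    bilateral_b_lines: bool  # diffuse B lines on both sides on LUS
    reduced_lvef: bool       # visually estimated LVEF below normal
    ivc_plethoric: bool      # dilated IVC with minimal respiratory collapse

def adhf_likely(f: LucusFindings) -> bool:
    # Illustrative composite rule: pulmonary congestion on LUS plus at
    # least one supporting cardiac or IVC finding. Actual LuCUS protocols
    # in the cited studies weighted and combined these findings differently.
    return f.bilateral_b_lines and (f.reduced_lvef or f.ivc_plethoric)

print(adhf_likely(LucusFindings(True, False, True)))   # congestion + IVC finding
print(adhf_likely(LucusFindings(False, True, True)))   # no B lines -> not met
```

The design point the studies share is that requiring agreement across organ systems drives specificity up, which is how one protocol reached 100% specificity for ADHF.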
Using POCUS to guide diuretic therapy in HF
To date, multiple small studies have been published on the utility of daily POCUS in hospitalized patients with ADHF to help assess response to treatment and guide diuresis by looking for a reduction in B lines on LUS or a change in IVC size or collapsibility. Volpicelli and colleagues showed that daily LUS was at least as good as daily CXR in monitoring response to therapy.6 Similarly, Mozzini and colleagues performed a randomized controlled trial of 120 patients admitted for ADHF who were randomized to a CXR group (who had a CXR performed on admission and discharge) and a LUS group (performed at admission, 24 hours, 48 hours, 72 hours, and discharge).7 This study found that the LUS group underwent a significantly higher number of diuretic dose adjustments than the CXR group (P < .001) and had a modest improvement in LOS compared with the CXR group. Specifically, median LOS was 8 days in the CXR group (range, 4-17 days) and 7 days in the LUS group (range, 3-10 days; P < .001).
The impact of POCUS on length of stay (LOS) and readmissions
There is increasing data that POCUS can have meaningful impacts on patient-centered outcomes (morbidity, mortality, and readmission) while exposing patients to minimal discomfort, no venipuncture, and no radiation exposure. First, multiple studies looked at whether performing focused cardiac US of the IVC as a marker of volume status could predict readmission in patients hospitalized for ADHF.8,9 Both of these trials showed that a plethoric, noncollapsible IVC at discharge was a statistically significant predictor of readmission. In fact, Goonewardena and colleagues demonstrated that patients who required readmission had an enlarged IVC at discharge nearly 3 times more frequently (61% vs. 21%, P < .001) and abnormal IVC collapsibility roughly 1.5 times more frequently (71% vs. 41%, P = .01) compared with patients who remained out of the hospital.9
Similarly, a subsequent trial looked at whether IVC size on admission was of prognostic importance in patients hospitalized for ADHF and showed that admission IVC diameter was an independent predictor of both 90-day mortality (hazard ratio, 5.88; 95% confidence interval, 1.21-28.10; P = .025) and 90-day readmission (HR, 3.20; 95% CI, 1.24-8.21; P = .016).10 Additionally, LUS heart failure assessment for pulmonary congestion by counting B lines also showed that having more than 15 B lines prior to discharge was an independent predictor of readmission for ADHF at 6 months (HR, 11.74; 95% CI, 1.30-106.16).11
A challenge of POCUS: Obtaining competency
As previously noted, there are not yet any established standards for training and assessing hospitalists in POCUS. The SHM Position Statement on POCUS recommends the following criteria for training5: the training environment should be similar to the location in which the trainee will practice, training and feedback should occur in real time, the trainee should be taught specific applications of POCUS (such as cardiac US, LUS, and IVC US) as each application comes with unique skills and knowledge, clinical competence must be achieved and demonstrated, and continued education and feedback are necessary once competence is obtained.12 SHM recommends residency-based training pathways, training through a local or national program such as the SHM POCUS certificate program, or training through other medical societies for hospitalists already in practice.
Application of the data to our original case
Targeted POCUS using the LuCUS protocol is performed and reveals three B lines in two lung zones bilaterally, moderate bilateral pleural effusions, an EF of 20%, and a noncollapsible IVC, leading to a diagnosis of ADHF. Her ADHF is treated with intravenous diuresis. She is continued on her maintenance chronic obstructive pulmonary disease regimen but does not receive steroids, avoiding the hyperglycemia that has complicated prior admissions. Over the next few days her respiratory and cardiac status is monitored with POCUS to assess her response to therapy and titrate her diuretics to her true dry weight, which is several pounds lower than her previously assumed dry weight. At discharge she is instructed to use the new dry weight, which may help avoid readmissions for HF.
Bottom line
POCUS improves diagnostic accuracy and facilitates volume assessment and management in acute decompensated heart failure.
Dr. Farber is a medical instructor at Duke University and hospitalist at Duke Regional Hospital, both in Durham, N.C. Dr. Marcantonio is a medical instructor in the department of internal medicine and department of pediatrics at Duke University and hospitalist at Duke University Hospital and Duke Regional Hospital. Dr. Stafford and Dr. Brooks are assistant professors of medicine and hospitalists at Duke Regional Hospital. Dr. Wachter is associate medical director at Duke Regional Hospital and assistant professor at Duke University. Dr. Menon is a hospitalist at Duke University. Dr. Sharma is associate medical director for clinical education at Duke Regional Hospital and associate professor of medicine at Duke University.
References
1. Pivetta E et al. Lung ultrasound integrated with clinical assessment for the diagnosis of acute decompensated heart failure in the emergency department: A randomized controlled trial. Eur J Heart Fail. 2019 Jun;21(6):754-66. doi: 10.1002/ejhf.1379.
2. Pivetta E et al. Lung ultrasound-implemented diagnosis of acute decompensated heart failure in the ED: A SIMEU multicenter study. Chest. 2015;148(1):202-10. doi: 10.1378/chest.14-2608.
3. Anderson KL et al. Diagnosing heart failure among acutely dyspneic patients with cardiac, inferior vena cava, and lung ultrasonography. Am J Emerg Med. 2013;31:1208-14. doi: 10.1016/j.ajem.2013.05.007.
4. Russell FM et al. Diagnosing acute heart failure in patients with undifferentiated dyspnea: A lung and cardiac ultrasound (LuCUS) protocol. Acad Emerg Med. 2015;22(2):182-91. doi:10.1111/acem.12570.
5. Maw AM et al. Diagnostic accuracy of point-of-care lung ultrasonography and chest radiography in adults with symptoms suggestive of acute decompensated heart failure: A systematic review and meta-analysis. JAMA Netw Open. 2019 Mar 1;2(3):e190703. doi:10.1001/jamanetworkopen.2019.0703.
6. Volpicelli G et al. Bedside ultrasound of the lung for the monitoring of acute decompensated heart failure. Am J Emerg Med. 2008 Jun;26(5):585-91. doi:10.1016/j.ajem.2007.09.014.
7. Mozzini C et al. Lung ultrasound in internal medicine efficiently drives the management of patients with heart failure and speeds up the discharge time. Intern Emerg Med. 2018 Jan;13(1):27-33. doi: 10.1007/s11739-017-1738-1.
8. Laffin LJ et al. Focused cardiac ultrasound as a predictor of readmission in acute decompensated heart failure. Int J Cardiovasc Imaging. 2018;34(7):1075-9. doi:10.1007/s10554-018-1317-1.
9. Goonewardena SN et al. Comparison of hand-carried ultrasound assessment of the inferior vena cava and N-terminal pro-brain natriuretic peptide for predicting readmission after hospitalization for acute decompensated heart failure. JACC Cardiovasc Imaging. 2008;1(5):595-601. doi:10.1016/j.jcmg.2008.06.005.
10. Cubo-Romano P et al. Admission inferior vena cava measurements are associated with mortality after hospitalization for acute decompensated heart failure. J Hosp Med. 2016 Nov;11(11):778-84. doi: 10.1002/jhm.2620.
11. Gargani L et al. Persistent pulmonary congestion before discharge predicts rehospitalization in heart failure: A lung ultrasound study. Cardiovasc Ultrasound. 2015 Sep 4;13:40. doi: 10.1186/s12947-015-0033-4.
12. Soni NJ et al. Point-of-care ultrasound for hospitalists: A Position Statement of the Society of Hospital Medicine. J Hosp Med. 2019 Jan 2;14:E1-6. doi: 10.12788/jhm.3079.
Key points
- Studies have found POCUS improves the diagnosis of acute decompensated heart failure in patients presenting with dyspnea.
- Daily evaluation with POCUS has decreased length of stay in acute decompensated heart failure.
- Credentialing requirements for hospitalists to use POCUS for clinical care vary by hospital.
Additional reading
Maw AM and Soni NJ. Annals for hospitalists inpatient notes – why should hospitalists use point-of-care ultrasound? Ann Intern Med. 2018 Apr 17;168(8):HO2-HO3. doi: 10.7326/M18-0367.
Lewiss RE. “The ultrasound looked fine”: Point of care ultrasound and patient safety. AHRQ’s Patient Safety Network. WebM&M: Case Studies. 2018 Jul 1. https://psnet.ahrq.gov/web-mm/ultrasound-looked-fine-point-care-ultrasound-and-patient-safety.
Quiz: Testing your POCUS knowledge
POCUS is increasingly prevalent in hospital medicine, but use varies among different disease processes. Which organ system ultrasound or lab test would be most helpful in the following scenario?
An acutely dyspneic patient with no past medical history presents to the ED. Chest x-ray is equivocal. Of the following, which study best confirms a diagnosis of acute decompensated heart failure?
A. Brain natriuretic peptide
B. Point-of-care cardiac ultrasound
C. Point-of-care lung ultrasound
D. Point-of-care inferior vena cava ultrasound
Answer
C. Point-of-care lung ultrasound
Multiple studies, including three systematic reviews, have shown that point-of-care lung ultrasound has high sensitivity and specificity when evaluating for B lines as a marker of cardiogenic pulmonary edema. Point-of-care assessment of ejection fraction and of the inferior vena cava has not been evaluated by systematic review, although one randomized controlled trial showed that an EF less than 45% had 74% specificity and 77% sensitivity, and an IVC collapsibility index less than 20% had 86% specificity and 52% sensitivity, for detection of acute decompensated heart failure. The same study showed that the combination of cardiac, lung, and IVC point-of-care ultrasound had 100% specificity for diagnosing acute decompensated heart failure. In the future, health care providers may be able to rely on this multiorgan point-of-care evaluation to confirm a diagnosis of acute decompensated heart failure in a dyspneic patient.
Case
A 65-year-old woman presents to the emergency department with a chief complaint of shortness of breath for 3 days. Medical history is notable for moderate chronic obstructive pulmonary disorder, systolic heart failure with last known ejection fraction (EF) of 35% and type 2 diabetes complicated by hyperglycemia when on steroids. You are talking the case over with colleagues and they suggest point-of-care ultrasound (POCUS) would be useful in her case.
Brief overview of the issue
Once mainly used by ED and critical care physicians, POCUS is now a tool that many hospitalists are using at the bedside. POCUS differs from traditional comprehensive ultrasounds in the following ways: POCUS is designed to answer a specific clinical question (as opposed to evaluating all organs in a specific region), POCUS exams are performed by the clinician who is formulating the clinical question (as opposed to by a consultative service such as cardiology and radiology), and POCUS can evaluate multiple organ systems (such as by evaluating a patient’s heart, lungs, and inferior vena cava to determine the etiology of hypoxia).
Hospitalist use of POCUS may include guiding procedures, aiding in diagnosis, and assessing effectiveness of treatment. Many high-quality studies have been published that support the use of POCUS and have proven that POCUS can decrease medical errors, help reach diagnoses in a more expedited fashion, and complement or replace more advanced imaging.
A challenge of POCUS is that it is user dependent and there are no established standards for hospitalists in POCUS training. As the Society of Hospital Medicine position statement on POCUS points out, there is a significant difference between skill levels required to obtain a certificate of completion for POCUS training and a certificate of competency in POCUS. Therefore, it is recommended hospitalists work with local credentialing committees to delineate the requirements for POCUS use.
Overview of the data
POCUS for initial assessment and diagnosis of heart failure (HF)
Use of POCUS in cases of suspected HF includes examination of the heart, lungs, and inferior vena cava (IVC). Cardiac ultrasound provides an estimated ejection fraction. Lung ultrasound (LUS) functions to examine for B lines and pleural effusions. The presence of more than three B lines per thoracic zone bilaterally suggests cardiogenic pulmonary edema. Scanning the IVC provides a noninvasive way to assess volume status and is especially helpful when body habitus prevents accurate assessment of jugular venous pressure.
Several studies have addressed the utility of bedside ultrasound in the initial assessment or diagnosis of acute decompensated heart failure (ADHF) in patients presenting with dyspnea in emergency or inpatient settings. Positive B lines are a useful finding, with high sensitivities, high specificities, and positive likelihood ratios. One large multicenter prospective study found LUS to have a sensitivity of 90.5%, specificity of 93.5%, and positive and negative LRs of 14.0 and 0.10, respectively.1 Another large multicenter prospective cohort study showed that LUS was more sensitive and more specific than chest x-ray (CXR) and brain natriuretic peptide in detecting ADHF.2 Additional POCUS findings that have shown relatively high sensitivities and specificities in the initial diagnosis of ADHF include pleural effusion, reduced left ventricular ejection fraction (LVEF), increased left ventricular end-diastolic dimension, and jugular venous distention.
Data also exists on assessments of ADHF using combinations of POCUS findings; for example, lung and cardiac ultrasound (LuCUS) protocols include an evaluation for B lines, assessment of IVC size and collapsibility, and determination of LVEF, although this has mainly been examined in ED patients. For patients who presented to the ED with undifferentiated dyspnea, one such study showed a specificity of 100% when a LuCUS protocol was used to diagnose ADHF while another study showed that the use of a LuCUS protocol changed management in 47% of patients.3,4 Of note, although each LuCUS protocol integrated the use of lung findings, IVC collapsibility, and LVEF, the exact protocols varied by institution. Finally, it has been established in multiple studies that LUS used in addition to standard workup including history and physical, labs, and electrocardiogram has been shown to increase diagnostic accuracy.2,5
Using POCUS to guide diuretic therapy in HF
To date, there have been multiple small studies published on the utility of daily POCUS in hospitalized patients with ADHF to help assess response to treatment and guide diuresis by looking for reduction in B lines on LUS or a change in IVC size or collapsibility. Volpicelli and colleagues showed that daily LUS was at least as good as daily CXR in monitoring response to therapy.6 Similarly, Mozzini and colleagues performed a randomized controlled trial of 120 patients admitted for ADHF who were randomized to a CXR group (who had a CXR performed on admission and discharge) and a LUS group (which was performed at admission, 24 hours, 48 hours, 72 hours, and discharge).7 This study found that the LUS group underwent a significantly higher number of diuretic dose adjustments as compared with the CXR group (P < .001) and had a modest improvement in LOS, compared with the CXR group. Specifically, median LOS was 8 days in CXR group (range, 4-17 days) and 7 days in the LUS group (range, 3-10 days; P < .001).
The impact of POCUS on length of stay (LOS) and readmissions
There is increasing data that POCUS can have meaningful impacts on patient-centered outcomes (morbidity, mortality, and readmission) while exposing patients to minimal discomfort, no venipuncture, and no radiation exposure. First, multiple studies looked at whether performing focused cardiac US of the IVC as a marker of volume status could predict readmission in patients hospitalized for ADHF.8,9 Both of these trials showed that plethoric, noncollapsible IVC at discharge were statistically significant predictors of readmission. In fact, Goonewardena and colleagues demonstrated that patients who required readmission had an enlarged IVC at discharge nearly 3 times more frequently (21% vs. 61%, P < .001) and abnormal IVC collapsibility 1.5 times more frequently (41% vs. 71%, P = .01) as compared with patients who remained out of the hospital.9
Similarly, a subsequent trial looked at whether IVC size on admission was of prognostic importance in patients hospitalized for ADHF and showed that admission IVC diameter was an independent predictor of both 90-day mortality (hazard ratio, 5.88; 95% confidence interval, 1.21-28.10; P = .025) and 90-day readmission (HR, 3.20; 95% CI, 1.24-8.21; P = .016).10 Additionally, LUS heart failure assessment for pulmonary congestion by counting B lines also showed that having more than 15 B lines prior to discharge was an independent predictor of readmission for ADHF at 6 months (HR, 11.74; 95% CI, 1.30-106.16).11
A challenge of POCUS: Obtaining competency
As previously noted, there are not yet any established standards for training and assessing hospitalists in POCUS. The SHM Position Statement on POCUS recommends the following criteria for training5: the training environment should be similar to the location in which the trainee will practice, training and feedback should occur in real time, the trainee should be taught specific applications of POCUS (such as cardiac US, LUS, and IVC US) as each application comes with unique skills and knowledge, clinical competence must be achieved and demonstrated, and continued education and feedback are necessary once competence is obtained.12 SHM recommends residency-based training pathways, training through a local or national program such as the SHM POCUS certificate program, or training through other medical societies for hospitalists already in practice.
Application of the data to our original case
Targeted POCUS using the LuCUS protocol is performed and reveals three B lines in two lung zones bilaterally, moderate bilateral pleural effusions, EF 20%, and a noncollapsible IVC leading to a diagnosis of ADHF. Her ADHF is treated with intravenous diuresis. She is continued on her chronic maintenance chronic obstructive pulmonary disorder regimen but does not receive steroids, avoiding hyperglycemia that has complicated prior admissions. Over the next few days her respiratory and cardiac status is monitored using POCUS to assess her response to therapy and titrate her diuretics to her true dry weight, which was several pounds lower than her previously assumed dry weight. At discharge she is instructed to use the new dry weight which may avoid readmissions for HF.
Bottom line
POCUS improves diagnostic accuracy and facilitates volume assessment and management in acute decompensated heart failure.
Dr. Farber is a medical instructor at Duke University and hospitalist at Duke Regional Hospital, both in Durham, N.C. Dr. Marcantonio is a medical instructor in the department of internal medicine and department of pediatrics at Duke University and hospitalist at Duke University Hospital and Duke Regional Hospital. Dr. Stafford and Dr. Brooks are assistant professors of medicine and hospitalists at Duke Regional Hospital. Dr. Wachter is associate medical director at Duke Regional Hospital and assistant professor at Duke University. Dr. Menon is a hospitalist at Duke University. Dr. Sharma is associate medical director for clinical education at Duke Regional Hospital and associate professor of medicine at Duke University.
References
1. Pivetta E et al. Lung ultrasound integrated with clinical assessment for the diagnosis of acute decompensated heart failure in the emergency department: A randomized controlled trial. Eur J Heart Fail. 2019 Jun;21(6):754-66. doi: 10.1002/ejhf.1379.
2. Pivetta E et al. Lung ultrasound-implemented diagnosis of acute decompensated heart failure in the ED: A SIMEU multicenter study. Chest. 2015;148(1):202-10. doi: 10.1378/chest.14-2608.
3. Anderson KL et al. Diagnosing heart failure among acutely dyspneic patients with cardiac, inferior vena cava, and lung ultrasonography. Am J Emerg Med. 2013;31:1208-14. doi: 10.1016/j.ajem.2013.05.007.
4. Russell FM et al. Diagnosing acute heart failure in patients with undifferentiated dyspnea: A lung and cardiac ultrasound (LuCUS) protocol. Acad Emerg Med. 2015;22(2):182-91. doi:10.1111/acem.12570.
5. Maw AM et al. Diagnostic accuracy of point-of-care lung ultrasonography and chest radiography in adults with symptoms suggestive of acute decompensated heart failure: A systematic review and meta-analysis. JAMA Netw Open. 2019 Mar 1;2(3):e190703. doi:10.1001/jamanetworkopen.2019.0703.
6. Volpicelli G et al. Bedside ultrasound of the lung for the monitoring of acute decompensated heart failure. Am J Emerg Med. 2008 Jun;26(5):585-91. doi:10.1016/j.ajem.2007.09.014.
7. Mozzini C et al. Lung ultrasound in internal medicine efficiently drives the management of patients with heart failure and speeds up the discharge time. Intern Emerg Med. 2018 Jan;13(1):27-33. doi: 10.1007/s11739-017-1738-1.
8. Laffin LJ et al. Focused cardiac ultrasound as a predictor of readmission in acute decompensated heart failure. Int J Cardiovasc Imaging. 2018;34(7):1075-9. doi:10.1007/s10554-018-1317-1.
9. Goonewardena SN et al. Comparison of hand-carried ultrasound assessment of the inferior vena cava and N-terminal pro-brain natriuretic peptide for predicting readmission after hospitalization for acute decompensated heart failure. JACC Cardiovasc Imaging. 2008;1(5):595-601. doi:10.1016/j.jcmg.2008.06.005.
10. Cubo-Romano P et al. Admission inferior vena cava measurements are associated with mortality after hospitalization for acute decompensated heart failure. J Hosp Med. 2016 Nov;11(11):778-84. doi: 10.1002/jhm.2620.
11. Gargani L et al. Persistent pulmonary congestion before discharge predicts rehospitalization in heart failure: A lung ultrasound study. Cardiovasc Ultrasound. 2015 Sep 4;13:40. doi: 10.1186/s12947-015-0033-4.
12. Soni NJ et al. Point-of-care ultrasound for hospitalists: A Position Statement of the Society of Hospital Medicine. J Hosp Med. 2019 Jan 2;14:E1-6. doi: 10.12788/jhm.3079.
Key points
- Studies have found POCUS improves the diagnosis of acute decompensated heart failure in patients presenting with dyspnea.
- Daily evaluation with POCUS has decreased length of stay in acute decompensated heart failure.
- Credentialing requirements for hospitalists to use POCUS for clinical care vary by hospital.
Additional reading
Maw AM and Soni NJ. Annals for hospitalists inpatient notes – why should hospitalists use point-of-care ultrasound? Ann Intern Med. 2018 Apr 17;168(8):HO2-HO3. doi: 10.7326/M18-0367.
Lewiss RE. “The ultrasound looked fine”: Point of care ultrasound and patient safety. AHRQ’s Patient Safety Network. WebM&M: Case Studies. 2018 Jul 1. https://psnet.ahrq.gov/web-mm/ultrasound-looked-fine-point-care-ultrasound-and-patient-safety.
Quiz: Testing your POCUS knowledge
POCUS is increasingly prevalent in hospital medicine, but use varies among different disease processes. Which organ system ultrasound or lab test would be most helpful in the following scenario?
An acutely dyspneic patient with no past medical history presents to the ED. Chest x-ray is equivocal. Of the following, which study best confirms a diagnosis of acute decompensated heart failure?
A. Brain natriuretic peptide
B. Point-of-care cardiac ultrasound
C. Point-of-care lung ultrasound
D. Point-of-care inferior vena cava ultrasound
Answer
C. Point-of-care lung ultrasound
Multiple studies, including three systematic reviews, have shown that point-of-care lung ultrasound has high sensitivity and specificity to evaluate for B lines as a marker for cardiogenic pulmonary edema. Point-of-care ultrasound of ejection fraction and inferior vena cava have not been evaluated by systematic review although one randomized, controlled trial showed that an EF less than 45% had 74% specificity and 77% sensitivity and IVC collapsibility index less than 20% had an 86% specificity and 52% sensitivity for detection of acute decompensated heart failure. This same study showed that the combination of cardiac, lung, and IVC point-of-care ultrasound had 100% specificity for diagnosing acute decompensated heart failure. In the future, health care providers could rely on this multiorgan evaluation with point-of-care ultrasound to confirm a diagnosis of acute decompensated heart failure in a dyspneic patient.
Case
A 65-year-old woman presents to the emergency department with a chief complaint of shortness of breath for 3 days. Medical history is notable for moderate chronic obstructive pulmonary disorder, systolic heart failure with last known ejection fraction (EF) of 35% and type 2 diabetes complicated by hyperglycemia when on steroids. You are talking the case over with colleagues and they suggest point-of-care ultrasound (POCUS) would be useful in her case.
Brief overview of the issue
Once mainly used by ED and critical care physicians, POCUS is now a tool that many hospitalists are using at the bedside. POCUS differs from traditional comprehensive ultrasounds in the following ways: POCUS is designed to answer a specific clinical question (as opposed to evaluating all organs in a specific region), POCUS exams are performed by the clinician who is formulating the clinical question (as opposed to by a consultative service such as cardiology and radiology), and POCUS can evaluate multiple organ systems (such as by evaluating a patient’s heart, lungs, and inferior vena cava to determine the etiology of hypoxia).
Hospitalist use of POCUS may include guiding procedures, aiding in diagnosis, and assessing response to treatment. Many high-quality studies support the use of POCUS, showing that it can decrease medical errors, speed diagnosis, and complement or replace more advanced imaging.
A challenge of POCUS is that it is user dependent, and there are no established standards for training hospitalists in POCUS. As the Society of Hospital Medicine (SHM) position statement on POCUS points out, there is a significant difference between the skill level required to obtain a certificate of completion for POCUS training and that required for a certificate of competency in POCUS. It is therefore recommended that hospitalists work with local credentialing committees to delineate the requirements for POCUS use.
Overview of the data
POCUS for initial assessment and diagnosis of heart failure (HF)
Use of POCUS in cases of suspected HF includes examination of the heart, lungs, and inferior vena cava (IVC). Cardiac ultrasound provides an estimated ejection fraction. Lung ultrasound (LUS) functions to examine for B lines and pleural effusions. The presence of more than three B lines per thoracic zone bilaterally suggests cardiogenic pulmonary edema. Scanning the IVC provides a noninvasive way to assess volume status and is especially helpful when body habitus prevents accurate assessment of jugular venous pressure.
Several studies have addressed the utility of bedside ultrasound in the initial assessment or diagnosis of acute decompensated heart failure (ADHF) in patients presenting with dyspnea in emergency or inpatient settings. Positive B lines are a useful finding, with high sensitivities, high specificities, and positive likelihood ratios. One large multicenter prospective study found LUS to have a sensitivity of 90.5%, specificity of 93.5%, and positive and negative LRs of 14.0 and 0.10, respectively.1 Another large multicenter prospective cohort study showed that LUS was more sensitive and more specific than chest x-ray (CXR) and brain natriuretic peptide in detecting ADHF.2 Additional POCUS findings that have shown relatively high sensitivities and specificities in the initial diagnosis of ADHF include pleural effusion, reduced left ventricular ejection fraction (LVEF), increased left ventricular end-diastolic dimension, and jugular venous distention.
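The likelihood ratios cited above follow directly from the reported sensitivity and specificity. As a quick arithmetic check, using the values from the multicenter LUS study:

```python
# Likelihood ratios from sensitivity and specificity:
#   LR+ = sensitivity / (1 - specificity)
#   LR- = (1 - sensitivity) / specificity
sensitivity = 0.905  # 90.5%, as reported for LUS
specificity = 0.935  # 93.5%, as reported for LUS

lr_positive = sensitivity / (1 - specificity)
lr_negative = (1 - sensitivity) / specificity

print(f"LR+ = {lr_positive:.1f}")  # ~13.9, consistent with the reported 14.0
print(f"LR- = {lr_negative:.2f}")  # 0.10, as reported
```

The small difference in LR+ (13.9 vs. the reported 14.0) reflects rounding of the published sensitivity and specificity.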
Data also exist on assessments of ADHF using combinations of POCUS findings; for example, lung and cardiac ultrasound (LuCUS) protocols include an evaluation for B lines, assessment of IVC size and collapsibility, and determination of LVEF, although these have mainly been examined in ED patients. In patients presenting to the ED with undifferentiated dyspnea, one such study showed a specificity of 100% when a LuCUS protocol was used to diagnose ADHF, while another showed that use of a LuCUS protocol changed management in 47% of patients.3,4 Of note, although each LuCUS protocol integrated lung findings, IVC collapsibility, and LVEF, the exact protocols varied by institution. Finally, multiple studies have shown that LUS used in addition to the standard workup of history and physical examination, labs, and electrocardiogram increases diagnostic accuracy.2,5
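As an illustration only, a LuCUS-style assessment can be sketched as a simple conjunction of findings. The thresholds below (reduced EF below 45%, IVC collapsibility index below 20%, B lines in at least two zones bilaterally) are drawn from the studies cited in this review, but, as noted, actual protocols vary by institution; the function name and structure are hypothetical.

```python
def lucus_suggests_adhf(bilateral_zones_with_b_lines: int,
                        lvef_percent: float,
                        ivc_collapsibility_percent: float) -> bool:
    """Illustrative LuCUS-style rule: all three findings together suggest ADHF.

    Thresholds are taken from the cited ED studies; real protocols
    differ by institution and may weight findings differently.
    """
    positive_lung = bilateral_zones_with_b_lines >= 2   # B lines in >= 2 zones bilaterally
    reduced_ef = lvef_percent < 45                      # reduced LVEF
    plethoric_ivc = ivc_collapsibility_percent < 20     # poorly collapsible IVC
    return positive_lung and reduced_ef and plethoric_ivc

# Example: B lines in 2 bilateral zones, EF 20%, IVC collapsibility 10%
print(lucus_suggests_adhf(2, 20, 10))   # True
print(lucus_suggests_adhf(0, 60, 50))   # False
```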
Using POCUS to guide diuretic therapy in HF
To date, multiple small studies have examined the utility of daily POCUS in hospitalized patients with ADHF to assess response to treatment and guide diuresis, looking for a reduction in B lines on LUS or a change in IVC size or collapsibility. Volpicelli and colleagues showed that daily LUS was at least as good as daily CXR in monitoring response to therapy.6 Similarly, Mozzini and colleagues performed a randomized controlled trial of 120 patients admitted for ADHF, randomized to a CXR group (CXR on admission and discharge) or a LUS group (LUS at admission, 24 hours, 48 hours, 72 hours, and discharge).7 The LUS group underwent a significantly higher number of diuretic dose adjustments than the CXR group (P < .001) and had a modest improvement in length of stay (LOS): median LOS was 8 days in the CXR group (range, 4-17 days) versus 7 days in the LUS group (range, 3-10 days; P < .001).
The impact of POCUS on length of stay (LOS) and readmissions
There is increasing data that POCUS can have meaningful impacts on patient-centered outcomes (morbidity, mortality, and readmission) while exposing patients to minimal discomfort, no venipuncture, and no radiation. First, multiple studies examined whether focused cardiac ultrasound of the IVC, as a marker of volume status, could predict readmission in patients hospitalized for ADHF.8,9 Both trials showed that a plethoric, noncollapsible IVC at discharge was a statistically significant predictor of readmission. In fact, Goonewardena and colleagues demonstrated that patients who required readmission had an enlarged IVC at discharge nearly 3 times more frequently (61% vs. 21%, P < .001) and abnormal IVC collapsibility 1.5 times more frequently (71% vs. 41%, P = .01), compared with patients who remained out of the hospital.9
Similarly, a subsequent trial looked at whether IVC size on admission was of prognostic importance in patients hospitalized for ADHF and showed that admission IVC diameter was an independent predictor of both 90-day mortality (hazard ratio, 5.88; 95% confidence interval, 1.21-28.10; P = .025) and 90-day readmission (HR, 3.20; 95% CI, 1.24-8.21; P = .016).10 Additionally, LUS heart failure assessment for pulmonary congestion by counting B lines also showed that having more than 15 B lines prior to discharge was an independent predictor of readmission for ADHF at 6 months (HR, 11.74; 95% CI, 1.30-106.16).11
A challenge of POCUS: Obtaining competency
As previously noted, there are not yet established standards for training and assessing hospitalists in POCUS. The SHM position statement on POCUS recommends the following criteria for training: the training environment should be similar to the location in which the trainee will practice; training and feedback should occur in real time; the trainee should be taught specific applications of POCUS (such as cardiac US, LUS, and IVC US), as each application comes with unique skills and knowledge; clinical competence must be achieved and demonstrated; and continued education and feedback are necessary once competence is obtained.12 SHM recommends residency-based training pathways, training through a local or national program such as the SHM POCUS certificate program, or training through other medical societies for hospitalists already in practice.
Application of the data to our original case
Targeted POCUS using the LuCUS protocol reveals three B lines in two lung zones bilaterally, moderate bilateral pleural effusions, an EF of 20%, and a noncollapsible IVC, leading to a diagnosis of ADHF. Her ADHF is treated with intravenous diuresis. She is continued on her maintenance chronic obstructive pulmonary disease regimen but does not receive steroids, avoiding the hyperglycemia that has complicated prior admissions. Over the next few days, her respiratory and cardiac status is monitored with POCUS to assess her response to therapy and titrate her diuretics to her true dry weight, which is several pounds lower than previously assumed. At discharge, she is instructed to use the new dry weight, which may help prevent readmission for HF.
Bottom line
POCUS improves diagnostic accuracy and facilitates volume assessment and management in acute decompensated heart failure.
Dr. Farber is a medical instructor at Duke University and hospitalist at Duke Regional Hospital, both in Durham, N.C. Dr. Marcantonio is a medical instructor in the department of internal medicine and department of pediatrics at Duke University and hospitalist at Duke University Hospital and Duke Regional Hospital. Dr. Stafford and Dr. Brooks are assistant professors of medicine and hospitalists at Duke Regional Hospital. Dr. Wachter is associate medical director at Duke Regional Hospital and assistant professor at Duke University. Dr. Menon is a hospitalist at Duke University. Dr. Sharma is associate medical director for clinical education at Duke Regional Hospital and associate professor of medicine at Duke University.
References
1. Pivetta E et al. Lung ultrasound integrated with clinical assessment for the diagnosis of acute decompensated heart failure in the emergency department: A randomized controlled trial. Eur J Heart Fail. 2019 Jun;21(6):754-66. doi: 10.1002/ejhf.1379.
2. Pivetta E et al. Lung ultrasound-implemented diagnosis of acute decompensated heart failure in the ED: A SIMEU multicenter study. Chest. 2015;148(1):202-10. doi: 10.1378/chest.14-2608.
3. Anderson KL et al. Diagnosing heart failure among acutely dyspneic patients with cardiac, inferior vena cava, and lung ultrasonography. Am J Emerg Med. 2013;31:1208-14. doi: 10.1016/j.ajem.2013.05.007.
4. Russell FM et al. Diagnosing acute heart failure in patients with undifferentiated dyspnea: A lung and cardiac ultrasound (LuCUS) protocol. Acad Emerg Med. 2015;22(2):182-91. doi: 10.1111/acem.12570.
5. Maw AM et al. Diagnostic accuracy of point-of-care lung ultrasonography and chest radiography in adults with symptoms suggestive of acute decompensated heart failure: A systematic review and meta-analysis. JAMA Netw Open. 2019 Mar 1;2(3):e190703. doi: 10.1001/jamanetworkopen.2019.0703.
6. Volpicelli G et al. Bedside ultrasound of the lung for the monitoring of acute decompensated heart failure. Am J Emerg Med. 2008 Jun;26(5):585-91. doi: 10.1016/j.ajem.2007.09.014.
7. Mozzini C et al. Lung ultrasound in internal medicine efficiently drives the management of patients with heart failure and speeds up the discharge time. Intern Emerg Med. 2018 Jan;13(1):27-33. doi: 10.1007/s11739-017-1738-1.
8. Laffin LJ et al. Focused cardiac ultrasound as a predictor of readmission in acute decompensated heart failure. Int J Cardiovasc Imaging. 2018;34(7):1075-9. doi: 10.1007/s10554-018-1317-1.
9. Goonewardena SN et al. Comparison of hand-carried ultrasound assessment of the inferior vena cava and N-terminal pro-brain natriuretic peptide for predicting readmission after hospitalization for acute decompensated heart failure. JACC Cardiovasc Imaging. 2008;1(5):595-601. doi: 10.1016/j.jcmg.2008.06.005.
10. Cubo-Romano P et al. Admission inferior vena cava measurements are associated with mortality after hospitalization for acute decompensated heart failure. J Hosp Med. 2016 Nov;11(11):778-84. doi: 10.1002/jhm.2620.
11. Gargani L et al. Persistent pulmonary congestion before discharge predicts rehospitalization in heart failure: A lung ultrasound study. Cardiovasc Ultrasound. 2015 Sep 4;13:40. doi: 10.1186/s12947-015-0033-4.
12. Soni NJ et al. Point-of-care ultrasound for hospitalists: A Position Statement of the Society of Hospital Medicine. J Hosp Med. 2019 Jan 2;14:E1-6. doi: 10.12788/jhm.3079.
Key points
- Studies have found POCUS improves the diagnosis of acute decompensated heart failure in patients presenting with dyspnea.
- Daily evaluation with POCUS has decreased length of stay in acute decompensated heart failure.
- Credentialing requirements for hospitalists to use POCUS for clinical care vary by hospital.
Additional reading
Maw AM and Soni NJ. Annals for hospitalists inpatient notes – why should hospitalists use point-of-care ultrasound? Ann Intern Med. 2018 Apr 17;168(8):HO2-HO3. doi: 10.7326/M18-0367.
Lewiss RE. “The ultrasound looked fine”: Point of care ultrasound and patient safety. AHRQ’s Patient Safety Network. WebM&M: Case Studies. 2018 Jul 1. https://psnet.ahrq.gov/web-mm/ultrasound-looked-fine-point-care-ultrasound-and-patient-safety.
‘Striking’ difference in adverse events in women with Watchman LAAO
Women have more in-hospital complications than men and double the risk for major adverse events after left atrial appendage occlusion (LAAO) with the Watchman device, according to new National Cardiovascular Data Registry (NCDR) LAAO Registry data.
In-hospital mortality was also twofold higher among women than men, and hospital stay was longer. Even after adjustment for potential confounders, these relationships persisted, Douglas Darden, MD, of the University of California, San Diego, and colleagues reported online in JAMA Cardiology.
“I think this article certainly highlights – specific to a procedure that has gained more popularity and will become more commonplace in cardiovascular practice – that operators and patients need to pay more attention [to the fact] that women may be at more risk for adverse events and mortality,” senior author Jonathan Hsu, MD, also from UCSD, told this news organization.
Possible explanations for the disparities include anatomic differences between the sexes, such as smaller vessel diameter, a thinner myocardial wall, and a more friable left atrial appendage in women; increased frailty; and clinician inexperience, the authors suggest.
“It could be something as simple or as specific as thinness of tissue or friability of tissue that may predispose women more than men to perforation or other risks that may put them at risk for adverse events specifically,” Dr. Hsu said.
Commenting further, he said, “I think we would be remiss not to mention the fact that part of this association may unfortunately be a disparity in care that women as a specific sex may receive,” he said.
Indeed, postimplantation women had higher adjusted odds of receiving a direct oral anticoagulant only (odds ratio, 1.07; P = .02) and warfarin only (OR, 1.12; P < .001), and lower odds of receiving clinical trial-recommended combined oral anticoagulants plus single antiplatelet therapy (OR, 0.91; P < .001).
“This article highlights the fact that in all aspects we need to pay attention that women receive as high-level, guideline-driven care as men,” Dr. Hsu said.
First author Dr. Darden pointed out in an email that women suffer disproportionately from atrial fibrillation (AFib), compared with men, with worse quality of life and higher risk for stroke. So “it’s only natural to seek further treatment in order to decrease that risk, specifically LAAO with Watchman.”
Despite the fact that women are known to be at greater risk for adverse events after invasive procedures, including AFib ablation and transcatheter aortic valve replacement (TAVR), little is known about sex differences with LAAO, as the LAAO clinical trials included only about 30% women, he said.
Two 2021 papers zeroing in on these sex differences produced mixed results. An American report in roughly 9,200 patients reported a higher risk for major in-hospital events in women after receipt of Watchman implants, whereas a German report found similar safety and efficacy among 387 consecutive patients, regardless of sex.
The present study involved 20,388 women and 28,969 men implanted with the Watchman device between January 2016 and June 2019 in the NCDR LAAO Registry, the largest LAAO registry with adjudicated events; participation is mandated for Medicare coverage.
The women were older (mean age, 76.5 vs. 75.8 years), had a higher mean CHA2DS2-VASc score (5.3 vs. 4.5), and were more likely to have a high fall risk as an indication for LAAO (39.8% vs. 33.5%).
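For context, the CHA2DS2-VASc score cited above is a standard additive stroke-risk score in atrial fibrillation. The sketch below uses the conventional point assignments; it is illustrative and not drawn from the study itself, and the function name is our own.

```python
def cha2ds2_vasc(age: int, female: bool, chf: bool, hypertension: bool,
                 diabetes: bool, stroke_tia_thromboembolism: bool,
                 vascular_disease: bool) -> int:
    """Conventional CHA2DS2-VASc point assignments (maximum score 9)."""
    score = 0
    score += 1 if chf else 0                              # C: congestive heart failure
    score += 1 if hypertension else 0                     # H: hypertension
    score += 2 if age >= 75 else (1 if age >= 65 else 0)  # A2 / A: age bands
    score += 1 if diabetes else 0                         # D: diabetes
    score += 2 if stroke_tia_thromboembolism else 0       # S2: prior stroke/TIA/thromboembolism
    score += 1 if vascular_disease else 0                 # V: vascular disease
    score += 1 if female else 0                           # Sc: sex category (female)
    return score

# A hypothetical 76-year-old woman with hypertension and diabetes:
print(cha2ds2_vasc(76, female=True, chf=False, hypertension=True,
                   diabetes=True, stroke_tia_thromboembolism=False,
                   vascular_disease=False))  # 5, close to the cohort mean of 5.3
```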
Furthermore, women were more likely than men to have paroxysmal atrial fibrillation and uncontrolled hypertension, but less likely to have congestive heart failure, diabetes, and coronary artery disease.
After multivariable adjustment, all but one of the primary outcomes was significantly worse in women versus men:
- Aborted or canceled procedure: 3.0% vs. 2.9% (OR, 1.01; P = .87)
- Any adverse event: 6.3% vs. 3.9% (OR, 1.63; P < .001)
- Major adverse event: 4.1% vs. 2.0% (OR, 2.06; P < .001)
- Hospital stay more than 1 day: 16.0% vs. 11.6% (OR, 1.46; P < .001)
- Death: 0.3% (58 deaths) vs. 0.1% (37 deaths) (OR, 2.01; P = .001).
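The rounded death percentages in the list above can be recovered from the raw counts and the cohort sizes reported earlier (20,388 women; 28,969 men); a quick check:

```python
# Raw in-hospital death counts divided by cohort sizes from the registry
deaths_women, n_women = 58, 20388
deaths_men, n_men = 37, 28969

rate_women = deaths_women / n_women   # ~0.28%, reported as 0.3%
rate_men = deaths_men / n_men         # ~0.13%, reported as 0.1%

print(f"{rate_women:.2%} vs. {rate_men:.2%}")  # 0.28% vs. 0.13%
```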
The authors point out that device-related adverse events are lower than in the PROTECT-AF and PREVAIL clinical trials of the Watchman, with 0.8% of patients developing a pericardial effusion requiring drainage and 1.2% having major bleeding, down from highs of 4.8% and 3.5%, respectively, in PROTECT-AF.
Although promising overall, adverse events among women were driven by higher rates of both pericardial effusion requiring draining (1.2% vs. 0.5%; P < .001) and major bleeding (1.7% vs. 0.8%; P < .001).
Commenting for this news organization, John Mandrola, MD, of Baptist Health, Louisville, Ky., expressed concern that, despite its increasing popularity, the rate of serious complications for the preventive procedure appears to be increasing. “That’s peculiar because you’d expect increased experience and device iterations to decrease complications. And the NCDR data surely undercounts the real rate of adverse events because it only includes in-hospital complications.”
Based on the current data, he observed that there’s a 3% chance for a major complication overall, with the typical female Watchman patient facing a 6% chance of any adverse event and 4% risk for a major adverse event during her hospital stay alone.
“The striking difference in complications in women is a super important observation because higher upfront risk has an even more negative effect on the harm-benefit calculus of this procedure,” Dr. Mandrola said.
“Some of the increased harm in women may have been due to the slightly higher rate of comorbid conditions, but that is real-life,” he said. “Registry data like this is extremely valuable because, unlike the carefully selected randomized trial, registries reflect what is actually being done in practice.”
Dr. Hsu agreed that the absolute numbers are concerning. Nevertheless, “it doesn’t necessarily sound an alarm that our adverse events are worse in contemporary practice or that adverse events continue to increase. But, in general, it just points to the fact that there is this inherent larger risk in women, compared with men, and that we need to, first, figure out why, and second, we need to figure out how to improve.”
Strategies to mitigate procedural risk include ultrasound-guided venous access, preprocedural imaging, improved proficiency with LAAO devices, and continued development of safer devices, the authors note.
Despite the more generalizable nature of registry data, “the results of this study should not result in differing sex-based thresholds for LAAO implant,” the authors conclude.
The study was supported by the American College of Cardiology Foundation’s NCDR. Dr. Hsu reports financial relationships with Medtronic, Boston Scientific, Abbott, Biotronik, Janssen Pharmaceutical, Bristol Myers Squibb, Pfizer, Biosense Webster, Altathera Pharmaceuticals, and Zoll Medical and holding equity interest in Acutus Medical and Vektor Medical outside the submitted work. Dr. Darden reports no relevant financial relationships. Dr. Mandrola is a regular contributor to Medscape Cardiology.
A version of this article first appeared on Medscape.com.
Women have more in-hospital complications than men and double the risk for major adverse events after left atrial appendage occlusion (LAAO) with the Watchman device, according to new National Cardiovascular Data Registry (NCDR) LAAO Registry data.
In-hospital mortality was also twofold higher among women than men and hospital stay was longer. Even after adjustment for potential confounders, these relationships still exist, Douglas Darden, MD, University of California, San Diego, and colleagues reported online in JAMA Cardiology.
“I think this article certainly highlights – specific to a procedure that has gained more popularity and will become more commonplace in cardiovascular practice – that operators and patients need to pay more attention [to the fact] that women may be at more risk for adverse events and mortality,” senior author Jonathan Hsu, MD, also from UCSD, told this news organization.
Possible explanations for the disparities include anatomic differences between the sexes, such as smaller vessel diameter, thinner myocardial wall, and a more friable LLA in women; increased frailty; and clinician inexperience, the authors suggest.
“It could be something as simple or as specific as thinness of tissue or friability of tissue that may predispose women more than men to perforation or other risks that may put them at risk for adverse events specifically,” Dr. Hsu said.
Commenting further, he said, “I think we would be remiss not to mention the fact that part of this association may unfortunately be a disparity in care that women as a specific sex may receive,” he said.
Indeed, postimplantation women had higher adjusted odds of receiving a direct oral anticoagulant only (odds ratio, 1.07, P = .02) and warfarin only (OR, 1.12; P < .001), and lower odds of receiving clinical trial-recommended combined oral anticoagulants plus single antiplatelet therapy (OR, 0.91; P < .001).
“This article highlights the fact that in all aspects we need to pay attention that women receive as high-level, guideline-driven care as men,” Dr. Hsu said.
First author Dr. Darden pointed out in an email that women suffer disproportionately from atrial fibrillation (AFib), compared with men, with worse quality of life and higher risk for stroke. So “it’s only natural to seek further treatment in order to decrease that risk, specifically LAAO with Watchman.”
Despite the fact that women are known to be at greater risk for adverse events after invasive procedures, including AFib ablation and TAVR, little is known about sex differences with LAAO, as the LAAO clinical trials only included about 30% women, he said.
Two 2021 papers zeroing in on these sex differences produced mixed results. An American report in roughly 9,200 patients reported a higher risk for major in-hospital events in women after receipt of Watchman implants, whereas a German report found similar safety and efficacy among 387 consecutive patients, regardless of sex.
The present study involved 20,388 women and 28,969 men implanted with the Watchman device between January 2016 and June 2019 in the NCDR registry, the largest LAAO registry with adjudicated events with participation mandated for Medicare coverage.
The women were older (mean age, 76.5 vs. 75.8 years), had a higher mean CHA2DS2-VASc score (5.3 vs. 4.5), and were more likely to have a high fall risk as an indication for LAAO (39.8% vs. 33.5%).
Furthermore, women were more likely than men to have paroxysmal atrial fibrillation and uncontrolled hypertension, but less likely to have congestive heart failure, diabetes, and coronary artery disease.
After multivariable adjustment, all but one of the primary outcomes was significantly worse in women versus men:
- Aborted or canceled procedure: 3.0% vs. 2.9% (OR, 1.01; P = .87)
- Any adverse event: 6.3% vs. 3.9% (OR, 1.63; P < .001)
- Major adverse event: 4.1% vs. 2.0% (OR, 2.06; P < .001)
- Hospital stay more than 1 day: 16.0% vs. 11.6% (OR, 1.46; P < .001)
- Death: 58/0.3% vs. 37/0.1% (OR, 2.01; P = .001).
The authors point out that device-related adverse events are lower than in the PROTECT-AF and PREVAIL clinical trials of the Watchman, with 0.8% of patients developing a pericardial effusion requiring drainage and 1.2% having major bleeding, down from highs of 4.8% and 3.5%, respectively, in PROTECT-AF.
Although promising overall, adverse events among women were driven by higher rates of both pericardial effusion requiring draining (1.2% vs. 0.5%; P < .001) and major bleeding (1.7% vs. 0.8%; P < .001).
Commenting for this news organization John Mandrola, MD, Baptist Health, Louisville, Kentucky, expressed concern that despite its increasing popularity, the rate of serious complications appears to be increasing for the preventive procedure. “That’s peculiar because you’d expect increased experience and device iterations to decrease complications. And the NCDR data surely undercounts the real rate of adverse events because it only includes in-hospital complications.”
Based on the current data, he observed that there’s a 3% chance for a major complication overall, with the typical female Watchman patient facing a 6% chance of any adverse event and 4% risk for a major adverse event during her hospital stay alone.
“The striking difference in complications in women is a super important observation because higher upfront risk has an even more negative effect on the harm-benefit calculus of this procedure,” Dr. Mandrola said.
“Some of the increased harm in women may have been due to the slightly higher rate of comorbid conditions, but that is real-life,” he said. “Registry data like this is extremely valuable because, unlike the carefully selected randomized trial, registries reflect what is actually being done in practice.”
Dr. Hsu agreed that the absolute numbers are concerning. Nevertheless, “it doesn’t necessarily sound an alarm that our adverse events are worse in contemporary practice or that adverse events continue to increase. But, in general, it just points to the fact that there is this inherent larger risk in women, compared with men, and that we need to, first, figure out why, and second, we need to figure out how to improve.”
Strategies to mitigate procedural risk included ultrasound-guided venous access, preprocedural imaging, improved proficiency with LAAO devices, and continued development of safer devices, they note.
Despite the more generalizable nature of registry data, “the results of this study should not result in differing sex-based thresholds for LAAO implant,” the authors conclude.
The study was supported by the American College of Cardiology Foundation’s NCDR. Dr. Hsu reports financial relationships with Medtronic, Boston Scientific, Abbott, Biotronik, Janssen Pharmaceutical, Bristol Myers Squibb, Pfizer, Biosense Webster, Altathera Pharmaceuticals, and Zoll Medical and holding equity interest in Acutus Medical and Vektor Medical outside the submitted work. Dr. Darden reports no relevant financial relationships. Dr. Mandrola is a regular contributor to Medscape Cardiology.
A version of this article first appeared on Medscape.com.
Women have more in-hospital complications than men and double the risk for major adverse events after left atrial appendage occlusion (LAAO) with the Watchman device, according to new National Cardiovascular Data Registry (NCDR) LAAO Registry data.
In-hospital mortality was also twofold higher among women than men and hospital stay was longer. Even after adjustment for potential confounders, these relationships still exist, Douglas Darden, MD, University of California, San Diego, and colleagues reported online in JAMA Cardiology.
“I think this article certainly highlights – specific to a procedure that has gained more popularity and will become more commonplace in cardiovascular practice – that operators and patients need to pay more attention [to the fact] that women may be at more risk for adverse events and mortality,” senior author Jonathan Hsu, MD, also from UCSD, told this news organization.
Possible explanations for the disparities include anatomic differences between the sexes, such as smaller vessel diameter, thinner myocardial wall, and a more friable LLA in women; increased frailty; and clinician inexperience, the authors suggest.
“It could be something as simple or as specific as thinness of tissue or friability of tissue that may predispose women more than men to perforation or other risks that may put them at risk for adverse events specifically,” Dr. Hsu said.
Commenting further, he said, “I think we would be remiss not to mention the fact that part of this association may unfortunately be a disparity in care that women as a specific sex may receive,” he said.
Indeed, postimplantation women had higher adjusted odds of receiving a direct oral anticoagulant only (odds ratio, 1.07, P = .02) and warfarin only (OR, 1.12; P < .001), and lower odds of receiving clinical trial-recommended combined oral anticoagulants plus single antiplatelet therapy (OR, 0.91; P < .001).
“This article highlights the fact that in all aspects we need to pay attention that women receive as high-level, guideline-driven care as men,” Dr. Hsu said.
First author Dr. Darden pointed out in an email that women suffer disproportionately from atrial fibrillation (AFib), compared with men, with worse quality of life and higher risk for stroke. So “it’s only natural to seek further treatment in order to decrease that risk, specifically LAAO with Watchman.”
Despite the fact that women are known to be at greater risk for adverse events after invasive procedures, including AFib ablation and TAVR, little is known about sex differences with LAAO, as the LAAO clinical trials only included about 30% women, he said.
Two 2021 papers zeroing in on these sex differences produced mixed results. An American report in roughly 9,200 patients reported a higher risk for major in-hospital events in women after receipt of Watchman implants, whereas a German report found similar safety and efficacy among 387 consecutive patients, regardless of sex.
The present study involved 20,388 women and 28,969 men implanted with the Watchman device between January 2016 and June 2019 in the NCDR registry, the largest LAAO registry with adjudicated events with participation mandated for Medicare coverage.
The women were older (mean age, 76.5 vs. 75.8 years), had a higher mean CHA2DS2-VASc score (5.3 vs. 4.5), and were more likely to have a high fall risk as an indication for LAAO (39.8% vs. 33.5%).
Furthermore, women were more likely than men to have paroxysmal atrial fibrillation and uncontrolled hypertension, but less likely to have congestive heart failure, diabetes, and coronary artery disease.
After multivariable adjustment, all but one of the primary outcomes were significantly worse in women than in men:
- Aborted or canceled procedure: 3.0% vs. 2.9% (OR, 1.01; P = .87)
- Any adverse event: 6.3% vs. 3.9% (OR, 1.63; P < .001)
- Major adverse event: 4.1% vs. 2.0% (OR, 2.06; P < .001)
- Hospital stay more than 1 day: 16.0% vs. 11.6% (OR, 1.46; P < .001)
- Death: 0.3% (58 deaths) vs. 0.1% (37 deaths) (OR, 2.01; P = .001).
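For readers unfamiliar with the statistic, an odds ratio compares the odds of an event between two groups. A crude (unadjusted) calculation from the death counts above can be sketched in Python; note that the published OR of 2.01 is multivariable-adjusted, so the crude figure differs slightly:

```python
def odds_ratio(events_a, total_a, events_b, total_b):
    """Crude odds ratio for group A vs. group B from raw event counts."""
    odds_a = events_a / (total_a - events_a)  # odds = events / non-events
    odds_b = events_b / (total_b - events_b)
    return odds_a / odds_b

# In-hospital deaths: 58 of 20,388 women vs. 37 of 28,969 men
print(round(odds_ratio(58, 20388, 37, 28969), 2))  # → 2.23 (unadjusted)
```

The gap between the crude 2.23 and the adjusted 2.01 reflects the covariates (age, CHA2DS2-VASc score, comorbidities) the authors controlled for.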
The authors point out that device-related adverse events are lower than in the PROTECT-AF and PREVAIL clinical trials of the Watchman, with 0.8% of patients developing a pericardial effusion requiring drainage and 1.2% having major bleeding, down from highs of 4.8% and 3.5%, respectively, in PROTECT-AF.
Although promising overall, the excess of adverse events among women was driven by higher rates of both pericardial effusion requiring drainage (1.2% vs. 0.5%; P < .001) and major bleeding (1.7% vs. 0.8%; P < .001).
Commenting for this news organization, John Mandrola, MD, of Baptist Health, Louisville, Kentucky, expressed concern that, despite the procedure's increasing popularity, the rate of serious complications appears to be increasing. "That's peculiar because you'd expect increased experience and device iterations to decrease complications. And the NCDR data surely undercounts the real rate of adverse events because it only includes in-hospital complications."
Based on the current data, he observed that there’s a 3% chance for a major complication overall, with the typical female Watchman patient facing a 6% chance of any adverse event and 4% risk for a major adverse event during her hospital stay alone.
“The striking difference in complications in women is a super important observation because higher upfront risk has an even more negative effect on the harm-benefit calculus of this procedure,” Dr. Mandrola said.
“Some of the increased harm in women may have been due to the slightly higher rate of comorbid conditions, but that is real-life,” he said. “Registry data like this is extremely valuable because, unlike the carefully selected randomized trial, registries reflect what is actually being done in practice.”
Dr. Hsu agreed that the absolute numbers are concerning. Nevertheless, “it doesn’t necessarily sound an alarm that our adverse events are worse in contemporary practice or that adverse events continue to increase. But, in general, it just points to the fact that there is this inherent larger risk in women, compared with men, and that we need to, first, figure out why, and second, we need to figure out how to improve.”
Strategies to mitigate procedural risk included ultrasound-guided venous access, preprocedural imaging, improved proficiency with LAAO devices, and continued development of safer devices, they note.
Despite the more generalizable nature of registry data, “the results of this study should not result in differing sex-based thresholds for LAAO implant,” the authors conclude.
The study was supported by the American College of Cardiology Foundation’s NCDR. Dr. Hsu reports financial relationships with Medtronic, Boston Scientific, Abbott, Biotronik, Janssen Pharmaceutical, Bristol Myers Squibb, Pfizer, Biosense Webster, Altathera Pharmaceuticals, and Zoll Medical and holding equity interest in Acutus Medical and Vektor Medical outside the submitted work. Dr. Darden reports no relevant financial relationships. Dr. Mandrola is a regular contributor to Medscape Cardiology.
A version of this article first appeared on Medscape.com.
Colorectal cancer screening, 2021: An update
Colorectal cancer is a common disease that has a very lengthy natural history of progression from small (<8 mm) to large (≥8 mm) polyps, then to dysplasia, and eventually to invasive cancer. It is estimated that this progression takes 10 years.1 The long natural history from preneoplasia to cancer makes colorectal cancer an ideal target for screening. Screening for colorectal cancer is divided into two clinical pathways, screening for people at average risk and for those at high risk. Clinical factors that increase the risk of colorectal cancer are listed in TABLE 1. This editorial is focused on the clinical approach to screening for people at average risk for colorectal cancer.
Colorectal cancer is the second most common cause of cancer death
The top 6 causes of cancer death in the United States are2:
- lung cancer (23% of cancer deaths)
- colon and rectum (9%)
- pancreas (8%)
- female breast (7%)
- prostate (5%)
- liver/bile ducts (5%).
An estimated 147,950 people were diagnosed with colorectal cancer in 2020, including 17,930 people younger than 50 years of age.3 An estimated 53,200 people in the United States died of colorectal cancer in 2020, including 3,640 people younger than age 50.3 By contrast, the American Cancer Society estimates that, in 2021, cervical cancer will be diagnosed in 14,480 women and 4,290 women with the disease will die.4
According to a Centers for Disease Control and Prevention (CDC) study, among people 50 to 64 years of age, 63% report being up to date with colorectal cancer screening—leaving more than one-third not up to date with their screening.5 Among people aged 65 to 75, 79% report being up to date with colorectal cancer screening. Among those aged 50 to 64, those with health insurance were more likely to be up to date with screening than people without insurance—67% versus 33%, respectively. People with a household income greater than $75,000 and less than $35,000 reported up-to-date screening rates of 71% and 55%, respectively. Among people aged 50 to 64, non-Hispanic White and Black people reported similar rates of being up to date with colorectal screening (66% and 65%, respectively). Hispanic people, however, reported a significantly lower rate of being up to date with colorectal cancer screening (51%).5
A weakness of this CDC study is that the response rate from the surveyed population was less than 50%, raising questions about validity and generalizability of the reported results. Of note, other studies report that Black men may have lower rates of colorectal cancer screening than non-Black men.6 These data show that focused interventions to improve colorectal cancer screening are required for people 50 to 64 years of age, particularly among underinsured and some minority populations.
Inequitable health outcomes for colorectal cancer
The purpose of screening for cancer is to reduce the morbidity and mortality associated with the disease. Based on the Surveillance, Epidemiology and End Results (SEER) national reporting system, from 2014 to 2018 colorectal death rates per 100,000 adults were 18 for Black adults; 15.1 for American Indian/Alaska native adults; 13.6 for White non-Hispanic adults; 10.9 for White, Hispanic adults; and 9.4 for Asian/Pacific Islander adults.7 Lack of access to and a lower utilization rate of high-quality colon cancer screening modalities, for example colonoscopy, and a lower rate of optimal colon cancer treatment may account for the higher colorectal death rate among Black adults.8,9
Colorectal cancer screening should begin at age 45
In 2015 the Agency for Healthcare Research and Quality (AHRQ) published data showing that the benefit of initiating screening for colorectal cancer at 45 years of age outweighed the additional cost.10 In 2018, the American Cancer Society recommended that screening for colorectal cancer should begin at age 45.11 In 2021, after resisting the change for many years, the US Preventive Services Task Force (USPSTF) also recommended that screening for colorectal cancer should begin at 45.7 The new recommendation is based on statistical models that showed a significant increase in life-years gained at a small incremental cost. The USPSTF also recommended that clinicians and patients could consider discontinuing colorectal cancer screening at 75 years of age because the net benefit of continuing screening after age 75 is minimal.
Prior to 2021 the USPSTF recommended that screening be initiated at age 50. However, from 2010 to 2020 there was a significant increase in the percentage of new cases of colorectal cancer detected in people younger than 50. In 2010, colon and rectal cancer among people under 50 years of age accounted for 5% and 9% of all cases, respectively.12 In 2020, colon and rectal cancer in people younger than age 50 accounted for 11% and 15% of all cases, respectively.3
Options for colon cancer screening
There are many options for colorectal cancer screening (TABLE 2).10,13 Experts conclude that the best colorectal cancer screening test is the test that the patient will complete. Among options for screening, colonoscopy and the multitarget stool FIT-DNA test (Cologuard) have greater sensitivity for detecting colorectal precancer and cancer lesions compared with fecal immunochemical testing (FIT), computed tomography colonography imaging (CTC), and stool guaiac testing (see TABLE 2).
In my practice, I suggest patients use either colonoscopy (every 10 years) or the multitarget stool FIT-DNA test (every 1 to 3 years) for screening. Most of my patients select colonoscopy, but some prefer the multitarget stool FIT-DNA test because they fear the pre-colonoscopy bowel preparation and the risk of bowel perforation with colonoscopy. Most colonoscopy procedures are performed with sedation, requiring an adult to take responsibility for transporting the patient to their residence, adding complexity to the performance of colonoscopy. These two tests are discussed in more detail below.
Colonoscopy
Colonoscopy occupies a unique position among the options for colorectal cancer screening because it is both a screening test and the gold standard for diagnosis, based on histologic analysis of the polypoid tissue biopsied at the time of colonoscopy. For all other screening tests, if the test yields an abnormal result, it is necessary to perform a colonoscopy. Colonoscopy screening offers the advantage of “one and done for 10 years.” In my practice it is much easier to manage a test that is performed every 10 years than a test that should be performed annually.
Colonoscopy also accounts for most of the harms of colorectal screening because of serious procedure complications, including bowel perforation (1 in 2,000 cases) and major bleeding (1 in 500 cases).7
Multitarget stool FIT-DNA test (Cologuard)
The multitarget stool FIT-DNA test is a remarkable innovation in cancer screening, combining 3 independent biomarkers associated with precancerous lesions and colorectal cancer.14 The 3 test components are14:
- a fecal immunochemical test (FIT) for hemoglobin (which uses antibodies to detect hemoglobin)
- a test for epigenetic changes in the methylation pattern of promoter DNA, including the promoter regions on the N-Myc Downstream-Regulated Gene 4 (NDRG4) and Bone Morphogenetic Protein 3 (BMP3) genes
- a test for 7 gene mutations in the V-Ki-ras2 Kirsten rat sarcoma viral oncogene homolog (KRAS).
In addition, the amount of the beta-actin DNA present in the stool specimen is assessed and used as a quantitative control for the total amount of DNA in the specimen.
In one large clinical study, 9,989 people at average risk for colorectal cancer were screened with both a multitarget stool FIT-DNA test and a stool FIT test.15 Positive test results triggered a referral for colonoscopy. Among this cohort, 1% of participants were diagnosed with colorectal cancer and 7.6% with a precancerous lesion. The sensitivity of the multitarget stool FIT-DNA test and the FIT test for detecting colorectal cancer was 92.3% and 73.8%, respectively. The sensitivities of the multitarget stool FIT-DNA test and the FIT test for detecting precancerous lesions were 42.4% and 23.8%, respectively. The specificity of the FIT-DNA and FIT tests for detecting any cancer or precancerous lesion was 90% and 96.4%, respectively.15 The FIT test is less expensive than the multitarget stool FIT-DNA test. Eligible patients can order the FIT test through a Quest website.16 In June 2021 the published cost was $89 for the test plus a $6 physician fee. Most insurers will reimburse the expense of the test for eligible patients.
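The sensitivity and specificity percentages above come directly from counts of true and false test results. A minimal sketch of the arithmetic (the 60-of-65 cancer split is an illustrative assumption chosen to reproduce the 92.3% figure, not a number taken from the study):

```python
def sensitivity(true_pos, false_neg):
    """Fraction of people WITH disease whom the test correctly flags."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Fraction of people WITHOUT disease whom the test correctly clears."""
    return true_neg / (true_neg + false_pos)

# Illustrative: detecting 60 of 65 cancers yields the reported 92.3% sensitivity
print(round(sensitivity(60, 5) * 100, 1))   # → 92.3
# Illustrative: 900 of 1,000 disease-free patients testing negative → 90% specificity
print(round(specificity(900, 100) * 100, 1))  # → 90.0
```

The trade-off visible in the study's numbers is typical: the FIT-DNA test gains sensitivity over FIT alone at the cost of specificity, meaning more false-positive results that trigger colonoscopy referrals.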
The multitarget stool FIT-DNA test should be performed every 1 to 3 years. Unlike colonoscopy or CT colonography, the stool is collected at home and sent to a testing laboratory, saving the patient time and travel costs. A disadvantage of the test is that it is more expensive than FIT or guaiac testing. Eligible patients can request a test kit by completing a telemedicine visit through the Cologuard website.17 One website lists the cost of a Cologuard test at $599.18 This test is eligible for reimbursement by most insurers.
Ensure patients are informed of needed screening
Most obstetrician-gynecologists have many women in their practice who are aged 45 to 64, a key target group for colorectal cancer screening. The American Cancer Society and the USPSTF strongly recommend that people in this age range be screened for colorectal cancer. Given that one-third of people in this age range have not been screened, obstetrician-gynecologists can play an important role in reducing the health burden of the second most common cause of cancer death by ensuring that their patients are up to date with colorectal screening. ●
- Winawer SJ, Fletcher RH, Miller L, et al. Colorectal cancer screening, clinical guidelines and rationale. Gastroenterology. 1997;112:594. doi: 10.1053/gast.1997.v112.agast970594.
- Centers for Disease Control and Prevention website. An update on cancer deaths in the United States. Accessed July 14, 2021.
- Siegel RL, Miller KD, Goding SA, et al. Colorectal cancer statistics, 2020. CA Cancer J Clin. 2020;70:145-164. doi: 10.3322/caac.21601.
- American Cancer Society website. Key statistics for cervical cancer. https://www.cancer.org/cancer/cervical-cancer/about/key-statistics.html. Accessed July 14, 2021.
- Joseph DA, King JB, Dowling NF, et al. Vital signs: colorectal cancer screening test use, United States. Morb Mortal Wkly Rep. 2020;69:253-259.
- Rogers CR, Matthews P, Xu L, et al. Interventions for increasing colorectal cancer screening uptake among African-American men: a systematic review and meta-analysis. PLoS One. 2020;15:e0238354. doi: 10.1371/journal.pone.0238354.
- US Preventive Services Task Force. Screening for colorectal cancer: US Preventive Services Task Force recommendation statement. JAMA. 2021;325:1965-1977. doi: 10.1001/jama.2021.6238.
- Carethers JM, Doubeni CA. Causes of socioeconomic disparities in colorectal cancer and intervention framework and strategies. Gastroenterology. 2020;158:354-367. doi: 10.1053/j.gastro.2019.10.029.
- Rutter CM, Knudsen AB, Lin JS, et al. Black and White differences in colorectal cancer screening and screening outcomes: a narrative review. Cancer Epidemiol Biomarkers Prev. 2021;30:3-12. doi: 10.1158/1055-9965.EPI-19-1537.
- Zauber A, Knudsen A, Rutter CM, et al; Writing Committee of the Cancer Intervention and Surveillance Modeling Network (CISNET) Colorectal Cancer Working Group. Evaluating the benefits and harms of colorectal cancer screening strategies: a collaborative modeling approach. AHRQ Publication No. 14-05203-EF-2. Rockville, MD: Agency for Healthcare Research and Quality; October 2015. Accessed July 15, 2021.
- American Cancer Society website. Cancer screening guidelines by age. Accessed July 15, 2021.
- Bailey CE, Hu CY, You YN, et al. Increasing disparities in the age-related incidences of colon and rectal cancers in the United States, 1975-2010. JAMA Surg. 2015;150:17-22. doi: 10.1001/jamasurg.2014.1756.
- Knudsen AB, Zauber AG, Rutter CM, et al. Estimation of benefits, burden, and harms of colorectal cancer screening strategies: modeling study for the US Preventive Services Task Force. JAMA. 2016;315:2595. doi: 10.1001/jama.2016.6828.
- FDA summary of safety and effectiveness data. https://www.accessdata.fda.gov/cdrh_docs/pdf13/P130017B.pdf. Accessed July 15, 2021.
- Imperiale TF, Ransohoff DF, Itzkowitz SH, et al. Multitarget stool DNA testing for colorectal-cancer screening. N Engl J Med. 2014;370:1287-1297. doi: 10.1056/NEJMoa1311194.
- FIT colorectal cancer screening. Quest Diagnostics website. https://questdirect.questdiagnostics.com/products/fit-colorectal-cancer-screening/d41c67cb-a16d-4ad6-82b9-1a77d32daf41. Accessed July 15, 2021.
- Request Cologuard without leaving your home. Cologuard website. https://www.cologuard.com/how-to-get-cologuard. Accessed July 15, 2021.
- Cologuard. Colonoscopy Assist website. https://colonoscopyassist.com/Cologuard.html. Accessed July 15, 2021.
Colorectal cancer is a common disease that has a very lengthy natural history of progression from small (<8 mm) to large (≥8 mm) polyps, then to dysplasia, and eventually to invasive cancer. It is estimated that this progression takes 10 years.1 The long natural history from preneoplasia to cancer makes colorectal cancer an ideal target for screening. Screening for colorectal cancer is divided into two clinical pathways, screening for people at average risk and for those at high risk. Clinical factors that increase the risk of colorectal cancer are listed in TABLE 1. This editorial is focused on the clinical approach to screening for people at average risk for colorectal cancer.
Colorectal cancer is the second most common cause of cancer death
The top 6 causes of cancer death in the United States are2:
- lung cancer (23% of cancer deaths)
- colon and rectum (9%)
- pancreas (8%)
- female breast (7%)
- prostate (5%)
- liver/bile ducts (5%).
In 2020 it is estimated that 147,950 people were diagnosed with colorectal cancer, including 17,930 people less than 50 years of age.3 In 2020, it is also estimated that 53,200 people in the United States died of colorectal cancer, including 3,640 people younger than age 50.3 By contrast, the American Cancer Society estimates that, in 2021, cervical cancer will be diagnosed in 14,480 women and 4,290 women with the disease will die.4
According to a Centers for Disease Control and Prevention (CDC) study, among people 50 to 64 years of age, 63% report being up to date with colorectal cancer screening—leaving a full one-third not up to date with their screening.5 Among people aged 65 to 75, 79% report being up to date with colorectal cancer screening. Among those aged 50 to 64, those with health insurance were more likely to be up to date with screening than people without insurance—67% versus 33%, respectively. People with a household income greater than $75,000 and less than $35,000 reported up-to-date screening rates of 71% and 55%, respectively. Among people aged 50 to 64, non-Hispanic White and Black people reported similar rates of being up to date with colorectal screening (66% and 65%, respectively). Hispanic people, however, reported a significantly lower rate of being up to date with colorectal cancer screening (51%).5
A weakness of this CDC study is that the response rate from the surveyed population was less than 50%, raising questions about validity and generalizability of the reported results. Of note, other studies report that Black men may have lower rates of colorectal cancer screening than non-Black men.6 These data show that focused interventions to improve colorectal cancer screening are required for people 50 to 64 years of age, particularly among underinsured and some minority populations.
Continue to: Inequitable health outcomes for colorectal cancer...
Inequitable health outcomes for colorectal cancer
The purpose of screening for cancer is to reduce the morbidity and mortality associated with the disease. Based on the Surveillance, Epidemiology and End Results (SEER) national reporting system, from 2014 to 2018 colorectal death rates per 100,000 adults were 18 for Black adults; 15.1 for American Indian/Alaska native adults; 13.6 for White non-Hispanic adults; 10.9 for White, Hispanic adults; and 9.4 for Asian/Pacific Islander adults.7 Lack of access to and a lower utilization rate of high-quality colon cancer screening modalities, for example colonoscopy, and a lower rate of optimal colon cancer treatment may account for the higher colorectal death rate among Black adults.8,9
Colorectal cancer screening should begin at age 45
In 2015 the Agency for Health Research and Quality (AHRQ) published data showing that the benefit of initiating screening for colorectal cancer at 45 years of age outweighed the additional cost.10 In 2018, the American Cancer Society recommended that screening for colorectal cancer should begin at age 45.11 In 2021, after resisting the change for many years, the US Preventive Services Task Force (USPSTF) also recommended that screening for colorectal cancer should begin at 45.7 The new recommendation is based on statistical models that showed a significant increase in life-years gained at a small incremental cost. The USPSTF also recommended that clinicians and patients could consider discontinuing colorectal cancer screening at 75 years of age because the net benefit of continuing screening after age 75 is minimal.
Prior to 2021 the USPSTF recommended that screening be initiated at age 50. However, from 2010 to 2020 there was a significant increase in the percentage of new cases of colorectal cancer detected in people younger than 50. In 2010, colon and rectal cancer among people under 50 years of age accounted for 5% and 9% of all cases, respectively.12 In 2020, colon and rectal cancer in people younger than age 50 accounted for 11% and 15% of all cases, respectively.3
Options for colon cancer screening
There are many options for colorectal cancer screening (TABLE 2).10,13 Experts conclude that the best colorectal cancer screening test is the test that the patient will complete. Among options for screening, colonoscopy and the multitarget stool FIT-DNA test (Cologuard) have greater sensitivity for detecting colorectal precancer and cancer lesions compared with fecal immunochemical testing (FIT), computed tomography colonography imaging (CTC), and stool guaiac testing (see TABLE 1).
In my practice, I suggest patients use either colonoscopy (every 10 years) or the multitarget stool FIT-DNA test (every 1 to 3 years) for screening. Most of my patients select colonoscopy, but some prefer the multitarget stool FIT-DNA test because they fear the pre-colonoscopy bowel preparation and the risk of bowel perforation with colonoscopy. Most colonoscopy procedures are performed with sedation, requiring an adult to take responsibility for transporting the patient to their residence, adding complexity to the performance of colonoscopy. These two tests are discussed in more detail below.
Colonoscopy
Colonoscopy occupies a unique position among the options for colorectal cancer screening because it is both a screening test and the gold standard for diagnosis, based on histologic analysis of the polypoid tissue biopsied at the time of colonoscopy. For all other screening tests, if the test yields an abnormal result, it is necessary to perform a colonoscopy. Colonoscopy screening offers the advantage of “one and done for 10 years.” In my practice it is much easier to manage a test that is performed every 10 years than a test that should be performed annually.
Colonoscopy also accounts for most of the harms of colorectal screening because of serious procedure complications, including bowel perforation (1 in 2,000 cases) and major bleeding (1 in 500 cases).7
Continue to: Multitarget stool FIT-DNA test (Cologuard)...
Multitarget stool FIT-DNA test (Cologuard)
The multitarget stool FIT-DNA test is a remarkable innovation in cancer screening combining 3 independent biomarkers associated with precancerous lesions and colorectal cancer.14 The 3 test components include14:
- a fecal immunochemical test (FIT) for hemoglobin (which uses antibodies to detect hemoglobin)
- a test for epigenetic changes in the methylation pattern of promoter DNA, including the promoter regions on the N-Myc Downstream-Regulated Gene 4 (NDRG4) and Bone Morphogenetic Protein 3 (BMP3) genes
- a test for 7 gene mutations in the V-Ki-ras2 Kirsten rat sarcoma viral oncogene homolog (KRAS).
In addition, the amount of the beta-actin DNA present in the stool specimen is assessed and used as a quantitative control for the total amount of DNA in the specimen.
In one large clinical study, 9,989 people at average risk for colorectal cancer were screened with both a multitarget stool FIT-DNA test and a stool FIT test.15 Positive test results triggered a referral for colonoscopy. Among this cohort, 1% of participants were diagnosed with colorectal cancer and 7.6% with a precancerous lesion. The sensitivity of the multitarget stool FIT-DNA test and the FIT test for detecting colorectal cancer was 92.3% and 73.8%, respectively. The sensitivities of the multitarget stool FIT-DNA test and the FIT test for detecting precancerous lesions were 42.4% and 23.8%, respectively. The specificity of the FIT-DNA and FIT tests for detecting any cancer or precancerous lesion was 90% and 96.4%, respectively.15 The FIT test is less expensive than the multitarget stool FIT-DNA test. Eligible patients can order the FIT test through a Quest website.16 In June 2021 the published cost was $89 for the test plus a $6 physician fee. Most insurers will reimburse the expense of the test for eligible patients.
The multitarget stool FIT-DNA test should be performed every 1 to 3 years. Unlike colonoscopy or CT colonography, the stool is collected at home and sent to a testing laboratory, saving the patient time and travel costs. A disadvantage of the test is that it is more expensive than FIT or guaiac testing. Eligible patients can request a test kit by completing a telemedicine visit through the Cologuard website.17 One website lists the cost of a Cologuard test at $599.18 This test is eligible for reimbursement by most insurers.
Ensure patients are informed of needed screening
Most obstetrician-gynecologists have many women in their practice who are aged 45 to 64, a key target group for colorectal cancer screening. The American Cancer Society and the USPSTF strongly recommend that people in this age range be screened for colorectal cancer. Given that one-third of people these ages have not been screened, obstetrician-gynecologists can play an important role in reducing the health burden of the second most common cause of cancer death by ensuring that their patients are up to date with colorectal screening. ●
Colorectal cancer is a common disease that has a very lengthy natural history of progression from small (<8 mm) to large (≥8 mm) polyps, then to dysplasia, and eventually to invasive cancer. It is estimated that this progression takes 10 years.1 The long natural history from preneoplasia to cancer makes colorectal cancer an ideal target for screening. Screening for colorectal cancer is divided into two clinical pathways, screening for people at average risk and for those at high risk. Clinical factors that increase the risk of colorectal cancer are listed in TABLE 1. This editorial is focused on the clinical approach to screening for people at average risk for colorectal cancer.
Colorectal cancer is the second most common cause of cancer death
The top 6 causes of cancer death in the United States are2:
- lung cancer (23% of cancer deaths)
- colon and rectum (9%)
- pancreas (8%)
- female breast (7%)
- prostate (5%)
- liver/bile ducts (5%).
In 2020 it is estimated that 147,950 people were diagnosed with colorectal cancer, including 17,930 people less than 50 years of age.3 In 2020, it is also estimated that 53,200 people in the United States died of colorectal cancer, including 3,640 people younger than age 50.3 By contrast, the American Cancer Society estimates that, in 2021, cervical cancer will be diagnosed in 14,480 women and 4,290 women with the disease will die.4
According to a Centers for Disease Control and Prevention (CDC) study, among people 50 to 64 years of age, 63% report being up to date with colorectal cancer screening—leaving a full one-third not up to date with their screening.5 Among people aged 65 to 75, 79% report being up to date with colorectal cancer screening. Among those aged 50 to 64, those with health insurance were more likely to be up to date with screening than people without insurance—67% versus 33%, respectively. People with a household income greater than $75,000 and less than $35,000 reported up-to-date screening rates of 71% and 55%, respectively. Among people aged 50 to 64, non-Hispanic White and Black people reported similar rates of being up to date with colorectal screening (66% and 65%, respectively). Hispanic people, however, reported a significantly lower rate of being up to date with colorectal cancer screening (51%).5
A weakness of this CDC study is that the response rate from the surveyed population was less than 50%, raising questions about validity and generalizability of the reported results. Of note, other studies report that Black men may have lower rates of colorectal cancer screening than non-Black men.6 These data show that focused interventions to improve colorectal cancer screening are required for people 50 to 64 years of age, particularly among underinsured and some minority populations.
Continue to: Inequitable health outcomes for colorectal cancer...
Inequitable health outcomes for colorectal cancer
The purpose of screening for cancer is to reduce the morbidity and mortality associated with the disease. Based on the Surveillance, Epidemiology and End Results (SEER) national reporting system, from 2014 to 2018 colorectal death rates per 100,000 adults were 18 for Black adults; 15.1 for American Indian/Alaska native adults; 13.6 for White non-Hispanic adults; 10.9 for White, Hispanic adults; and 9.4 for Asian/Pacific Islander adults.7 Lack of access to and a lower utilization rate of high-quality colon cancer screening modalities, for example colonoscopy, and a lower rate of optimal colon cancer treatment may account for the higher colorectal death rate among Black adults.8,9
Colorectal cancer screening should begin at age 45
In 2015 the Agency for Health Research and Quality (AHRQ) published data showing that the benefit of initiating screening for colorectal cancer at 45 years of age outweighed the additional cost.10 In 2018, the American Cancer Society recommended that screening for colorectal cancer should begin at age 45.11 In 2021, after resisting the change for many years, the US Preventive Services Task Force (USPSTF) also recommended that screening for colorectal cancer should begin at 45.7 The new recommendation is based on statistical models that showed a significant increase in life-years gained at a small incremental cost. The USPSTF also recommended that clinicians and patients could consider discontinuing colorectal cancer screening at 75 years of age because the net benefit of continuing screening after age 75 is minimal.
Prior to 2021 the USPSTF recommended that screening be initiated at age 50. However, from 2010 to 2020 there was a significant increase in the percentage of new cases of colorectal cancer detected in people younger than 50. In 2010, colon and rectal cancer among people under 50 years of age accounted for 5% and 9% of all cases, respectively.12 In 2020, colon and rectal cancer in people younger than age 50 accounted for 11% and 15% of all cases, respectively.3
Options for colon cancer screening
There are many options for colorectal cancer screening (TABLE 2).10,13 Experts conclude that the best colorectal cancer screening test is the test that the patient will complete. Among options for screening, colonoscopy and the multitarget stool FIT-DNA test (Cologuard) have greater sensitivity for detecting colorectal precancer and cancer lesions compared with fecal immunochemical testing (FIT), computed tomography colonography imaging (CTC), and stool guaiac testing (see TABLE 1).
In my practice, I suggest patients use either colonoscopy (every 10 years) or the multitarget stool FIT-DNA test (every 1 to 3 years) for screening. Most of my patients select colonoscopy, but some prefer the multitarget stool FIT-DNA test because they fear the pre-colonoscopy bowel preparation and the risk of bowel perforation with colonoscopy. Most colonoscopy procedures are performed with sedation, requiring an adult to take responsibility for transporting the patient to their residence, which adds logistical complexity to colonoscopy screening. These two tests are discussed in more detail below.
Colonoscopy
Colonoscopy occupies a unique position among the options for colorectal cancer screening because it is both a screening test and the gold standard for diagnosis, based on histologic analysis of the polypoid tissue biopsied at the time of colonoscopy. For all other screening tests, if the test yields an abnormal result, it is necessary to perform a colonoscopy. Colonoscopy screening offers the advantage of “one and done for 10 years.” In my practice it is much easier to manage a test that is performed every 10 years than a test that should be performed annually.
Colonoscopy also accounts for most of the harms of colorectal screening because of serious procedure complications, including bowel perforation (1 in 2,000 cases) and major bleeding (1 in 500 cases).7
Continue to: Multitarget stool FIT-DNA test (Cologuard)...
Multitarget stool FIT-DNA test (Cologuard)
The multitarget stool FIT-DNA test is a remarkable innovation in cancer screening, combining 3 independent biomarkers associated with precancerous lesions and colorectal cancer.14 The 3 test components are14:
- a fecal immunochemical test (FIT) for hemoglobin (which uses antibodies to detect hemoglobin)
- a test for epigenetic changes in the methylation pattern of promoter DNA, including the promoter regions on the N-Myc Downstream-Regulated Gene 4 (NDRG4) and Bone Morphogenetic Protein 3 (BMP3) genes
- a test for 7 gene mutations in the V-Ki-ras2 Kirsten rat sarcoma viral oncogene homolog (KRAS).
In addition, the amount of the beta-actin DNA present in the stool specimen is assessed and used as a quantitative control for the total amount of DNA in the specimen.
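Conceptually, a multicomponent test of this kind reduces several biomarker signals to a single positive/negative call, gated by a specimen-adequacy control. The sketch below is a hypothetical illustration only: the actual Cologuard scoring algorithm, its weights, and its cutoff are proprietary, and every function name, weight, and threshold here is invented for illustration.

```python
def composite_call(fit_hgb: float, methylation: float, kras: float,
                   beta_actin: float, cutoff: float = 1.0) -> str:
    """Combine normalized biomarker signals into one screening call.

    Hypothetical weights and cutoff; illustrative only, not the
    vendor's algorithm.
    """
    # The beta-actin level serves as a quantity control: too little
    # total DNA means the specimen cannot be interpreted.
    if beta_actin < 0.5:
        return "invalid"
    # Weighted sum of the hemoglobin (FIT), methylation, and KRAS
    # mutation signals; a score at or above the cutoff is "positive."
    score = 0.5 * fit_hgb + 0.3 * methylation + 0.2 * kras
    return "positive" if score >= cutoff else "negative"

# Strong hemoglobin and methylation signals push the score past the cutoff.
print(composite_call(fit_hgb=1.8, methylation=1.2, kras=0.4, beta_actin=1.0))
# prints "positive"
```

The design point the sketch captures is that the beta-actin control is checked before any biomarker logic runs, so an inadequate specimen is reported as such rather than as a falsely reassuring negative.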
In one large clinical study, 9,989 people at average risk for colorectal cancer were screened with both a multitarget stool FIT-DNA test and a stool FIT test.15 Positive test results triggered a referral for colonoscopy. Among this cohort, 1% of participants were diagnosed with colorectal cancer and 7.6% with a precancerous lesion. The sensitivity of the multitarget stool FIT-DNA test and the FIT test for detecting colorectal cancer was 92.3% and 73.8%, respectively. The sensitivities of the multitarget stool FIT-DNA test and the FIT test for detecting precancerous lesions were 42.4% and 23.8%, respectively. The specificity (the proportion of participants with no cancer or precancerous lesion who correctly tested negative) was 90% for the multitarget stool FIT-DNA test and 96.4% for the FIT test.15 The FIT test is less expensive than the multitarget stool FIT-DNA test. Eligible patients can order the FIT test through a Quest website.16 In June 2021 the published cost was $89 for the test plus a $6 physician fee. Most insurers will reimburse the expense of the test for eligible patients.
The multitarget stool FIT-DNA test should be performed every 1 to 3 years. Unlike colonoscopy or CT colonography, the stool is collected at home and sent to a testing laboratory, saving the patient time and travel costs. A disadvantage of the test is that it is more expensive than FIT or guaiac testing. Eligible patients can request a test kit by completing a telemedicine visit through the Cologuard website.17 One website lists the cost of a Cologuard test at $599.18 This test is eligible for reimbursement by most insurers.
Ensure patients are informed of needed screening
Most obstetrician-gynecologists have many women in their practice who are aged 45 to 64, a key target group for colorectal cancer screening. The American Cancer Society and the USPSTF strongly recommend that people in this age range be screened for colorectal cancer. Given that one-third of people in this age range have not been screened, obstetrician-gynecologists can play an important role in reducing the health burden of the second most common cause of cancer death by ensuring that their patients are up to date with colorectal screening. ●
- Winawer SJ, Fletcher RH, Miller L, et al. Colorectal cancer screening, clinical guidelines and rationale. Gastroenterology. 1997;112:594. doi: 10.1053/gast.1997.v112.agast970594.
- Centers for Disease Control and Prevention website. An update on cancer deaths in the United States. Accessed July 14, 2021.
- Siegel RL, Miller KD, Goding SA, et al. Colorectal cancer statistics, 2020. CA Cancer J Clin. 2020;70:145-164. doi: 10.3322/caac.21601.
- American Cancer Society website. Key statistics for cervical cancer. https://www.cancer.org/cancer/cervical-cancer/about/key-statistics.html. Accessed July 14, 2021.
- Joseph DA, King JB, Dowling NF, et al. Vital signs: colorectal cancer screening test use, United States. Morb Mortal Wkly Rep. 2020;69:253-259.
- Rogers CR, Matthews P, Xu L, et al. Interventions for increasing colorectal cancer screening uptake among African-American men: a systematic review and meta-analysis. PLoS One. 2020;15:e0238354. doi: 10.1371/journal.pone.0238354.
- US Preventive Services Task Force. Screening for colorectal cancer: US Preventive Services Task Force recommendation statement. JAMA. 2021;325:1965-1977. doi: 10.1001/jama.2021.6238.
- Carethers JM, Doubeni CA. Causes of socioeconomic disparities in colorectal cancer and intervention framework and strategies. Gastroenterology. 2020;158:354-367. doi: 10.1053/j.gastro.2019.10.029.
- Rutter CM, Knudsen AB, Lin JS, et al. Black and White differences in colorectal cancer screening and screening outcomes: a narrative review. Cancer Epidemiol Biomarkers Prev. 2021;30:3-12. doi: 10.1158/1055-9965.EPI-19-1537.
- Zauber A, Knudsen A, Rutter CM, et al; Writing Committee of the Cancer Intervention and Surveillance Modeling Network (CISNET) Colorectal Cancer Working Group. Evaluating the benefits and harms of colorectal cancer screening strategies: a collaborative modeling approach. AHRQ Publication No. 14-05203-EF-2. Rockville, MD: Agency for Healthcare Research and Quality; October 2015. Accessed July 15, 2021.
- American Cancer Society website. Cancer screening guidelines by age. Accessed July 15, 2021.
- Bailey CE, Hu CY, You YN, et al. Increasing disparities in the age-related incidences of colon and rectal cancers in the United States, 1975-2010. JAMA Surg. 2015;150:17-22. doi: 10.1001/jamasurg.2014.1756.
- Knudsen AB, Zauber AG, Rutter CM, et al. Estimation of benefits, burden, and harms of colorectal cancer screening strategies: modeling study for the US Preventive Services Task Force. JAMA. 2016;315:2595. doi: 10.1001/jama.2016.6828.
- FDA summary of safety and effectiveness data. https://www.accessdata.fda.gov/cdrh_docs/pdf13/P130017B.pdf. Accessed July 15, 2021.
- Imperiale TF, Ransohoff DF, Itzkowitz SH, et al. Multitarget stool DNA testing for colorectal-cancer screening. N Engl J Med. 2014;370:1287-1297. doi: 10.1056/NEJMoa1311194.
- FIT colorectal cancer screening. Quest Diagnostics website. https://questdirect.questdiagnostics.com/products/fit-colorectal-cancer-screening/d41c67cb-a16d-4ad6-82b9-1a77d32daf41?utm_source=google&utm_medium=cpc&utm_campaign=71700000081635378&utm_content=58700006943838348&utm_term=p62498361603&gclsrc=aw.ds&gclid=EAIaIQobChMIgZLq9NOI8QIVufvjBx0slQWPEAAYAiAAEgKHqfD_BwE. Accessed July 15, 2021.
- Request Cologuard without leaving your home. Cologuard website. https://www.cologuard.com/how-to-get-cologuard?gclsrc=aw.ds&gclid=EAIaIQobChMIgZLq9NOI8QIVufvjBx0slQWPEAAYASAAEgKHIfD_BwE. Accessed July 15, 2021.
- Cologuard. Colonoscopy Assist website. https://colonoscopyassist.com/Cologuard.html. Accessed July 15, 2021.
Endocrinologists’ wealth remains steady, despite pandemic
Despite ongoing pandemic-related economic challenges, endocrinologists report stability in their overall wealth in the past year, with more than a third of the specialists having a net worth between $1 million and $5 million, according to the Medscape Endocrinologist Wealth & Debt Report 2021.
The findings regarding wealth and debt among endocrinologists, along with 28 other specialties, were reported as part of Medscape’s Physician Compensation Report 2021, which included nearly 18,000 physicians.
According to the report, endocrinologists had an upswing in their income, compared with the prior year, with average annual earnings of $245,000 versus $236,000 in 2020. The earnings tie them with infectious disease specialists at fourth from the bottom of the list of specialties.
In the latest report, 38% reported a net worth between $1 million and $5 million, down 1% from 39% in last year’s report.
Nine percent of endocrinologists had a net worth of over $5 million, matching last year’s rate.
That puts endocrinologists, along with rheumatologists, near the middle of the specialty rankings for net worth above $5 million. Dermatologists rank the highest, with 28% worth over $5 million. Allergy and immunology specialists are at the bottom of the list, with just 2%.
Joel Greenwald, MD, a wealth management advisor to physicians based in St. Louis Park, Minn., said the reasons for the stability in wealth are multifactorial.
“The rise in home prices is certainly a factor,” he said. “Definitely the rise in the stock market played a large role; the S&P 500 finished the year up over 18%.
“I’ve seen clients accumulate cash, which has added to their net worth,” Dr. Greenwald added. “They cut back on spending because they were worried about big declines in income and also because there was simply less to spend money on [during lockdowns].”
The percentage of endocrinologists reporting a net worth below $500,000 decreased from 37% in 2020 to 31% for the current report, placing them fifth from the top of the list of specialists with a net worth below $500,000. Family medicine was at the top of the list, at 40%.
Gender disparities in net worth are striking
The gender disparities in net worth among endocrinologists are substantial. Although only 15% of male endocrinologists have a net worth of less than $500,000, that rate is nearly three times higher – 44% – for female endocrinologists.
Twenty-seven percent of male endocrinologists have a net worth between $1 million and $2 million, compared with just 13% among women. Although 14% of men have a net worth of more than $5 million, only 4% of female endocrinologists fall in that category.
Of note, 61% of those who responded to the poll were men; 36% were women.
Expenses, savings
Only 6% of endocrinologists reported being unable to pay their mortgage as a result of the pandemic; 8% said they were unable to pay other bills because of COVID-19.
The vast majority, however – 91% – said the pandemic did not affect their ability to pay bills or their mortgage. U.S. Census Bureau data from last July show that about a quarter of adults (25.3%) missed a mortgage or rent payment because of challenges related to COVID-19.
Nearly three-quarters of endocrinologists (72%) reported having not made any changes to reduce major expenses in 2020, despite the pandemic. About 25% took significant measures to reduce expenses, including refinancing their house or moving to a different home.
Seventeen percent say they are still paying off their school loans, similar to the rate last year.
The report notes that, according to the Association of American Medical Colleges, the average medical school debt for students who graduated in 2019 was $201,490, compared with an average student loan debt for all graduating students in the same year of $28,950.
Sixty-five percent of endocrinologists said they added the same amount to their 401(k) plan in the past year, while 28% put less into their fund. Similarly, 53% put the same amount into their taxable savings accounts, and 23% reported not using taxable savings accounts at all.
Although earnings were steady in the past year, 12% of endocrinologists report having losses from practice problems, compared with 5% the previous year. COVID-19 was the most common cause. The proportion reporting no financial losses declined to 65%, versus 75% in the last report.
A version of this article first appeared on Medscape.com.
Brain memory signals appear to regulate metabolism
Rhythmic brain signals that help encode memories also appear to influence blood sugar levels and may regulate the timing of hormone release, early preclinical research shows.
“Our study is the first to show how clusters of brain cell firing in the hippocampus may directly regulate metabolism,” senior author György Buzsáki, MD, PhD, professor, department of neuroscience and physiology, NYU Grossman School of Medicine and NYU Langone Health, said in a news release.
“Evidence suggests that the brain evolved, for reasons of efficiency, to use the same signals to achieve two very different functions in terms of memory and hormonal regulation,” added corresponding author David Tingley, PhD, a post-doctoral scholar in Dr. Buzsáki’s lab.
The study was published online August 11 in Nature.
It’s recently been discovered that populations of hippocampal neurons fire within milliseconds of each other in cycles. This firing pattern is called a “sharp wave ripple” for the shape it takes when captured graphically by electroencephalogram.
In their study, Dr. Buzsáki, Dr. Tingley, and colleagues observed that clusters of sharp wave ripples recorded from the hippocampus of rats were reliably and rapidly followed by decreases in blood sugar concentrations in the animals.
“This correlation was not dependent on circadian, ultradian, or meal-triggered fluctuations; it could be mimicked with optogenetically induced ripples in the hippocampus, but not in the parietal cortex, and was attenuated to chance levels by pharmacogenetically suppressing activity of the lateral septum (LS), the major conduit between the hippocampus and hypothalamus,” the researchers report.
These observations suggest that hippocampal sharp wave ripples may regulate the timing of the release of hormones, possibly including insulin, by the pancreas and liver, as well as other hormones by the pituitary gland, the researchers note.
As sharp wave ripples mostly occur during non-rapid eye movement sleep, the impact of sleep disturbance on sharp wave ripples may provide a mechanistic link between poor sleep and high blood sugar levels seen in type 2 diabetes, they suggest.
“There are a couple of experimental studies showing that if you deprive a young healthy person from sleep [for 48 hours], their glucose tolerance resembles” that of a person with diabetes, Dr. Buzsáki noted in an interview.
Moving forward, the researchers will seek to extend their theory that several hormones could be affected by nightly sharp wave ripples.
The research was funded by National Institutes of Health. The authors have disclosed no relevant financial relationships.
A version of this article first appeared on Medscape.com.
Rhythmic brain signals that help encode memories also appear to influence blood sugar levels and may regulate the timing of the release of hormones, early, pre-clinical research shows.
“Our study is the first to show how clusters of brain cell firing in the hippocampus may directly regulate metabolism,” senior author György Buzsáki, MD, PhD, professor, department of neuroscience and physiology, NYU Grossman School of Medicine and NYU Langone Health, said in a news release.
“Evidence suggests that the brain evolved, for reasons of efficiency, to use the same signals to achieve two very different functions in terms of memory and hormonal regulation,” added corresponding author David Tingley, PhD, a post-doctoral scholar in Dr. Buzsáki’s lab.
The study was published online August 11 in Nature.
It’s recently been discovered that populations of hippocampal neurons fire within milliseconds of each other in cycles. This firing pattern is called a “sharp wave ripple” for the shape it takes when captured graphically by electroencephalogram.
In their study, Dr. Buzsáki, Dr. Tingley, and colleagues observed that clusters of sharp wave ripples recorded from the hippocampus of rats were “reliably” and rapidly, followed by decreases in blood sugar concentrations in the animals.
“This correlation was not dependent on circadian, ultradian, or meal-triggered fluctuations; it could be mimicked with optogenetically induced ripples in the hippocampus, but not in the parietal cortex, and was attenuated to chance levels by pharmacogenetically suppressing activity of the lateral septum (LS), the major conduit between the hippocampus and hypothalamus,” the researchers report.
These observations suggest that hippocampal sharp wave ripples may regulate the timing of the release of hormones, possibly including insulin, by the pancreas and liver, as well as other hormones by the pituitary gland, the researchers note.
As sharp wave ripples mostly occur during non-rapid eye movement sleep, the impact of sleep disturbance on sharp wave ripples may provide a mechanistic link between poor sleep and high blood sugar levels seen in type 2 diabetes, they suggest.
“There are a couple of experimental studies showing that if you deprive a young healthy person from sleep [for 48 hours], their glucose tolerance resembles” that of a person with diabetes, Dr. Buzsáki noted in an interview.
Moving forward, the researchers will seek to extend their theory that several hormones could be affected by nightly sharp wave ripples.
The research was funded by the National Institutes of Health. The authors have disclosed no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FDA approves first drug for idiopathic hypersomnia
The Food and Drug Administration has approved Xywav for the treatment of idiopathic hypersomnia in adults, the drug’s manufacturer announced in a news release.
It marks the second approval for Xywav. The FDA approved it last year for the treatment of cataplexy or excessive daytime sleepiness in patients with narcolepsy as young as 7 years of age.
It is also the first approval of any treatment for idiopathic hypersomnia.
“Idiopathic hypersomnia can have a significant impact on the social, educational, and occupational functioning of people living with the condition,” Diane Powell, board chair and CEO of the Hypersomnia Foundation, noted in the release.
This FDA approval “is a major milestone for the entire idiopathic hypersomnia community as Xywav becomes the first medicine approved to manage this chronic sleep disorder,” said Ms. Powell.
Low-sodium oxybate product
Xywav is a novel oxybate product with a unique composition of cations. It contains 92% less sodium than sodium oxybate (Xyrem) at the recommended adult dosage range of 6 to 9 g, the company noted in a news release.
An estimated 37,000 people in the United States have been diagnosed with idiopathic hypersomnia, a neurologic sleep disorder characterized by chronic excessive daytime sleepiness.
Other symptoms of the disorder may include severe sleep inertia or sleep drunkenness (prolonged difficulty waking with frequent re-entries into sleep, confusion, and irritability), as well as prolonged, nonrestorative night-time sleep, cognitive impairment, and long and unrefreshing naps.
The approval was based on findings from a phase 3, double-blind, multicenter, placebo-controlled, randomized withdrawal study.
Results showed “statistically significant and clinically meaningful” differences compared with placebo in change in the primary endpoint of Epworth Sleepiness Scale score (P < .0001) and the secondary endpoints of Patient Global Impression of Change (P < .0001) and the Idiopathic Hypersomnia Severity Scale (P < .0001), the company reported.
The most common adverse reactions were nausea, headache, dizziness, anxiety, insomnia, decreased appetite, hyperhidrosis, vomiting, diarrhea, dry mouth, parasomnia, somnolence, fatigue, and tremor.
The novel agent can be administered once or twice nightly for the treatment of idiopathic hypersomnia in adults.
“To optimize response, a patient’s health care provider may consider prescribing a twice-nightly regimen in equally or unequally divided doses at bedtime and 2.5 to 4 hours later and gradually titrate Xywav so that a patient may receive an individualized dose and regimen based on efficacy and tolerability,” the company said.
Xywav carries a boxed warning because it is a central nervous system depressant and because there is potential for abuse and misuse. The drug is only available through a risk evaluation and mitigation strategy (REMS) program.
The company plans to make Xywav available to patients with idiopathic hypersomnia later this year following implementation of the REMS program.
A version of this article first appeared on Medscape.com.
MR elastography could predict cirrhosis in NAFLD
Liver stiffness measurement with magnetic resonance elastography (MRE) may prove predictive of future cirrhosis risk in patients with nonalcoholic fatty liver disease (NAFLD), according to researchers from the Mayo Clinic in Rochester, Minn.
“These data expand the role of MRE from an accurate diagnostic method to a prognostic noninvasive imaging biomarker that can risk-stratify patients with NAFLD and guide the timing of surveillance and further refine their clinical management,” wrote Tolga Gidener, MD, and colleagues. The study authors added that the research further expands “the role of MRE beyond liver fibrosis estimation by adding a predictive feature to improve individualized disease monitoring and patient counseling.” Their study was published in Clinical Gastroenterology and Hepatology.
Currently, there are no established noninvasive strategies that can effectively identify patients with NAFLD who are at high risk of progression to cirrhosis and liver-related complications. While fibrosis stage on histology may predict liver-associated outcomes in these patients, this approach is invasive, time consuming, and generally not well tolerated by patients.
Although the technique has been noted for its high success rate and excellent reliability and reproducibility, a possible limitation of MRE is its cost. That said, standalone MRE is reimbursed under Medicare Category I Current Procedural Terminology code 76391 at a cost of $240.02. Until now, however, there has been a lack of data on whether baseline liver stiffness measurement by MRE can predict progression of NAFLD to cirrhosis.
To gauge the role of baseline liver stiffness measurement by MRE, Dr. Gidener and colleagues performed a retrospective cohort study that evaluated hard liver–related outcomes in 829 adult patients with NAFLD with or without cirrhosis (median age, 58 years; 54% female) who underwent MRE during 2007-2019.
Patients in the study were followed from the first MRE until death, last clinical encounter, or the end of the study. Clinical outcomes assessed in individual chart review included cirrhosis, hepatic decompensation, and death.
At baseline, the median liver stiffness measurement was 2.8 kPa in 639 patients with NAFLD but without cirrhosis. Over a median 4-year follow-up period, a total of 20 patients developed cirrhosis, with an overall annual incidence rate of 1%.
Baseline liver stiffness measurement by MRE was significantly predictive of subsequent cirrhosis (hazard ratio, 2.93; 95% confidence interval, 1.86-4.62; P < .0001) per 1-kPa difference in liver stiffness measurement at baseline.
According to the researchers, the probability of future cirrhosis development can be ascertained using current liver stiffness measurement. As such, a greater than 1% probability threshold can be reached in 5 years in patients with a measurement of 2 kPa, 3 years in patients with a measurement of 3 kPa, and 1 year in patients with a measurement of 4-5 kPa. “These time frames inform about estimated time to progression to hard outcomes and provide guidance for subsequent noninvasive monitoring for disease progression,” wrote the researchers.
The baseline liver stiffness measurement by MRE was also significantly predictive of future hepatic decompensation or death (HR, 1.32; 95% CI, 1.13-1.56; P = .0007) per 1-kPa increment in the liver stiffness measurement. Likewise, the 1-year probability of subsequent hepatic decompensation or death in patients with cirrhosis and baseline liver stiffness measurement of 5 kPa versus 8 kPa was 9% versus 20%, respectively. In terms of covariates, age was the only factor that increased the risk of hepatic decompensation or death.
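Per-kPa hazard ratios like these compound multiplicatively under a standard proportional-hazards model, which makes the reported figures easy to sanity-check. A minimal sketch of that arithmetic (the helper function is illustrative and not from the study):

```python
# Illustrative arithmetic only: assumes the reported per-kPa hazard ratios
# compound multiplicatively, as in a standard proportional-hazards model.

def relative_hazard(hr_per_kpa: float, delta_kpa: float) -> float:
    """Hazard relative to a reference patient whose liver stiffness is
    delta_kpa lower, given a hazard ratio per 1-kPa increment."""
    return hr_per_kpa ** delta_kpa

# Cirrhosis endpoint: HR 2.93 per 1 kPa. A patient at 4 kPa carries roughly
# a 2.93^2 (about 8.6-fold) hazard relative to a patient at 2 kPa.
cirrhosis_fold = relative_hazard(2.93, 4 - 2)

# Decompensation/death endpoint: HR 1.32 per 1 kPa. A baseline of 8 kPa vs.
# 5 kPa gives roughly 1.32^3 (about 2.3-fold) hazard, which is broadly
# consistent with the reported 1-year probabilities while they remain small:
# 9% x 2.3 is about 21%, near the reported 20%.
decompensation_fold = relative_hazard(1.32, 8 - 5)
```

The multiplicative reading is why a seemingly modest per-kPa ratio translates into large risk differences across the clinically relevant stiffness range.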
While the current study offers a glimpse into the potential clinical implications of liver stiffness measurement by MRE in NAFLD, the researchers suggest the applicability of the findings is limited by the study’s small sample size, relatively short follow-up duration, and the small number of cirrhosis events.
The researchers received study funding from the National Institute of Diabetes and Digestive and Kidney Diseases, American College of Gastroenterology, National Institutes of Health, and the Department of Defense. The researchers disclosed no other relevant conflicts of interest.
NAFLD is rapidly becoming one of the most common causes of liver disease. While most patients have a benign course, approximately 20% of patients develop nonalcoholic steatohepatitis, the progressive form of the disease. Given the high prevalence (30% of the U.S. population), it is vital to determine which patients are at risk for progression, cirrhosis, and decompensation. Although liver biopsy is the preferred method, this procedure is invasive and carries substantial risks, including severe bleeding. Noninvasive tests that measure liver stiffness have emerged; examples are vibration-controlled transient elastography (VCTE), such as FibroScan, and magnetic resonance elastography (MRE). Data support the use of liver stiffness as a surrogate measure of fibrosis; MRE has demonstrated higher fidelity and accuracy, compared with VCTE, while being limited by cost and availability. However, there is a paucity of data regarding the use of liver stiffness to predict progression to cirrhosis or liver-related events.
This study by Dr. Gidener and colleagues highlights the use of MRE-based liver stiffness measurements as a predictor of cirrhosis and decompensation. Baseline measurements of more than 4-5 kPa should alert clinicians to an increased risk of progression to cirrhosis. Patients with cirrhosis and baseline measurements of 8 kPa or higher have a high risk of decompensation/death, suggesting that they should be followed more closely. Given the burgeoning number of patients with NAFLD and NASH, this study demonstrates the importance of identifying high-risk patients in order to optimize use of resources and improve clinical outcomes.
Yamini Natarajan, MD, is an investigator at the Center for Innovations in Quality, Effectiveness and Safety at the Michael E. DeBakey VA Medical Center, Houston, and an assistant professor, department of medicine, section of gastroenterology and hepatology, Baylor College of Medicine, Houston. She has no conflicts.
FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY