Omega-3 supplements may impact breast cancer risk
The study was presented by Katherine Cook, PhD, during a poster session at the San Antonio Breast Cancer Symposium. Dr. Cook is a researcher at Wake Forest University, Winston-Salem, N.C.
Obesity increases the risk of breast cancer, and it also alters the composition of the gut microbiome. Obesity is associated with a higher ratio of bacteria from the Firmicutes phylum to those from the Bacteroidetes phylum, while abnormally low ratios are associated with inflammatory bowel disease.
In mice, the researchers previously showed that diet can lead to changes in the microbiome of both the gut and the breast. They conducted fecal transplants between mice that were fed normal or high-fat diets (HFD), and then used a chemical carcinogenesis model to investigate the impact on tumor outcomes. They observed changes in the microbiota populations in both the gut and the mammary glands when mice fed a normal diet received fecal transplants from HFD mice. On the other hand, when HFD mice received fecal transplants from mice with normal diets, the transplants countered the increase in serum lipopolysaccharide levels associated with HFD. In vitro models showed that microbiota from HFD mice also altered the epithelial permeability of breast tissue, and infection of breast cancer cells with HFD microbiota led to greater proliferation.
The researchers also examined breast cancer tissue from women who received omega-3 polyunsaturated fatty acid (PUFA) supplements or placebo before undergoing primary tumor resection, and found that there were differences in the proportional abundance of specific microbes between tumor and adjacent normal tissue, with the former having an excess of Lachnospiraceae and Ruminococcus. The finding suggests that these bacteria may grow better in a tumor microenvironment, and could play a role in breast cancer cell signaling. The supplements altered the microbiota of both normal and breast cancer tissue.
In the study presented at SABCS, the researchers analyzed fecal samples from 34 obese and overweight postmenopausal women involved in a weight-loss trial, who received 3.25 g/day of omega-3 PUFA supplements or placebo combined with calorie restriction and exercise. They performed metagenomic sequencing from the fecal samples at baseline and 6 months to determine microbiome populations.
Women who experienced weight loss, with or without omega-3 PUFA supplementation, had a decline in the abundance of the Firmicutes phylum – a group linked to inflammation risk – as a percentage of overall bacterial phyla. The researchers found a similar trend among women who received omega-3 PUFA, regardless of how much weight they lost. At the species level, those who received supplements had higher proportional abundance of Phocaeicola massiliensis and reduced proportions of Faecalibacterium prausnitzii, R. lactaris, Blautia obeum, and Dorea formicigenerans (P < .05).
Weight loss combined with supplementation also seemed to affect gut microbiota, with subjects who lost more than 10% of their body weight and received omega-3 PUFA supplements having elevated Bacteroidetes and reduced Firmicutes, compared with all other groups (P < .05).
At 6 months, the researchers grouped women by mean body fat composition, and found both positive and negative correlations among different bacterial species. Finally, the researchers looked at serum levels of the inflammatory cytokines interleukin-6, monocyte chemoattractant protein-1 (MCP-1), and tumor necrosis factor–alpha at 6 months. Women with elevated levels of at least two cytokines had higher levels of two species of mucin-degrading bacteria. Levels of MCP-1 alone also correlated with greater proportions of mucin-degrading bacteria (P < .05).
The authors concluded that increasing omega-3 PUFA intake to about 2% of total daily calorie intake could push the gut microbiome in a direction that improves intestinal permeability parameters and reduces chronic inflammation. These changes could lead to a reduction in the risk for postmenopausal breast cancer.
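As a rough illustrative calculation (not from the study): 2% of a 2,000-kcal daily diet is 40 kcal, and at approximately 9 kcal per gram of fat that works out to roughly 4.4 g/day of omega-3 PUFA, on the same order as the 3.25 g/day supplement dose used in the trial.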
The study was funded by the Breast Cancer Research Foundation.
FROM SABCS 2021
1 in 7 breast cancers are overdiagnosed
A new model based on data from the Breast Cancer Surveillance Consortium (BCSC) suggests that overdiagnosis of screen-detected breast cancer is less frequent than excess-incidence studies have estimated; however, because the model also takes indolent tumors into account, it produced a higher estimate than previous models that did not consider this factor.
“There is a pronounced lack of consensus of the true rate of overdiagnosis in the contemporary U.S. mammography practice. This uncertainty about the extent of overdiagnosis is a problem for the development of guidelines and policies. By overcoming shortcomings of previous studies, we produced a defensible estimate of overdiagnosis in contemporary U.S. mammography practice. About one in seven screen-detected cancers in women (between 50 and 74 years) undergoing biennial screening will be overdiagnosed, and about one in three overdiagnosed cancers are attributed to the detection of nonprogressive cancers,” said Marc D. Ryser, PhD, in an interview. Dr. Ryser is an expert in mathematical and statistical modeling in population health science at Duke University, Durham, N.C. He presented the results of the model at the 2021 San Antonio Breast Cancer Symposium.
Previous models have come up with estimates ranging from 0% to 54%, but the heterogeneity makes them difficult to compare. “They differ in study populations, estimation methods and their definitions of overdiagnosis,” Dr. Ryser said.
There are two general ways to estimate overdiagnosis. One is a model-based approach that works out the tumor latency using models of disease natural history and clinical data, and then uses that to predict overdiagnosis. But these models may not account for indolent tumors, or tumors that would not likely cause death during the patient’s lifetime, and the assumptions behind the models can be opaque. On the other hand, the excess-incidence strategy compares incidence in screened versus unscreened populations and assumes that excess cancers in the screened group are caused by overdiagnosis, but this can be affected by bias.
To get around these limitations, Dr. Ryser’s group used a model-based approach, but also allowed for indolent tumors. They ensured transparency of the underlying assumptions of the model, and took advantage of a contemporary, high-quality data source in the BCSC.
They used individual mammography screening and breast cancer diagnosis records from 35,986 women aged 50-74 years who were first screened between 2000 and 2018. To estimate overdiagnosis caused by indolent tumors, they used the risk of non–breast cancer mortality from age cohort–adjusted annual mortality risks. There were a total of 82,677 screens and 718 cases of breast cancer diagnosed, and 3.6% of detected tumors were indolent (95% credible interval, 0.2%-13.8%). The predicted overdiagnosis rate for a biennial screening program was 15.3% (95% prediction interval, 9.7%-25.2%); of this, 6.0% was projected to be attributable to indolent tumors that do not progress at all (95% PI, 0.2%-19.0%) and 9.3% to tumors that would progress, but not fast enough to cause death during the individual’s lifetime. An annual screening program had a predicted overdiagnosis rate of 14.6% (95% PI, 9.4%-23.9%).
Dr. Ryser identified some specific studies that used the same definition of overdiagnosis as his group used, and compared them with the 15.3% estimate that his group determined. Excess-incidence studies produced higher estimates, while modeling studies produced lower estimates.
The model did not distinguish between ductal carcinoma in situ and invasive cancers, and it did not account for patient race or breast density.
The study was funded by the National Institutes of Health. Dr. Ryser has no relevant financial disclosures.
FROM SABCS 2021
Vitamin D counters bone density loss with aromatase inhibitors
Among women with breast cancer being treated with aromatase inhibitors (AI), supplementation with vitamin D and calcium protected against bone loss after 5 years, according to results from a prospective cohort study in Brazil. The study found no difference in bone mineral density outcomes at 5 years between women with hormone receptor–positive cancers treated with aromatase inhibitors (the AI group, AIG) and women with triple-negative or HER2-positive cancers who were treated with another therapy (the comparison group, CG).
About two-thirds of women with breast cancer have tumors that are positive for hormone receptors, and so are often treated with endocrine therapy such as selective estrogen receptor modulators or AI. However, there are concerns that AI treatment may reduce bone mineral density and adversely affect quality of life. This loss is influenced by a range of factors, including body weight, physical activity, smoking, alcohol consumption, corticosteroid use, calcium in the diet, and circulating levels of vitamin D.
Vitamin D helps to regulate absorption of calcium and phosphorus, ensuring that their plasma concentrations are high enough for adequate bone health. But vitamin D deficiency is a common problem, even in tropical areas such as Brazil. “It is high in the general population and especially in postmenopausal breast cancer patients. Thus, vitamin D and calcium supplementation has an impact on these women’s lives,” said lead author Marcelo Antonini, MD, who presented the study (abstract P1-13-04) at the San Antonio Breast Cancer Symposium. He is a researcher at Hospital Servidor Publico Estadual in São Paulo, Brazil.
Although the findings are encouraging, more work needs to be done before they lead to a change in practice. “Larger studies must be carried out to prove this theory; however, in noncancer patients we have already well established the benefits of vitamin D and calcium supplementation,” Dr. Antonini said in an interview.
The researchers examined women before the start of treatment, at 6 months, and at 5 years. Those with vitamin D levels below 30 ng/mL received 7,000 IU/day for 8 weeks, followed by a 1,000 IU/day maintenance dose. Subjects with osteopenia received a calcium supplement (500 mg calcium carbonate), and those with osteoporosis received 4 mg zoledronic acid (Zometa, Novartis).
There were 140 patients in both the AIG and CG groups. The average age was 65 years. Sixty-four percent of the AIG group and 71% of the CG group were vitamin D deficient at baseline. At 5 years, the frequencies were 17% and 16%, respectively. Both groups showed significant declines in bone mineral density in the femoral neck and femur at both 6 months and 5 years, but there was no significant difference between them. There was no significant difference between the two groups with respect to bone density loss in the spine.
The study is limited by the fact that it was conducted at a single center and had a small population size.
Another prospective observational study, published earlier this year, looked at vitamin D supplementation in 741 patients (mean age, 61.9 years) being treated with aromatase inhibitors whose baseline vitamin D levels were less than 30 ng/mL. They received a 16,000-IU dose of oral calcifediol every 2 weeks. At 3 months, individuals who achieved vitamin D levels of 40 ng/mL or higher were less likely to have joint pain (P < .05). At 12 months, data from 473 patients showed that for every 10-ng/mL increase in serum vitamin D at 3 months, there was a reduction in loss of bone mineral density in the lumbar spine (adjusted beta = +0.177%; P < .05), though there were no associations between vitamin D levels and BMD of the femur or total hip.
“Our results suggest that optimal levels of vitamin D are associated with a reduced risk of joint pain related to AI treatment. A target threshold [of vitamin D] levels was set at 40 ng/mL to significantly reduce the increase in joint pain,” the study authors wrote. They noted that this threshold is well above the goal of 20 ng/mL recommended by the 2010 Institute of Medicine report.
The study did not receive external funding. Dr. Antonini has no relevant financial disclosures.
FROM SABCS 2021
FIB-4 could ID liver risk in primary care
Fibrosis-4 index (FIB-4) scores are strongly associated with severe liver disease outcomes in a primary care population, both in patients with known chronic liver disease and those without known CLD. The result could help identify patients with CLD before their condition becomes severe.
FIB-4 has previously shown utility in predicting the risk of advanced fibrosis in patients with viral hepatitis B and C, nonalcoholic fatty liver disease (NAFLD), nonalcoholic steatohepatitis (NASH), and alcohol-related liver disease.
“This is really important in primary care because FIB-4 is easy to calculate. Its inputs are accessible, and it is inexpensive, often taking advantage of labs that we’ve ordered anyway. And if we can use it to find advanced fibrosis, it will be critically important because we know that advanced fibrosis is associated with severe liver outcomes – these are going to be patients that we need to make sure are in touch with our hepatology colleagues,” said Andrew Schreiner, MD, during a presentation of the results at the annual meeting of the American Association for the Study of Liver Diseases. Dr. Schreiner is a general internist at the Medical University of South Carolina, Charleston.
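For readers unfamiliar with the score, FIB-4 is computed from age, AST, ALT, and platelet count. The sketch below is a minimal illustration, not code from the study; the example values are hypothetical. It applies the standard FIB-4 formula, age × AST / (platelet count × √ALT), and the risk bands described later in this article.

```python
import math

def fib4(age_years: float, ast_u_l: float, alt_u_l: float, platelets_10e9_l: float) -> float:
    """Standard FIB-4 index: (age x AST) / (platelet count x sqrt(ALT))."""
    return (age_years * ast_u_l) / (platelets_10e9_l * math.sqrt(alt_u_l))

def fib4_risk_band(score: float) -> str:
    """Risk bands used in the study: <=1.3 low, 1.3-2.67 indeterminate, >2.67 high."""
    if score <= 1.3:
        return "low"
    if score <= 2.67:
        return "indeterminate"
    return "high"

# Hypothetical example: a 62-year-old with AST 40 U/L, ALT 35 U/L, platelets 210 x 10^9/L
score = fib4(62, 40, 35, 210)
print(f"FIB-4 = {score:.2f} ({fib4_risk_band(score)} risk)")  # FIB-4 = 2.00 (indeterminate risk)
```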
He also noted that FIB-4 is playing an important role in the assessment of NAFLD and NASH. Many newer algorithms to manage NAFLD in the primary care setting rely on FIB-4, but that application is limited because NAFLD is underdiagnosed: administrative database studies found diagnosis rates of about 2%-5%, despite estimates putting its prevalence at 25%-30% in the U.S. population.
To determine whether FIB-4 scores could assist in identifying primary care patients at risk of severe outcomes, including cirrhosis, hepatocellular carcinoma, and liver transplant, the researchers conducted a retrospective analysis of primary care electronic health record data from 20,556 patients seen at their institution between 2007 and 2018. Participants had ALT and AST values less than 500 IU/L, as well as a platelet count within 2 months preceding or on the day of the liver enzyme tests. Individuals with known chronic or severe liver disease were excluded.
Sixty-five percent of patients were female, 45% were Black, and the mean BMI was 29.8 kg/m2. Overall, 64% of participants were classified as low risk (FIB-4 ≤1.3), 29% as indeterminate risk (1.3-2.67), and 7% as high risk (>2.67).
The population had more liver risk than expected. “[It is] a distribution that certainly may have more high risk and indeterminant risks than we would have anticipated, but we have seen this in external studies,” said Dr. Schreiner.
Over a mean follow-up period of 8.2 years, 11% were diagnosed with CLD: 2.3% developed NAFLD, 8.2% another CLD, and 0.5% had NAFLD and another CLD. About 4% developed a severe CLD. A severe liver outcome occurred in 2.2% of those who had been classified as FIB-4 low risk, 4.2% classified as indeterminate risk, and 20.8% of those classified as high risk.
“Troublingly,” said Dr. Schreiner, 49% of those who went on to develop a severe liver outcome had no CLD diagnosis before it occurred. “This is a tremendous opportunity to improve diagnosis in this setting.”
After adjustment for race, gender, marital status, smoking history, BMI, and various comorbidities, the researchers found a higher risk of severe liver disease associated with indeterminate FIB-4 risk score (hazard ratio, 1.62; 95% confidence interval, 1.36-1.92) and a high FIB-4 risk score (HR, 6.64; 95% CI, 5.58-7.90), compared with those with a low FIB-4 risk score. The same was true for individual liver diseases, including NAFLD (indeterminate HR, 1.88; 95% CI, 0.99-3.60; high HR, 7.32; 95% CI, 3.44-15.58), other liver diagnosis (indeterminate HR, 2.65; 95% CI, 1.93-3.63; high HR, 11.39; 95% CI, 8.53-15.20), and NAFLD plus another liver disease (intermediate HR, 2.53; 95% CI, 0.79-8.12; high HR, 6.89; 95% CI, 1.82-26.14).
Dr. Schreiner conceded that the study may not be generalizable, since FIB-4 was not designed for use in general populations, and it was conducted at a single center.
During the question-and-answer session after the talk, Dr. Schreiner was asked if the majority of the 49% who had a severe liver outcome without previous liver disease had NAFLD. He said that was the team’s hypothesis, and they are in the process of examining that data, but a significant number appear to be alcohol related. “For us in the primary care setting, it’s just another opportunity to emphasize that we have to do a better job getting exposure histories, and alcohol histories in particular, and finding ways to document those in ways that we can make diagnoses for patients and for our hepatology colleagues,” said Dr. Schreiner.
Comoderator Kathleen Corey, MD, asked Dr. Schreiner if he had any concerns about false positives from FIB-4 screening, and whether that could lead to overtreatment. “We’ve seen other screening tests leading to patient distress and overutilization of resources. How do you think we might be able to mitigate that?” asked Dr. Corey, who is an assistant professor of medicine at Harvard Medical School and director of the Fatty Liver Clinic at Massachusetts General Hospital, both in Boston.
Dr. Schreiner underscored the need for more physician education about FIB-4, both its potential and its pitfalls, since many primary care providers don’t use it or even know about it. “FIB-4 is very popular in the hepatology literature, but in primary care, we don’t talk about it as often. So I think educational efforts about its possible utility, about some of the drawbacks, or some of the things that might lead to inappropriately positive results – like advanced age, for those of us who see patients 60 and older. Those are really important considerations both for the patient and the provider for management of expectations and concerns. I’m worried too about application in our younger cohorts. The explosion of NAFLD in adolescence, and the likelihood that we might get a false negative in maybe a 28-year-old who might have problematic disease, is a concern as well,” said Dr. Schreiner.
Dr. Schreiner has no relevant financial disclosures. Dr. Corey has been on an advisory committee or review panel for Bristol-Myers Squibb, Novo Nordisk, and Gilead. She has consulted for Novo Nordisk and received research support from BMS, Boehringer Ingelheim, and Novartis.
FROM THE LIVER MEETING
Idiopathic pulmonary fibrosis – a mortality predictor found?
A low lymphocyte-to-monocyte ratio (LMR) is associated with worse survival in newly diagnosed patients with idiopathic pulmonary fibrosis (IPF), according to a new retrospective, single-center analysis. Patients with both IPF and lung cancer also had a lower LMR than patients with IPF alone.
The study, published online in Respiratory Medicine, was conducted among 77 newly diagnosed patients, 40 end-stage IPF patients, and 17 patients with IPF and lung cancer. All received at least 1 year of antifibrotic therapy (pirfenidone or nintedanib). The researchers collected demographic and clinical data between December 2014 and December 2020.
The disease course of IPF is difficult to predict, with some patients progressing slowly and others suffering a rapid decline to respiratory failure. Previous studies found that higher levels of monocytes are associated with higher mortality in IPF and other fibrotic lung diseases, and both the neutrophil-to-lymphocyte ratio (NLR) and the LMR have been shown to predict mortality in lung cancer.
A previous study found that IPF patients had a higher NLR and a lower LMR than controls, but that research did not consider the impact of antifibrotic treatment, which may improve outcomes.
There has been accumulating cellular and molecular evidence that leukocyte population abnormalities are associated with IPF outcomes, but that work was more discovery based and relied on tests that aren’t readily available clinically, said Erica L. Herzog, MD, PhD, who was asked to comment on the study. “It’s provided a lot of insight into potential new mechanisms and potential biomarkers, but their clinical utility for patients is limited. So the use of lymphocyte-to-monocyte ratios that can be obtained from a complete blood cell count, which is a test that can be done in any hospital, would really be a game-changer in terms of predictive algorithms for patients with IPF,” said Dr. Herzog, who is a professor of medicine and pathology at Yale University, New Haven, Conn., and director of the Yale ILD Center of Excellence.
In humans, abnormalities in circulating monocytes and lymphocytes have individually been linked to worse IPF outcomes. Animal studies have implicated monocyte-derived cells in lung fibrosis, but they have also shown that lymphocyte populations are not required for fibrosis, so more work is needed.
“I think what we’re finding is that lymphocytes probably have a regulatory role, and there’s probably a protective population and potentially a pathogenic population. Something about the balance between adaptive immunity, which is reflected by your lymphocytes, and innate immunity, which is reflected by your monocytes. Something about that balance is important for tissue homeostasis, and then when it’s disrupted or perturbed, fibrosis ensues,” said Dr. Herzog.
Study details
The newly diagnosed patients were older (mean age, 70 years) than the end-stage IPF patients (mean age, 60 years) and patients with IPF and lung cancer (mean age, 64 years; P < .0001).
Among newly diagnosed IPF patients, a receiver operating characteristic (ROC) analysis performed before antifibrotic treatment identified an LMR cutoff of 4.18, with an area under the curve of 0.67 (P = .025). Values below 4.18 were associated with shorter survival (hazard ratio, 6.88; P = .027).
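The authors' exact cutoff-selection method is not detailed here, but one common approach is to pick the threshold that maximizes Youden's J statistic along the ROC curve. The sketch below illustrates that idea on synthetic data only; nothing in it comes from the study itself.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)

# Synthetic illustration only: baseline LMR values and death/transplant events (1 = event).
lmr = np.concatenate([rng.normal(3.2, 0.8, 30), rng.normal(4.8, 0.9, 47)])
event = np.concatenate([np.ones(30), np.zeros(47)]).astype(int)

# Lower LMR predicts the event, so score on the negated ratio.
score = -lmr
fpr, tpr, thresholds = roc_curve(event, score)
auc = roc_auc_score(event, score)

# Youden's J = sensitivity + specificity - 1; its maximum is one common choice of cutoff.
j = tpr - fpr
best = int(np.argmax(j))
cutoff = -thresholds[best]  # convert the score threshold back to an LMR value
print(f"AUC = {auc:.2f}, LMR cutoff ~ {cutoff:.2f}")
```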
Patients with LMR less than 4.18 were more likely to be men (89% vs. 67%; P = .036), had a lower percent predicted forced vital capacity (76% vs. 87%; P = .023), and were more likely to die or undergo lung transplant (34% vs. 5%; P = .009).
Mortality results
A Kaplan-Meier curve illustrated a stark difference between the groups: nearly all patients with an LMR of 4.18 or higher remained alive out to almost 100 months of follow-up, whereas only around 30% of those with an LMR below that value remained alive. “You don’t normally see curves like that,” said Dr. Herzog.
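Readers who want to reproduce this kind of survival comparison on their own data could stratify by the LMR cutoff and fit Kaplan-Meier estimators; the minimal sketch below uses the lifelines package and made-up values, and is only one assumed way to do it, not the investigators' code.

```python
import pandas as pd
from lifelines import KaplanMeierFitter

# Toy data only: follow-up in months, death/transplant indicator, and baseline LMR.
df = pd.DataFrame({
    "months": [12, 30, 45, 60, 80, 95, 20, 35, 50, 70],
    "event":  [1,  1,  0,  1,  0,  0,  1,  1,  1,  0],
    "lmr":    [2.9, 3.4, 4.5, 3.8, 5.1, 4.4, 2.5, 3.0, 3.9, 4.6],
})

kmf = KaplanMeierFitter()
for above_cutoff, group in df.groupby(df["lmr"] >= 4.18):
    name = "LMR >= 4.18" if above_cutoff else "LMR < 4.18"
    kmf.fit(group["months"], group["event"], label=name)
    print(name, "median survival:", kmf.median_survival_time_)
```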
There was no significant difference in blood cell counts and ratios at the time of IPF diagnosis and after 1 year of antifibrotic treatment, which suggests that the risk profile is independent of treatment, according to the study authors.
Among patient subgroups, those with IPF and lung cancer had the lowest mean LMR (2.2), followed by newly diagnosed patients (3.5), and those with end-stage disease (3.6; P < .0001).
The study authors reported financial relationships with various pharmaceutical companies. Dr. Herzog had no relevant financial disclosures.
FROM RESPIRATORY MEDICINE
Bulevirtide shows real-world efficacy versus HDV
A real-world analysis of bulevirtide found a safety and efficacy profile similar to what was seen in earlier clinical trials in the treatment of hepatitis delta virus (HDV) infection.
HDV can only infect patients already carrying hepatitis B virus (HBV), but it causes the most severe form of viral hepatitis as it can progress to cirrhosis within 5 years and to hepatocellular carcinoma within 10 years.
Bulevirtide is a first-in-class medication that mimics the hepatitis B surface antigen, binding to its receptor on hepatocytes and preventing HDV viral particles from binding to it. The drug received conditional marketing approval by the European Medicines Agency in 2020 and has received a breakthrough therapy designation from the U.S. Food and Drug Administration.
The study was presented at the annual meeting of the American Association for the Study of Liver Diseases by Victor De Ledinghen, PhD, who is a professor of hepatology and head of the hepatology and liver transplantation unit at Bordeaux (France) University Hospital.
The early-access program launched after the French National Agency for Medicines and Health Products approved bulevirtide in 2019. It was made available to patients with compensated cirrhosis or severe liver fibrosis (F3), or to patients with F2 fibrosis and alanine aminotransferase (ALT) levels more than twice the upper limit of normal for 6 months or more. Patients received bulevirtide alone (n = 77) or in combination with peg-interferon (n = 68), as determined by their physician.
The researchers defined virologic efficacy as HDV RNA levels being undetectable, or decreased by at least 2 log10 from baseline. They defined biochemical efficacy as ALT levels below 40 IU/L.
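To make those definitions concrete, the small helper below classifies a single patient's 12-month result; the function names and the example patient are hypothetical, and this is only a sketch of the stated criteria.

```python
import math
from typing import Optional

def virologic_response(baseline_iu_ml: float, followup_iu_ml: Optional[float]) -> bool:
    """Responder if HDV RNA is undetectable (None here) or fell by >= 2 log10 IU/mL."""
    if followup_iu_ml is None:
        return True
    decline = math.log10(baseline_iu_ml) - math.log10(followup_iu_ml)
    return decline >= 2.0

def biochemical_response(alt_iu_l: float) -> bool:
    """Responder if ALT is below 40 IU/L."""
    return alt_iu_l < 40

# Hypothetical patient: HDV RNA falls from 1e6 to 1e3 IU/mL (a 3-log decline), ALT 35 IU/L.
print(virologic_response(1e6, 1e3), biochemical_response(35))
```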
A per-protocol analysis included all patients in the bulevirtide group but excluded 12 from the combination group who discontinued peg-interferon (n = 56). Nineteen patients in the bulevirtide group had a treatment modification, and seven discontinued treatment. Five in the combination group had a treatment modification, and 14 stopped treatment. At 12 months, the decline in median HDV RNA was greater in the combination group (–5.65 versus –3.64 log10 IU/mL), though the study was not powered to compare the two groups. At 12 months, the combination group had 93.9% virologic efficacy, compared with 68.3% in the bulevirtide group.
The two groups had similar mean ALT levels at 12 months (48.91 and 48.03 IU/L, respectively), with more patients in the bulevirtide group having normal ALT levels (<40 IU/L; 48.8% versus 36.4%). At 12 months, 39.0% of the bulevirtide group and 30.3% of the combination group had a combined response, defined as undetectable HDV RNA or a decline of at least 2 log10 from baseline, plus normal ALT levels.
Twenty-nine patients in the bulevirtide group had an adverse event, compared with 43 in the combination group. The two groups were similar in the frequency of grade 3-4 adverse events (7 versus 6), discontinuation due to adverse events (2 versus 3), deaths (0 in both), injection-site reactions (2 in both), liver-related adverse events (4 versus 2), and elevated bile acid (76 versus 68).
During the Q&A period following the presentation, Dr. De Ledinghen was asked if he has a preferred regimen for HDV patients. “I think it depends on the tolerance of peg-interferon because of all the side effects with this drug. I think we need to have predictive factors of virological response with or without interferon. At this time, I don’t have a preference, but I think at this time we need to work on predictive factors associated with virologic response,” he said.
The EMA’s conditional bulevirtide approval hinged on results from phase 2 clinical trials, while the phase 3 clinical studies are ongoing. “This was a very unusual step for the EMA to provide what is similar to emergency use approval while the phase 3 clinical trials are still ongoing,” said Anna Lok, MD, who was asked to comment on the study. Dr. Lok is a professor of internal medicine, director of clinical hepatology, and assistant dean for clinical research at the University of Michigan, Ann Arbor.
She noted that the phase 2 studies indicated that the combination with peg-interferon seems to have an additive effect on HDV suppression, while monotherapy with bulevirtide has a greater effect on normalizing ALT levels. The real-world experience confirms these findings.
But the real-world data revealed some concerns. “What really worried me is the large number of patients who required dose modifications or discontinuations, and that seems to be the case in both treatment groups. They didn’t really go into a lot of details [about] why patients needed treatment modifications, but one has to assume that this is due to side effects,” said Dr. Lok.
She also noted that the per-protocol analysis, instead of an intention-to-treat analysis, is a weakness of the study. Additionally, over time, the number of patients analyzed decreased – as many as 40% of patients didn’t have test results at month 12. “It makes you wonder what happened to those patients. Many probably didn’t respond, in which case your overall response rate will be far lower,” said Dr. Lok.
The study was funded by Gilead. Dr. De Ledinghen has financial relationships with Gilead, AbbVie, Echosens, Hologic, Intercept Pharma, Tillotts, Orphalan, Alfasigma, Bristol Myers Squibb, and Siemens Healthineers. Dr. Lok has no relevant financial disclosures.
FROM THE LIVER MEETING
CDK4/6 inhibitors: Should they be stopped in the face of COVID-19?
More than one-third of patients with metastatic hormone receptor–positive, HER2-negative breast cancer who temporarily stopped cyclin-dependent kinase 4/6 inhibitor (CDK4/6i) therapy experienced disease progression during the interruption, according to a multicenter retrospective study. The treatment interruptions occurred during the COVID-19 pandemic, out of concern that myelosuppression from the drugs might make patients more vulnerable to COVID-19 infection, and that other side effects might be confused with symptoms of COVID-19 infection.
The finding comes from a multicenter study presented by Sophie Martin, PhD, at the San Antonio Breast Cancer Symposium. Dr. Martin is a researcher at ICANS Institut de cancérologie Strasbourg Europe. The patients included in the study had a complete or partial response, or stable disease, for at least 6 months.
Although CDK4/6i combined with endocrine therapy has led to significant improvements in outcomes among patients with metastatic HR-positive, HER2-negative breast cancer, the treatment can lead to chronic toxicities that may affect quality of life.
In its 2020 guidance on management of cancer patients during the COVID-19 pandemic, the European Society for Medical Oncology noted that cancer patients are at higher risk of severe symptoms and worse outcomes. However, it pointed out that there is no direct evidence that neutropenia caused by CDK4/6i or poly(adenosine diphosphate–ribose) polymerase (PARP) inhibitors leads to an increased risk of COVID-19 infection.
The American Society for Clinical Oncology guidance for managing treatment of cancer patients in the context of COVID-19 also says there is little direct evidence to guide practice regarding therapies that may lead to immunosuppression. Therefore, the society recommends against changing or withholding those drugs. “The balance of potential harms that may result from delaying or interrupting treatment versus the potential benefits of possibly preventing or delaying COVID-19 infection is very uncertain,” the authors wrote.
There were 60 patients in the study, and the median age was 64 years. The average interruption period was 8 weeks. Twenty-two patients (37%) experienced radiological and/or clinical disease progression. Sixteen of the 22 (73%) restarted on CDK4/6i, while the remaining 4 patients initiated chemotherapy or targeted therapy. Two patients died during CDK4/6i treatment interruption. A univariate analysis found that the presence of liver metastases was associated with increased risk of progression during CDK4/6i withdrawal (odds ratio, 5.50; 95% confidence interval, 1.14-26.41).
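For orientation, an odds ratio and Wald 95% confidence interval of this kind can be obtained from a univariate logistic regression; the sketch below uses statsmodels on made-up data purely to illustrate the statistic, not the study's actual analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Made-up data: progression = 1 if disease progressed during interruption,
# liver_mets = 1 if liver metastases were present at interruption.
df = pd.DataFrame({
    "progression": [1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0],
    "liver_mets":  [1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0],
})

fit = smf.logit("progression ~ liver_mets", data=df).fit(disp=False)
or_point = np.exp(fit.params["liver_mets"])
or_ci = np.exp(fit.conf_int().loc["liver_mets"])
print(f"OR = {or_point:.2f}, 95% CI {or_ci[0]:.2f}-{or_ci[1]:.2f}")
```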
There was also a trend toward greater likelihood of disease progression when the withdrawal period was 2 or more months (OR, 2.38), but the finding was not statistically significant. Although the study looked at treatment interruption due to the COVID-19 pandemic, the authors noted that the findings likely apply to other reasons for interruption, such as analgesic radiotherapy or planned surgery.
Although the study authors advise against stopping CDK4/6 inhibitors, another small study conducted at a single German center suggested that treatment interruption might be an option in patients with stable disease. The authors examined elective CDK4/6i discontinuation among 22 patients with advanced, hormone receptor–positive, HER2-negative breast cancer who had stable disease for at least 6 months with treatment regimens of CDK4/6i plus aromatase inhibitors or fulvestrant. After discontinuation of CDK4/6i but maintenance of endocrine therapy, 13 patients had stable disease, 8 had a partial response, and 1 had a complete response. After withdrawal, 5 patients had a local relapse and 1 experienced systemic progression. These patients were restabilized with chemotherapy or retreatment with CDK4/6i.
“Discontinuation of CDK4/6 inhibitors seems to be safe in selected patients with metastatic HR-positive HER-2-negative breast cancer and prolonged disease control,” the authors wrote, although they noted that the results need to be backed up with prospective clinical trials.
Both studies had small sample sizes and were retrospective in nature.
One author on the COVID-19 study has received consulting fees from Lilly, Novartis, Pfizer, Daiichi Sankyo, Seagen, and AstraZeneca. Authors of the German study have received honoraria from Iomedico, Novartis, Roche, AstraZeneca, Boehringer Ingelheim, Merck, Sanofi, and BMS.
FROM SABCS 2021
In metastatic breast cancer, primary resections on the decline
Retrospective studies have suggested a possible benefit to resection, according to Sasha Douglas, MD, who presented the study (abstract PD7-06) at the 2021 San Antonio Breast Cancer Symposium. “Intuitively, you would think that you would want to get the primary tumor out even if it’s metastasized, so that it couldn’t metastasize more,” said Dr. Douglas, who is a surgical resident at the University of California, San Diego.
However, clinical trials have yielded mixed results, and the picture is complicated by the various molecular subtypes of breast cancer, metastatic sites, and other factors. “Different studies, whether it’s retrospective, and a really large database that has lots of numbers of patients, can give you a different answer than a smaller prospective randomized, controlled study in a different cohort of patients. So, we just thought it would be really interesting to look at all the trends over time at Commission on Cancer–accredited hospitals. Do they seem to be following what the latest literature is showing?” Dr. Douglas said in an interview.
The researchers used data from 87,331 cases from the National Cancer Database (NCDB) and examined rates of primary surgery as well as palliative care in women with metastatic breast cancer who had responded well to systemic therapy.
Between 2004 and 2009, the frequency of primary tumor resection remained near 35% (with a peak of 37% in 2009), then began a steady descent to 18% by 2017. The researchers found similar trends in estrogen receptor–positive/progesterone receptor–positive, HER2-negative (ER/PR+, HER2–); HER2-positive; and triple-negative subtypes.
In 2004, 48% of patients received only systemic therapy, while 37% received some combination of surgery and radiation to the primary tumor. By 2019, 69% received only systemic therapy and 20% received locoregional therapy (P < .001). “It seems that surgeons and providers and medical oncologists are becoming more selective about who they’re going to offer surgery to, and I think that’s very appropriate,” said Dr. Douglas.
But another finding suggests room for improvement: Just 21% of patients received palliative care. “I think that everybody with a major systemic illness like this would benefit from palliative care, just on a supportive basis. The palliative care team can really help people with quality of life, but I think it still has that stigma, and I think that’s what we’ve seen from our study,” said Dr. Douglas.
“We’re just postulating, [but] a lot of that could be from the stigma of thinking that palliative care means giving up. It doesn’t necessarily mean that. It means you’re dealing with a difficult chronic illness, and [palliative care] can be very, very helpful for patients,” said Dr. Douglas.
The study is limited by its retrospective nature, and palliative care might be underreported in the NCDB.
The study was funded by the National Cancer Institute and the University of California, San Diego. Dr. Douglas has no financial disclosures.
FROM SABCS 2021
OSA linked to white-matter hyperintensities
Individuals diagnosed with obstructive sleep apnea (OSA) have higher volumes of white-matter hyperintensities (WMHs), according to a new analysis of data from the SHIP-Trend-0 cohort in Western Pomerania, Germany, which is part of the Study of Health In Pomerania. The association was true for individual measures of OSA, including apnea-hypopnea index (AHI) and oxygen desaturation index (ODI).
WMHs are often seen on MRI in older people and in patients with stroke or dementia, and they may be an indicator of cerebral small-vessel disease. They are linked to greater risk of abnormal gait, worsening balance, depression, cognitive decline, dementia, stroke, and death. Suggested mechanisms for harms from WMHs include ischemia, hypoxia, hypoperfusion, inflammation, and demyelination.
WMHs have been linked to vascular risk factors like smoking, diabetes, and hypertension. Brain pathology studies have found loss of myelin, axonal loss, and scarring close to WMHs.
Although a few studies have looked for associations between WMHs and OSA, they have yielded inconsistent results. The new work employed highly standardized data collection and more complete covariate adjustment. The results, published in JAMA Network Open, suggest a novel, and potentially treatable, pathological WMH mechanism, according to the authors.
“This is an important study. It has strong methodology. The automated analysis of WMH in a large population-based cohort helps to eliminate several biases that can occur in this type of assessment. The data analysis was massive, with adequate control of all potential confounders and testing for interactions. This generated robust results,” said Diego Z. Carvalho, MD, who was asked to comment on the findings. Dr. Carvalho is an assistant professor of neurology at the Center for Sleep Medicine at the Mayo Clinic, Rochester, Minn.
Worse apnea, worse hyperintensity
“The association varies according to the degree of apnea severity, so mild OSA is probably not associated with increased WMH, while severe OSA is most likely driving most of the associations,” said Dr. Carvalho.
If a causal mechanism were to be proven, it would “bring a stronger call for treatment of severe OSA patients, particularly those with increased risk for small-vessel disease, [such as] patients with metabolic syndrome. Likewise, patients with severe OSA would be the best candidates for therapeutic trials with [continuous positive airway pressure] with or without possible adjunctive neuroprotective treatment for halting or slowing down WMH progression,” said Dr. Carvalho.
Stuart McCarter, MD, who is an instructor of neurology at the Center for Sleep Medicine at the Mayo Clinic, Rochester, Minn., also found the results interesting but pointed out that much more work needs to be done. “While they found a relationship between OSA as well as OSA severity and WMH despite adjusting for other known confounders, it is unlikely that it is as simple as OSA is the main causal factor for WMH, given the complex relationship between OSA, hypertension, and metabolic syndrome. However, this data does highlight the importance of considering OSA in addition to other more traditional risk factors when considering modifiable risk factors for brain aging,” said Dr. McCarter. The study cohort was mostly of White European ancestry, so more work also needs to be done in other racial groups.
The study underlines the importance of screening for OSA among individuals with cognitive impairment. “If OSA represents a modifiable risk factor for WMH and associated cognitive decline, then it would represent one of the few potentially treatable etiologies, or at least contributors of cognitive impairment,” said Dr. McCarter.
The SHIP-Trend-0 cohort is drawn from adults in Western Pomerania. The researchers analyzed data from 529 patients who had WMH and for whom intracranial volume data were available. Each member of the cohort also underwent polysomnography.
Based on AHI criteria, 24% of the overall sample had mild OSA, 10% had moderate OSA, and 6% had severe OSA.
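The severity bands used for this breakdown are not restated in the article, so the helper below assumes the commonly used AHI thresholds (mild 5 to <15, moderate 15 to <30, severe ≥30 events per hour of sleep); treat the cutoffs as an assumption rather than the study's definition.

```python
def osa_severity(ahi_events_per_hour: float) -> str:
    """Classify OSA severity using commonly cited AHI thresholds (assumed here)."""
    if ahi_events_per_hour < 5:
        return "none"
    if ahi_events_per_hour < 15:
        return "mild"
    if ahi_events_per_hour < 30:
        return "moderate"
    return "severe"

print([osa_severity(x) for x in (3, 9, 22, 41)])  # ['none', 'mild', 'moderate', 'severe']
```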
After adjustment for sex, age, intracranial volume, and body weight, WMH volume was associated with AHI (beta = 0.024; P < .001) and ODI (beta = 0.033; P < .001). WMH counts were also associated with AHI (beta = 0.008; P = .01) and ODI (beta = 0.011; P = .02).
The effect size increased with greater OSA severity as measured by AHI, for both WMH volume (beta = 0.312, 0.480, and 1.255 for mild, moderate, and severe OSA, respectively) and WMH count (beta = 0.129, 0.107, and 0.419). The ODI regression models showed similar associations for WMH volume (beta = 0.426, 1.030, and 1.130) and WMH count (beta = 0.141, 0.315, and 0.538).
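A covariate-adjusted regression of the general form described above can be sketched as follows; the synthetic data, variable names, and model specification are assumptions for illustration only and do not reproduce the published model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the cohort: one row per participant (all values made up).
rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "ahi": rng.gamma(2.0, 5.0, n),          # apnea-hypopnea index, events/hour
    "age": rng.normal(55, 10, n),
    "sex": rng.integers(0, 2, n),
    "icv": rng.normal(1500, 120, n),        # intracranial volume, mL
    "weight": rng.normal(80, 15, n),        # body weight, kg
})
df["wmh_volume"] = 0.5 + 0.03 * df["ahi"] + 0.02 * df["age"] + rng.normal(0, 0.5, n)

# Covariate-adjusted linear model; the coefficient on 'ahi' plays the role of the
# adjusted beta reported above (model form and transformations are assumptions).
fit = smf.ols("wmh_volume ~ ahi + age + C(sex) + icv + weight", data=df).fit()
print(fit.params["ahi"], fit.pvalues["ahi"])
```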
Dr. Carvalho and Dr. McCarter disclosed no relevant financial relationships.
A version of this article first appeared on Medscape.com.
Virtual center boosts liver transplant listings in rural area
A “virtual” liver transplant center serving Vermont and New Hampshire has improved access to liver transplant listing among patients in rural areas of the region, according to a new analysis.
The virtual center was established in 2016 at Dartmouth Hitchcock Medical Center (DHMC). It allows patients to receive pre–liver transplant evaluation, testing, and care, as well as posttransplant follow-up, locally rather than at the liver transplant center that performs the surgery. The center is staffed by two hepatologists, two associate care providers, and a nurse liver transplant coordinator at DHMC, and its launch was followed by increased transplant listings in the surrounding area, according to Margaret Liu, MD, who presented the study at the virtual annual meeting of the American Association for the Study of Liver Diseases.
“The initiation of this Virtual Liver Transplant Center has been able to provide patients with the ability to get a full liver transplant workup and evaluation at a center near their home rather than the often time-consuming and costly process of potentially multiple trips to a liver transplant center up to 250 miles away for a full transplant evaluation,” said Dr. Liu in an interview. Dr. Liu is an internal medicine resident at Dartmouth Hitchcock Medical Center.
“Our results did show that the initiation of a virtual liver transplant center correlated with an increased and sustained liver transplant listing rate within 60 miles of Dartmouth over that particular study period. Conversely there was no significant change in the listing rate of New Hampshire zip codes that were within 60 miles of the nearest transplant center during the same study period,” said Dr. Liu.
The center receives referrals of patients who are potential candidates for liver transplant listing from practices throughout New Hampshire and Vermont, as well as from within DHMC itself. Its specialists conduct a full pre–liver transplant workup, including evaluation of the patient’s general health and social factors, before sending the patient to the transplant center for final evaluation and surgery. “We essentially do all of the pre–liver transplant workup, a formal liver transplant evaluation, and then the whole packet gets sent to an actual liver transplant center to expedite the process of getting listed for liver transplant. We’re able to streamline the process, so they get everything done here at a hospital near their home. If that requires multiple trips, it’s a lot more doable for the patients,” said Dr. Liu.
The researchers defined urban areas as having more than 50,000 people per square mile and being within 30 miles of the nearest hospital, and rural areas as having fewer than 10,000 and being more than 60 miles from the nearest hospital. They used the Scientific Registry of Transplant Recipients to determine the number of liver transplant listings per zip code.
Between 2015 and 2019, the frequency of liver transplant listings per 10,000 people remained nearly unchanged in the metropolitan area of southern New Hampshire, ranging from around 0.36 to 0.75. In the rural area within 60 miles of DHMC, the frequency rose from about 0.7 per 10,000 in 2015 to about 1.4 in 2016, dipped to 0.9 in 2017, climbed to nearly 3 per 10,000 in 2018, and was just over 2 in 2019.
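As a rough illustration of the two calculations described above (the urban/rural classification and the listing rate per 10,000 residents), here is a small Python sketch. The thresholds mirror the definitions as reported in the article; the record fields and example values are hypothetical and are not the study’s data.

```python
# Illustrative only: applies the urban/rural thresholds as described in the
# article and computes liver transplant listings per 10,000 residents.
# Field names and the example values are hypothetical.
from dataclasses import dataclass

@dataclass
class ZipArea:
    population: int                  # residents, used for the per-10,000 rate
    people_per_sq_mile: float        # density threshold used for classification
    miles_to_nearest_hospital: float
    listings: int                    # liver transplant listings in a given year

def classify(area: ZipArea) -> str:
    """Urban/rural classification using the thresholds reported in the article."""
    if area.people_per_sq_mile > 50_000 and area.miles_to_nearest_hospital <= 30:
        return "urban"
    if area.people_per_sq_mile < 10_000 and area.miles_to_nearest_hospital > 60:
        return "rural"
    return "other"

def listings_per_10k(area: ZipArea) -> float:
    """Listing frequency expressed per 10,000 residents."""
    return 10_000 * area.listings / area.population

# Hypothetical example: a sparsely populated zip code far from any hospital.
example = ZipArea(population=14_000, people_per_sq_mile=300,
                  miles_to_nearest_hospital=75, listings=4)
print(classify(example), round(listings_per_10k(example), 2))  # rural 2.86
```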
The model has the potential to be used in other areas, according to Dr. Liu. “This could potentially be implemented in other rural areas that do not have a transplant center or don’t have a formal liver transplant evaluation process,” said Dr. Liu.
While other centers may have taken on some aspects of liver transplant evaluation and posttransplant care, the Virtual Liver Transplant Center is unique in that a great deal of effort has gone into covering all of a patient’s needs for the liver transplant evaluation. “It’s really the formalization that, from what I have researched, has not been done before,” said Dr. Liu.
The model addresses transplant-listing disparities and improves patients’ quality of life by reducing travel, according to Mayur Brahmania, MD, of Western University, London, Ont., who moderated the session. “They’ve proven that they can get more of their patients listed over the study period, which I think is amazing. The next step, I think, would be about whether getting them onto the transplant list actually made a difference in terms of outcome – looking at their wait list mortality, looking at how many of these patients actually got a liver transplantation. That’s the ultimate outcome,” said Dr. Brahmania.
He also noted the challenge of setting up a virtual center. “You have to have allied health staff – addiction counselors, physical therapists, dietitians, social workers. You need to have the appropriate ancillary services like cardiac testing, pulmonary function testing. It’s quite an endeavor, and if the program isn’t too enthusiastic or doesn’t have a local champion, it’s really hard to get something like this started off. So kudos to them for taking on this challenge and getting this up and running over the last 5 years,” said Dr. Brahmania.
Dr. Liu and Dr. Brahmania have no relevant financial disclosures.
AGA applauds researchers who are working to raise our awareness of health disparities in digestive diseases. AGA is committed to addressing this important societal issue head on. Learn more about AGA’s commitment through the AGA Equity Project.
FROM THE LIVER MEETING