Aging HIV patients face comorbidities and hospitalizations
Thanks to effective treatment, people with HIV are living longer. But as they age, they face higher rates of age-related comorbidities and hospitalizations, according to a recent study of hospitalized patients.
Decision-makers will need to allocate resources, train providers, and plan ways to manage chronic diseases, such as diabetes and cancer, among geriatric HIV inpatients, according to the authors.
“There will be more [HIV] patients with age-related chronic conditions at an earlier age and who will utilize or will have a unique need for [health care for] these geriatric conditions,” first author Khairul A. Siddiqi, PhD, University of Florida, Gainesville, said in an interview. “Eventually, that may increase inpatient resource utilization and costs.”
The study was published online in HIV Medicine.
Aging with HIV
Analyzing the National Inpatient Sample (NIS) of the Healthcare Cost and Utilization Project, the authors compared characteristics and comorbidities linked to hospital stays among people with HIV (HSWH) to those linked to hospital stays among people without HIV (HSWOH).
The NIS is a database of hospital records that captures 20% of discharges in the United States and covers all payers. Data in this analysis covered the years 2003-2015.
Among HSWH, patients aged 50 or older accounted for an increasing proportion over time, from fewer than 25% in 2003 to over 50% by 2015, the authors found. The subgroup aged 65-80 rose from 2.39% to 8.63% over the same period.
The authors also studied rates of eight comorbidities, termed HIV-associated non-AIDS (HANA) conditions: cardiovascular, lung, liver, neurologic, and kidney diseases; diabetes; cancer; and bone loss.
The average number of these conditions among both HSWH and HSWOH rose over time. But this change was disproportionately high among HSWH aged 50-64 and those aged 65 and older.
Over the study period, among patients aged 65 or older, six of the eight age-related conditions the researchers studied rose disproportionately among HSWH in comparison with HSWOH; among those aged 50-64, five conditions did so.
The researchers are now building on the current study of HSWH by examining rates of resource utilization, such as MRIs and procedures, Dr. Siddiqi said.
Study limitations included a lack of data from long-term facilities, potential skewing by patients hospitalized multiple times, and the inherent limitations of administrative data.
A unique group of older people
Among people with HIV (PWH) in the United States, nearly half are aged 50 or older. By 2030, this group is expected to account for some 70% of PWH.
“We need to pay attention to what we know about aging generally. It is also important to study aging in this special population, because we don’t necessarily know a lot about that,” Amy Justice, MD, PhD, professor of medicine and of public health at Yale University, New Haven, Conn., said in an interview. Dr. Justice was not involved in the study.
The HIV epidemic has disproportionately affected people of color, men who have sex with men, and people with a history of injection drug use, Dr. Justice said.
“We don’t know about aging with [a] past history of injection drug use. We don’t even know much about aging with hepatitis C, necessarily,” she said. “So there are lots of reasons to pay some attention to this population to try to optimize their care.”
In addition, compared with their non–HIV-affected counterparts, these individuals are more susceptible to HANA comorbidities. They may experience these conditions at a younger age or more severely. Chronic inflammation and polypharmacy may be to blame, said Dr. Justice.
Given the burden of comorbidities and polypharmacy in this patient population, Dr. Siddiqi said, policy makers will need to focus on developing chronic disease management interventions for them.
However, Dr. Justice added, the risk for multimorbidity is higher among people with HIV throughout the age cycle: “It’s not like I turn 50 with HIV and all of a sudden all the wheels come off. There are ways to successfully age with HIV.”
Geriatric HIV expertise needed
Dr. Justice called the study’s analysis a useful addition to the literature and noted its implications for training.
“One of the biggest challenges with this large bolus of folks who are aging with HIV,” she said, “is to what extent should they be cared for by the people who have been caring for them – largely infectious disease docs – and to what extent should we really be transitioning their care to people with more experience with aging.”
Another key question, Dr. Justice said, relates to nursing homes and assisted-living facilities, whose staff may lack experience caring for HIV patients. Training them and hospital-based providers is crucial, in part to avoid key errors, such as missed antiretroviral doses, she said: “We need to really think about how to get non-HIV providers up to speed.”
That may begin by simply making it clear that this population is here.
“A decade ago, HIV patients used to have a lower life expectancy, so all HIV studies used to use 50 years as the cutoff point for [the] older population,” Dr. Siddiqi said. “Now we know they’re living longer.”
Added Dr. Justice: “Previously, people thought aging and HIV were not coincident findings.”
The study was funded by the Office of the Vice President for Research at the University of South Carolina. The authors and Dr. Justice disclosed no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM HIV MEDICINE
Artificial intelligence: The Netflix of cancer treatment
Chemotherapy, now streaming at just $15.99 a month!
It’s a lazy Sunday and you flip on Netflix, looking for something new to watch. There’s an almost-overwhelming number of shows out there, but right at the top of the recommended list is something that strikes your fancy right away. The algorithm behind the scenes is doing its job well, winnowing the universe of content right down to the few things you’ll find relevant, based on what you’ve watched and liked in the past.
Now, the almighty content algorithm is coming for something a little more useful than binge-watching obscure ’80s sitcoms: cancer treatment.
By plugging the fully sequenced genomes of nearly 10,000 patients with 33 different types of cancer into an algorithm powered by the same sort of artificial intelligence used by Netflix, researchers from London and San Diego found 21 common faults in the chromosomes of tumors, which they called copy number signatures. While cancer is a complex disease, when faults occur in those copy number signatures, the results were similar across the board. If X genetic defect occurs within a tumor, Y result will happen, even across cancer types. For example, tumors whose chromosomes had shattered and reformed had by far the worst disease outcomes.
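The "Netflix-style" engine behind recommendations of this kind is, loosely, matrix factorization: a large matrix of observations is decomposed into a small set of shared latent patterns plus per-item mixtures of those patterns. As a rough sketch of the idea only (synthetic stand-in data and an illustrative signature count, not the study's actual pipeline or parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in data: rows = tumors, cols = binned copy-number features.
V = rng.poisson(3.0, size=(100, 40)).astype(float) + 1e-9

k = 5  # number of latent "signatures" (illustrative choice)
W = rng.random((100, k)) + 0.1   # per-tumor exposures to each signature
H = rng.random((k, 40)) + 0.1    # the shared signatures themselves

# Lee-Seung multiplicative updates for nonnegative matrix factorization,
# the same family of technique used in recommender-style factorizations.
for _ in range(200):
    H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
    W *= (V @ H.T) / (W @ H @ H.T + 1e-9)

# Each tumor's profile is now approximated as a nonnegative mix of the
# k shared signatures, just as a viewer's ratings are modeled as a mix
# of latent taste factors.
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

The payoff of the decomposition is interpretability: a tumor's exposure vector says which recurring genomic patterns it carries, which is what lets outcomes be predicted across cancer types.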
The eventual hope is that, just as Netflix can predict what you’ll want to watch based on what you’ve already seen, oncologists will be able to predict the course of a cancer, based on the tumor’s early genetic traits, and get ahead of future genetic degradation to prevent the worst outcomes. A sort of “Oh, your tumor has enjoyed The Office. Might we suggest a treatment of 30 Rock” situation. Further research will be required to determine whether or not the cancer algorithm can get us part 2 of “Stranger Things 4” a week early.
Pay criminals, cut crime?
What is the best method for punishing those who commit wrongdoing? Fines? Jail time? Actually, no. A recent study says that financial compensation works best.
In other words, pay them for their actions. Really.
Psychologist Tage S. Rai, PhD, of the University of California, San Diego, Rady School of Management, found that people who hurt others or commit crimes often do so because they believe it is the right thing to do. The study's results suggest targeting that sense of morality: when it is undermined, the offender is less likely to do it again.
Four different experiments were conducted using an online economics game with nearly 1,500 participants. Dr. Rai found that providing a monetary bonus for inflicting a punishment on a third party within the game cut the participants’ willingness to do it again by 50%.
“People punish others to signal their own goodness and receiving compensation might make it seem as though they’re driven by greed rather than justice,” he said.
The big deterrent, though, was negative judgment from peers. People in the study were even more hesitant to inflict harm and gain a profit if they thought they were going to be judged for it.
So maybe the answer to cutting crime isn’t as simple as slapping on a fine. It’s slapping on shame and paying them for it.
A conspiracy of chronobiologic proportions
The Golden State Warriors just won the NBA championship – that much is true – but we’ve got some news that you didn’t get from ESPN. The kind of news that their “partners” from the NBA didn’t want them to report. Unlike most conspiracy theories, however, this one has some science behind it.
In this case, science in the form of a study published in Frontiers in Physiology says that jet lag had a greater effect on the Boston Celtics than it did on the Warriors.
“Eastward travel – where the destination time is later than the origin time – requires the athlete to shorten their day (known as a phase advance). During phase advance, athletes often struggle to fall asleep at an earlier bedtime, leading to sleep loss and, consequently, potential impaired physiological performance and motivation the next day,” senior author Elise Facer-Childs, PhD, of Monash University, Melbourne, said in a written statement.
Dr. Facer-Childs and associates took a very close look at 10 seasons’ worth of NBA games – 11,481 games, to be exact – and found “that eastward (but not westward) jet lag was associated with impaired performance for home (but not away) teams.” The existence of a pro-Western bias against teams that traveled eastward for their home games was clear:
- The chance of winning for eastern teams was reduced by 6.0%.
- They grabbed 1.3 fewer rebounds per game.
- Their field goal percentage was 1.2% lower.
And here’s the final nail in the conspiracy coffin: The NBA knew about the jet lag effect and changed the schedule of the finals in 2014 in a way that makes it worse. Before that, the higher-seeded team got two home games, then the lower-seeded team had three at home, followed by two more at the home of the higher seed. Now it’s a 2-2-1-1-1 arrangement that leads to more travel and, of course, more jet lag.
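The extra travel is easy to count: a team changes cities whenever consecutive games are in different arenas. A quick sketch, writing the higher seed's home games as 'H' and the lower seed's as 'L':

```python
# Count the travel legs implied by a finals home-game pattern.
# 'H' = higher seed's arena, 'L' = lower seed's arena.
def travel_legs(pattern: str) -> int:
    # A leg is any venue switch between consecutive games.
    return sum(a != b for a, b in zip(pattern, pattern[1:]))

old_232 = "HHLLLHH"   # pre-2014 2-3-2 format
new_221 = "HHLLHLH"   # current 2-2-1-1-1 format

print(travel_legs(old_232))  # 2 trips
print(travel_legs(new_221))  # 4 trips
```

If every game is played, the old 2-3-2 format requires two cross-country trips and the 2-2-1-1-1 format requires four, which is the "more travel, more jet lag" point the investigators make.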
The study was published during the championship series, so the investigators suggested that the Celtics “might benefit from chronobiology-informed strategies designed to mitigate eastward jet lag symptomatology.”
So there you have it, sports fans/conspiracy theorists: You can’t chase Steph Curry around the court for 48 minutes without the right chronobiology-informed strategy. Everyone knows that.
Being hungry can alter your ‘type’
Being hungry is a well-known recipe for becoming “hangry” and irritable, but did you know it can also affect your attraction to other people?
Evidence has shown that being hungry can affect important things such as decision-making, memory, cognition, and function. It might affect decision-making in the sense that those six tacos at Taco Bell might win out over grilled chicken breast and veggies at home, but can hunger make you think that the person you just swiped right on isn’t really your type after all?
We’ll leave that up to Valentina Cazzato of Liverpool (England) John Moores University and associates, whose study involved 44 people, of whom 21 were women in their early 20s. The participants were shown computer-generated images of men and women of different sizes. The same background was used for each picture and all the expressions of the models were neutral. Participants were asked to rate each image on how much they liked it. One study was done on participants who had been fasting for 12 hours, and the second was done on those who had just eaten something.
The subjects generally preferred slim models over more rounded ones, but not after fasting. When they were hungry, they found the round human bodies and faces more attractive. So, yes, it’s definitely possible that hunger can alter your attraction to others.
“Future work might seek to elucidate the relationship between physiological states of hunger and shifts in appreciation of the human bodies and whether this relationship might be mediated by individual traits associated with the beholder’s body adiposity,” the researchers said.
Chemotherapy, now streaming at just $15.99 a month!
It’s a lazy Sunday and you flip on Netflix, looking for something new to watch. There’s an almost-overwhelming number of shows out there, but right at the top of the recommended list is something that strikes your fancy right away. The algorithm behind the scenes is doing its job well, winnowing the universe of content right down to the few things you’ll find relevant, based on what you’ve watched and liked in the past.
Now, the almighty content algorithm is coming for something a little more useful than binge watching obscure 80s sitcoms: cancer treatment.
By plugging the fully sequenced genomes of nearly 10,000 patients with 33 different types of cancer into an algorithm powered by the same sort of artificial intelligence used by Netflix, researchers from London and San Diego found 21 common faults in the chromosomes of tumors, which they called copy number signatures. While cancer is a complex disease, when faults occur in those copy number signatures, the results were similar across the board. If X genetic defect occurs within a tumor, Y result will happen, even across cancer types. For example, tumors whose chromosomes had shattered and reformed had by far the worst disease outcomes.
The eventual hope is that, just as Netflix can predict what you’ll want to watch based on what you’ve already seen, oncologists will be able to predict the course of a cancer, based on the tumor’s early genetic traits, and get ahead of future genetic degradation to prevent the worst outcomes. A sort of “Oh, your tumor has enjoyed The Office. Might we suggest a treatment of 30 Rock” situation. Further research will be required to determine whether or not the cancer algorithm can get us part 2 of “Stranger Things 4” a week early.
Pay criminals, cut crime?
What is the best method for punishing those who commit wrongdoing? Fines? Jail time? Actually, no. A recent study says that financial compensation works best.
In other words, pay them for their actions. Really.
Psychologist Tage S. Rai, PhD, of the University of California, San Diego, Rady School of Management, found that people who hurt others or commit crimes are actually doing it because they think it’s the right thing to do. The results of this study say play at the angle of their morality. When that’s compromised, the offender is less likely to do it again.
Four different experiments were conducted using an online economics game with nearly 1,500 participants. Dr. Rai found that providing a monetary bonus for inflicting a punishment on a third party within the game cut the participants’ willingness to do it again by 50%.
“People punish others to signal their own goodness and receiving compensation might make it seem as though they’re driven by greed rather than justice,” he said.
The big deterrent, though, was negative judgment from peers. People in the study were even more hesitant to inflict harm and gain a profit if they thought they were going to be judged for it.
So maybe the answer to cutting crime isn’t as simple as slapping on a fine. It’s slapping on shame and paying them for it.
A conspiracy of chronobiologic proportions
The Golden State Warriors just won the NBA championship – that much is true – but we’ve got some news that you didn’t get from ESPN. The kind of news that their “partners” from the NBA didn’t want them to report. Unlike most conspiracy theories, however, this one has some science behind it.
In this case, science in the form of a study published in Frontiers in Physiology says that jet lag had a greater effect on the Boston Celtics than it did on the Warriors.
“Eastward travel – where the destination time is later than the origin time – requires the athlete to shorten their day (known as a phase advance). During phase advance, athletes often struggle to fall asleep at an earlier bedtime, leading to sleep loss and, consequently, potential impaired physiological performance and motivation the next day,” senior author Elise Facer-Childs, PhD, of Monash University, Melbourne, said in written statement.
Dr. Facer-Childs and associates took a very close look at 10 seasons’ worth of NBA games – 11,481 games, to be exact – and found “that eastward (but not westward) jet lag was associated with impaired performance for home (but not away) teams.” The existence of a pro-Western bias against teams that traveled eastward for their home games was clear:
- The chance of winning for eastern teams was reduced by 6.0%.
- They grabbed 1.3 fewer rebounds per game.
- Their field goal percentage was 1.2% lower.
And here’s the final nail in the conspiracy coffin: The NBA knew about the jet lag effect and changed the schedule of the finals in 2014 in a way that makes it worse. Before that, the higher-seeded team got two home games, then the lower-seeded team had three at home, followed by two more at the home of the higher seed. Now it’s a 2-2-1-1-1 arrangement that leads to more travel and, of course, more jet lag.
The study was published during the championship series, so the investigators suggested that the Celtics “might benefit from chronobiology-informed strategies designed to mitigate eastward jet lag symptomatology.”
So there you have it, sports fans/conspiracy theorists: You can’t chase Steph Curry around the court for 48 minutes without the right chronobiology-informed strategy. Everyone knows that.
Being hungry can alter your ‘type’
Fasting and being hungry can be a dangerous mix for becoming “hangry” and irritable, but did you know being hungry can also affect your attraction to other people?
Evidence has shown that being hungry can affect important things such as decision-making, memory, cognition, and function. It might affect decision-making in the sense that those six tacos at Taco Bell might win out over grilled chicken breast and veggies at home, but can hunger make you think that the person you just swiped right on isn’t really your type after all?
We’ll leave that up to Valentina Cazzato of Liverpool (England) John Moores University and associates, whose study involved 44 people, of whom 21 were women in their early 20s. The participants were shown computer-generated images of men and women of different sizes. The same background was used for each picture and all the expressions of the models were neutral. Participants were asked to rate each image on how much they liked it. One study was done on participants who had been fasting for 12 hours, and the second was done on those who had just eaten something.
The subjects generally preferred slim models over more rounded ones, but not after fasting. When they were hungry, they found the round human bodies and faces more attractive. So, yes, it’s definitely possible that hunger can alter your attraction to others.
“Future work might seek to elucidate the relationship between physiological states of hunger and shifts in appreciation of the human bodies and whether this relationship might be mediated by individual traits associated with to beholder’s body adiposity,” said researchers.
Chemotherapy, now streaming at just $15.99 a month!
It’s a lazy Sunday and you flip on Netflix, looking for something new to watch. There’s an almost-overwhelming number of shows out there, but right at the top of the recommended list is something that strikes your fancy right away. The algorithm behind the scenes is doing its job well, winnowing the universe of content right down to the few things you’ll find relevant, based on what you’ve watched and liked in the past.
Now, the almighty content algorithm is coming for something a little more useful than binge watching obscure 80s sitcoms: cancer treatment.
By plugging the fully sequenced genomes of nearly 10,000 patients with 33 different types of cancer into an algorithm powered by the same sort of artificial intelligence used by Netflix, researchers from London and San Diego found 21 common faults in the chromosomes of tumors, which they called copy number signatures. While cancer is a complex disease, when faults occur in those copy number signatures, the results were similar across the board. If X genetic defect occurs within a tumor, Y result will happen, even across cancer types. For example, tumors whose chromosomes had shattered and reformed had by far the worst disease outcomes.
The eventual hope is that, just as Netflix can predict what you’ll want to watch based on what you’ve already seen, oncologists will be able to predict the course of a cancer, based on the tumor’s early genetic traits, and get ahead of future genetic degradation to prevent the worst outcomes. A sort of “Oh, your tumor has enjoyed The Office. Might we suggest a treatment of 30 Rock” situation. Further research will be required to determine whether or not the cancer algorithm can get us part 2 of “Stranger Things 4” a week early.
Pay criminals, cut crime?
What is the best method for punishing those who commit wrongdoing? Fines? Jail time? Actually, no. A recent study says that financial compensation works best.
In other words, pay them for their actions. Really.
Psychologist Tage S. Rai, PhD, of the University of California, San Diego, Rady School of Management, found that people who hurt others or commit crimes often do so because they think it’s the right thing to do. The results of this study suggest playing on the angle of their morality: When that’s compromised, the offender is less likely to do it again.
Four different experiments were conducted using an online economics game with nearly 1,500 participants. Dr. Rai found that providing a monetary bonus for inflicting a punishment on a third party within the game cut the participants’ willingness to do it again by 50%.
“People punish others to signal their own goodness and receiving compensation might make it seem as though they’re driven by greed rather than justice,” he said.
The big deterrent, though, was negative judgment from peers. People in the study were even more hesitant to inflict harm and gain a profit if they thought they were going to be judged for it.
So maybe the answer to cutting crime isn’t as simple as slapping on a fine. It’s slapping on shame and paying them for it.
A conspiracy of chronobiologic proportions
The Golden State Warriors just won the NBA championship – that much is true – but we’ve got some news that you didn’t get from ESPN. The kind of news that their “partners” from the NBA didn’t want them to report. Unlike most conspiracy theories, however, this one has some science behind it.
In this case, science in the form of a study published in Frontiers in Physiology says that jet lag had a greater effect on the Boston Celtics than it did on the Warriors.
“Eastward travel – where the destination time is later than the origin time – requires the athlete to shorten their day (known as a phase advance). During phase advance, athletes often struggle to fall asleep at an earlier bedtime, leading to sleep loss and, consequently, potential impaired physiological performance and motivation the next day,” senior author Elise Facer-Childs, PhD, of Monash University, Melbourne, said in written statement.
Dr. Facer-Childs and associates took a very close look at 10 seasons’ worth of NBA games – 11,481 games, to be exact – and found “that eastward (but not westward) jet lag was associated with impaired performance for home (but not away) teams.” The existence of a pro-Western bias against teams that traveled eastward for their home games was clear:
- The chance of winning for eastern teams was reduced by 6.0%.
- They grabbed 1.3 fewer rebounds per game.
- Their field goal percentage was 1.2% lower.
And here’s the final nail in the conspiracy coffin: The NBA knew about the jet lag effect and changed the schedule of the finals in 2014 in a way that makes it worse. Before that, the higher-seeded team got two home games, then the lower-seeded team had three at home, followed by two more at the home of the higher seed. Now it’s a 2-2-1-1-1 arrangement that leads to more travel and, of course, more jet lag.
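For the skeptical, the extra travel is easy to count: just tally how often adjacent games in each format are played in different cities (H = higher seed at home, A = away). A quick sketch:

```python
def venue_changes(pattern: str) -> int:
    """Count how many times a team must travel between venues,
    i.e., adjacent games played in different cities."""
    return sum(a != b for a, b in zip(pattern, pattern[1:]))

# The higher seed's home/away sequence under each finals format:
old_2_3_2 = "HHAAAHH"   # pre-2014: two home, three away, two home
new_format = "HHAAHAH"  # 2-2-1-1-1: alternating venues down the stretch

print(venue_changes(old_2_3_2))   # 2
print(venue_changes(new_format))  # 4
```

Twice the venue switches means twice the opportunities for jet lag, which is exactly the exposure the study flags.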
The study was published during the championship series, so the investigators suggested that the Celtics “might benefit from chronobiology-informed strategies designed to mitigate eastward jet lag symptomatology.”
So there you have it, sports fans/conspiracy theorists: You can’t chase Steph Curry around the court for 48 minutes without the right chronobiology-informed strategy. Everyone knows that.
Being hungry can alter your ‘type’
Fasting and being hungry can be a dangerous mix for becoming “hangry” and irritable, but did you know being hungry can also affect your attraction to other people?
Evidence has shown that being hungry can affect important things such as decision-making, memory, cognition, and function. It might affect decision-making in the sense that those six tacos at Taco Bell might win out over grilled chicken breast and veggies at home, but can hunger make you think that the person you just swiped right on isn’t really your type after all?
We’ll leave that up to Valentina Cazzato of Liverpool (England) John Moores University and associates, whose study involved 44 people, 21 of whom were women in their early 20s. The participants were shown computer-generated images of men and women of different sizes. The same background was used for each picture, and all the models had neutral expressions. Participants were asked to rate each image on how much they liked it. One rating session was done with participants who had fasted for 12 hours, and a second with those who had just eaten something.
The subjects generally preferred slim models over more rounded ones, but not after fasting. When they were hungry, they found the round human bodies and faces more attractive. So, yes, it’s definitely possible that hunger can alter your attraction to others.
“Future work might seek to elucidate the relationship between physiological states of hunger and shifts in appreciation of the human bodies and whether this relationship might be mediated by individual traits associated with the beholder’s body adiposity,” the researchers said.
Pediatric obesity disparities widen
Lower levels of household income and education in the United States are associated with higher rates of adolescent obesity. These socioeconomic disparities “have widened during the last two decades,” new research shows.
Because obesity in adolescence has immediate and long-term health consequences, this phenomenon “may exacerbate socioeconomic disparities in chronic diseases into adulthood,” study author Ryunosuke Goto, MD, of University of Tokyo Hospital, and colleagues reported in JAMA Pediatrics.
Groups with higher rates of obesity may also be less likely to access treatment, said Kyung E. Rhee, MD, professor of pediatrics at University of California, San Diego School of Medicine, who was not involved in the new analysis.
“These are the families who have a harder time getting to the doctor’s office or getting to programs because they are working multiple jobs, or they don’t have as much flexibility,” Dr. Rhee told this news organization.
20 years of data
A recent study showed a relationship between socioeconomic status (SES) and weight in adults. Research examining current trends in adolescents has been limited, however, according to the authors of the new study.
To address this gap, Dr. Goto and colleagues looked at obesity trends among approximately 20,000 U.S. children aged 10-19 years using cross-sectional data from the 1999-2018 National Health and Nutrition Examination Surveys.
They compared the prevalence of obesity among participants whose household income was 138% of the federal poverty level or less versus those with higher levels of household income. They also examined obesity prevalence according to whether the head of household had graduated college.
Relative to higher-income households, adolescents from lower-income households were more likely to be non-Hispanic Black (21.7% vs. 10.4%) or Hispanic (30.6% vs. 13.4%) and to have an unmarried parent (54.5% vs. 23%). They were also more likely to have obesity (22.8% vs. 17.3%).
The prevalence of obesity likewise was higher among adolescents whose head of household did not have a college degree (21.8% vs. 11.6%).
In an analysis that adjusted for race, ethnicity, height, and marital status of the head of household, the prevalence of obesity increased over 20 years, particularly among adolescents from lower-income homes, the researchers reported.
Lower income was associated with an increase in obesity prevalence of 4.2 percentage points, and less education was associated with an increase in obesity prevalence of 9 percentage points.
By 2015-2018, the gap in obesity prevalence between low-income households and higher-income households was 6.4 percentage points more than it had been during 1999-2002 (95% confidence interval, 1.5-11.4). “When we assessed linear trends, the gap in obesity prevalence by income and education increased by an average of 1.5 (95% CI, 0.4-2.6) and 1.1 (95% CI, 0.0-2.3) percentage points every 4 years, respectively,” according to the researchers.
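As a rough consistency check (our arithmetic, not the authors’): four 4-year survey cycles separate 1999-2002 from 2015-2018, so a 1.5-point-per-cycle linear trend predicts roughly a 6-point widening of the income gap, in line with the reported 6.4:

```python
# Consistency check of the reported linear trend against the endpoint gap.
per_cycle_income = 1.5           # pp widening of the income gap per 4-year cycle
cycles = (2015 - 1999) // 4      # survey cycles from 1999-2002 to 2015-2018
predicted_gap_increase = per_cycle_income * cycles
print(predicted_gap_increase)    # 6.0, close to the reported 6.4 pp
```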
How to treat
Separately, researchers are studying ways to help treat patients with obesity and increase access to treatment. To that end, Dr. Rhee and colleagues developed a new program called Guided Self-Help Obesity Treatment in the Doctor’s Office (GOT Doc).
The guided self-help program was designed to provide similar resources as a leading treatment approach – family-based treatment – but in a less intensive, more accessible way.
Results from a randomized trial comparing this guided self-help approach with family-based treatment were published in Pediatrics.
The trial included 159 children and their parents. The children had an average age of 9.6 years and body mass index z-score of 2.1. Participants were primarily Latinx and from lower income neighborhoods.
Whereas family-based treatment included hour-long sessions at an academic center, the guided self-help program featured a 20-minute session in the office where patients typically see their primary care physician.
Both programs covered how to self-monitor food intake, set healthy goals, and modify the home environment to promote behavioral change. They also discussed body image, bullying, and emotional health. The program is framed around developing a healthy lifestyle rather than weight loss itself, Dr. Rhee said.
Children in both groups had significant reductions in their body mass index percentiles after the 6-month treatment programs. The reductions were largely maintained at 6-month follow-up.
Families in the guided self-help program, however, had a 67% lower risk of dropping out of the study and reported greater satisfaction and convenience. They attended more than half of the treatment sessions, whereas participants assigned to family-based treatment attended 1 in 5 sessions, on average.
The trial was conducted before the COVID-19 pandemic. Next, the researchers plan to test delivery of a guided self-help program via video calls, Dr. Rhee said.
Having options readily available for families who are interested in treatment for obesity proved valuable to clinicians, Dr. Rhee said. “They could then just refer them down the hall to the interventionist who was there, who was going to then work with the family to make these changes,” she said.
The study by Dr. Goto and colleagues was supported by grants from the Japan Society for the Promotion of Science. The trial by Dr. Rhee et al. was supported by a grant from the Health Resources and Services Administration. Neither research team had conflict of interest disclosures.
A version of this article first appeared on Medscape.com.
iLet system simplifies insulin delivery for type 1 diabetes
This transcript has been edited for clarity.
Today, I’m going to discuss the results of a new automated insulin delivery system that I think can really help many people with type 1 diabetes.
Dr. Steven Russell presented the results at the Advanced Technologies & Treatments for Diabetes meeting. The study focused on the iLet system, which is made by Beta Bionics and has been under development for a while. This was the single-hormone study, so it just looked at their algorithm using insulin alone. Eventually they’re going to study this, looking at the use of insulin plus glucagon together to see if that further improves outcomes.
One of the main reasons I think this study was so cool is that more than 25% of participants were minority individuals, who aren’t routinely included in these insulin device trials. The study also included people who had a wide range of hemoglobin A1c levels; there was no high cut-point here. Over 30% of participants had an A1c greater than 8%. They also studied both children and adults and combined the results together.
Before I talk about the results, let me tell you about the pump. This is a tubed pump that has a sensor that it communicates with – it’s the Dexcom sensor – and it has an algorithm so it does automated insulin delivery. Instead of having to enter all sorts of information into the system, this thing requires that you put in only the patient’s weight. That’s it. From there, the system begins to figure out what the patient needs in terms of automated insulin delivery.
There are several different target settings that can be entered, and they can differ by time of day. There’s basically the time of day that one is eating a meal, so breakfast, lunch, or dinner, and there is the meal size, basically small, medium, and large. The individual enters this in real time so the system knows they’re eating, but other than that, the system just works.
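To make that interaction model concrete, here is a hypothetical sketch of weight-only initialization plus qualitative meal announcements. This is not Beta Bionics’ actual algorithm; the 0.5 U/kg/day starting estimate is a commonly cited rule of thumb, and the meal-size fractions are assumptions chosen purely for illustration:

```python
# Hypothetical model only -- illustrates the weight-in, meal-size-in
# interaction described above, not the iLet's proprietary algorithm.

MEAL_FRACTION = {"small": 0.05, "medium": 0.10, "large": 0.15}  # assumed values

def estimated_daily_insulin(weight_kg: float) -> float:
    """Rough total daily insulin estimate from body weight alone
    (0.5 U/kg/day is a common rule-of-thumb starting point)."""
    return 0.5 * weight_kg

def meal_announcement_dose(weight_kg: float, meal_size: str) -> float:
    """Dose a fixed fraction of the daily estimate for an announced meal;
    a real adaptive system would tune these fractions over time."""
    return estimated_daily_insulin(weight_kg) * MEAL_FRACTION[meal_size]

print(round(meal_announcement_dose(70, "medium"), 1))  # 3.5 units
```

The point of the sketch is the interface, not the numbers: the only patient-specific input is weight, and everything else is a qualitative announcement.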
It does this in a way that doesn’t allow for the individual using the pump to fidget with it. They can’t override the system and they can’t put in other insulin doses. The system is just there to take care of their diabetes.
They enrolled people using any other system, including other automated insulin delivery systems, into this trial. Participants were randomized to the iLet vs. whatever they’d been on (that was the control group) and were followed for 13 weeks, which is not all that long.
There was a 0.5% reduction in A1c between the two groups. There was also an increase in the time in range, and this improvement in time in range happened almost immediately – within the first day or two of people being on the system. In terms of actual numbers, the adult patients started out with a time in range of 56% and this increased to 69% by the end of the study. The biggest improvement was time in range overnight, as is seen with other automated insulin delivery systems.
There was no reduction in time below a glucose level of 54 mg/dL, and there was an increase in the number of episodes of severe hypoglycemia in the group treated with the iLet system, but the difference between the two groups was not statistically significant.
I think these results are hard to compare with other pivotal trials investigating automated insulin delivery systems. The Tandem pivotal trial was a randomized controlled trial similar to this one, but the Medtronic and Omnipod studies were single-arm trials where patients were compared before and after they used the device.
More than anything, I think what’s important about this system is that it may allow for greater use of automated insulin delivery systems. It may allow primary care providers to use these systems without needing all sorts of support, and patients may be able to use these devices more simply than a device where they have to do carb counting and adjusting in ways that I think tend to be pretty complicated and require higher numeracy and literacy skills.
I couldn’t be happier. I love what they’re doing at Beta Bionics, and I look forward to more results, and in particular, to see if these results improve further when they do a study of insulin and glucagon in their dual-hormone pump system.
Thank you very much. This has been Dr. Anne Peters for Medscape.
Dr. Peters is professor of medicine at the University of Southern California, Los Angeles, and director of the USC clinical diabetes programs. She has published more than 200 articles, reviews, and abstracts, and three books, on diabetes, and has been an investigator for more than 40 research studies. She has spoken internationally at over 400 programs and serves on many committees of several professional organizations. She disclosed ties with Abbott Diabetes Care, AstraZeneca, Becton Dickinson, Boehringer Ingelheim Pharmaceuticals, Dexcom, Eli Lilly, Lexicon Pharmaceuticals, Livongo, MannKind Corporation, Medscape, Merck, Novo Nordisk, Omada Health, OptumHealth, Sanofi, and Zafgen.
A version of this article first appeared on Medscape.com.
This transcript has been edited for clarity.
Today, I’m going to discuss the results of a new automated insulin delivery system that I think can really help many people with type 1 diabetes.
Dr. Steven Russell presented the results at the Advanced Technologies & Treatments for Diabetes meeting. The study focused on the iLet system, which is made by Beta Bionics and has been under development for a while. This was the single-hormone study, so it just looked at their algorithm using insulin alone. Eventually they’re going to study this, looking at the use of insulin plus glucagon together to see if that further improves outcomes.
One of the main reasons I think this study was so cool is because it included over 25% minority individuals who aren’t routinely studied in these insulin device trials. The study also included people who had a wide range of hemoglobin A1c levels; there was no high cut-point here. Over 30% of participants had an A1c greater than 8%. They also studied both children and adults and combined the results together.
Before I talk about the results, let me tell you about the pump. This is a tubed pump that has a sensor that it communicates with – it’s the Dexcom sensor – and it has an algorithm so it does automated insulin delivery. Instead of having to enter all sorts of information into the system, this thing requires that you put in only the patient’s weight. That’s it. From there, the system begins to figure out what the patient needs in terms of automated insulin delivery.
There are several different target settings that can be entered, and they can differ by time of day. There’s basically the time of day that one is eating a meal, so breakfast, lunch, or dinner, and there is the meal size, basically small, medium, and large. The individual enters this in real time so the system knows they’re eating, but other than that, the system just works.
It does this in a way that doesn’t allow for the individual using the pump to fidget with it. They can’t override the system and they can’t put in other insulin doses. The system is just there to take care of their diabetes.
They compared this system with people on any other system, including other automated insulin delivery systems, and put them into this trial. People were randomized to this system vs. whatever they’d been on (that was the control group) and they followed them for 13 weeks, which is not all that long.
There was a 0.5% reduction in A1c between the two groups. There was also an increase in the time in range, and this improvement in time in range happened almost immediately – within the first day or two of people being on the system. In terms of actual numbers, the adult patients started out with a time in range of 56% and this increased to 69% by the end of the study. The biggest improvement was time in range overnight, as is seen with other automated insulin delivery systems.
This transcript has been edited for clarity.
Today, I’m going to discuss the results of a new automated insulin delivery system that I think can really help many people with type 1 diabetes.
Dr. Steven Russell presented the results at the Advanced Technologies & Treatments for Diabetes meeting. The study focused on the iLet system, which is made by Beta Bionics and has been under development for a while. This was the single-hormone study, so it just looked at their algorithm using insulin alone. Eventually they’re going to study this, looking at the use of insulin plus glucagon together to see if that further improves outcomes.
One of the main reasons I think this study was so cool is because it included over 25% minority individuals who aren’t routinely studied in these insulin device trials. The study also included people who had a wide range of hemoglobin A1c levels; there was no high cut-point here. Over 30% of participants had an A1c greater than 8%. They also studied both children and adults and combined the results together.
Before I talk about the results, let me tell you about the pump. This is a tubed pump that has a sensor that it communicates with – it’s the Dexcom sensor – and it has an algorithm so it does automated insulin delivery. Instead of having to enter all sorts of information into the system, this thing requires that you put in only the patient’s weight. That’s it. From there, the system begins to figure out what the patient needs in terms of automated insulin delivery.
There are several settings the user can enter, and they can differ by time of day: the meal being eaten (breakfast, lunch, or dinner) and the meal size (small, medium, or large). The individual enters this in real time so the system knows they’re eating, but other than that, the system just works.
It does this in a way that doesn’t allow the individual using the pump to fiddle with it. They can’t override the system and they can’t put in other insulin doses. The system is just there to take care of their diabetes.
The trial enrolled people on any other regimen, including other automated insulin delivery systems. Participants were randomized to the iLet system vs. whatever they’d been on (that was the control group) and were followed for 13 weeks, which is not all that long.
There was a 0.5% greater reduction in A1c in the iLet group compared with the control group. There was also an increase in time in range, and this improvement happened almost immediately – within the first day or two of people being on the system. In terms of actual numbers, the adult patients started out with a time in range of 56%, and this increased to 69% by the end of the study. The biggest improvement was in time in range overnight, as is seen with other automated insulin delivery systems.
There was no reduction in time below a glucose level of 54 mg/dL, and there was an increase in the number of episodes of severe hypoglycemia in the group treated with the iLet system, but the difference between the two groups was not statistically significant.
I think these results are hard to compare with other pivotal trials investigating automated insulin delivery systems. The Tandem pivotal trial was a randomized controlled trial similar to this one, but the Medtronic and Omnipod studies were single-arm trials where patients were compared before and after they used the device.
More than anything, I think what’s important about this system is that it may allow for greater use of automated insulin delivery systems. It may allow primary care providers to use these systems without needing all sorts of support, and patients may be able to use these devices more simply than a device where they have to do carb counting and adjusting in ways that I think tend to be pretty complicated and require higher numeracy and literacy skills.
I couldn’t be happier. I love what they’re doing at Beta Bionics, and I look forward to more results, and in particular, to see if these results improve further when they do a study of insulin and glucagon in their dual-hormone pump system.
Thank you very much. This has been Dr Anne Peters for Medscape.
Dr. Peters is professor of medicine at the University of Southern California, Los Angeles, and director of the USC clinical diabetes programs. She has published more than 200 articles, reviews, and abstracts, and three books, on diabetes, and has been an investigator for more than 40 research studies. She has spoken internationally at over 400 programs and serves on many committees of several professional organizations. She disclosed ties with Abbott Diabetes Care, AstraZeneca, Becton Dickinson, Boehringer Ingelheim Pharmaceuticals, Dexcom, Eli Lilly, Lexicon Pharmaceuticals, Livongo, MannKind Corporation, Medscape, Merck, Novo Nordisk, Omada Health, OptumHealth, Sanofi, and Zafgen.
A version of this article first appeared on Medscape.com.
Bone density loss in lean male runners parallels similar issue in women
Similar to a phenomenon already well documented in women, inadequate nutrition appears to be linked to hormonal abnormalities and potentially preventable tibial cortical bone density loss in athletic men, according to results of a small, prospective study.
Based on these findings, “we suspect that a subset of male runners might not be fueling their bodies with enough nutrition and calories for their physical activity,” reported Melanie S. Haines, MD, at the annual meeting of the Endocrine Society.
This is not the first study to suggest male athletes are at risk of a condition equivalent to what has commonly been called the female athlete triad, but it adds to the objective evidence that the phenomenon is real, and it points to insufficient energy availability as the likely cause.
In women, the triad is described as a lack of adequate stored energy, irregular menses, and bone density loss. In men, menstrual cycles are not relevant, of course, but this study like others suggests a link between the failure to maintain adequate stores of energy, disturbances in hormone function, and decreased bone density in both men and women, Dr. Haines explained.
RED-S vs. male or female athlete triad
“There is now a move away from the term female athlete triad or male athlete triad,” Dr. Haines reported. Rather the factors of failing to maintain adequate energy for metabolic demands, hormonal disturbances, and bone density loss appear to be relevant to both sexes, according to Dr. Haines, an endocrinologist at Massachusetts General Hospital and assistant professor of medicine at Harvard Medical School, both in Boston. She said several groups, including the International Olympic Committee (IOC), have transitioned to the term RED-S to apply to both sexes.
“RED-S is an acronym for relative energy deficiency in sport, and it appears to be gaining traction,” Dr. Haines said in an interview.
According to her study and others, excessive leanness resulting from failure to supply sufficient energy for physiological needs “negatively affects hormones and bone,” Dr. Haines explained. In men and women, endocrine disturbances are triggered when insufficient calories lead to inadequate macro- and micronutrients.
In this study, 31 men aged 16-30 years were evaluated. Fifteen were in the athlete group, defined by running at least 30 miles per week for at least the previous 6 months. There were 16 control subjects; all exercised less than 2 hours per week and did not participate in team sports, but they were not permitted in the study if their body mass index exceeded 27.5 kg/m2.
Athletes vs. otherwise healthy controls
Conditions that affect bone health were exclusion criteria in both groups, and neither group was permitted to take medications affecting bone health other than dietary calcium or vitamin D supplements for 2 months prior to the study.
Tibial cortical porosity was significantly greater – signaling deterioration in microarchitecture – in athletes, compared with control subjects (P = .003), according to quantitative computed tomography measurements. There was also significantly lower tibial cortical bone mineral density (P = .008) among athletes relative to controls.
Conversely, tibial trabecular measures of bone density and architecture were better among athletes than controls, but this was expected and did not contradict the hypothesis of the study.
“Trabecular bone refers to the inner part of the bone, which increases with weight-bearing exercise, but cortical bone is the outer shell, and the source of stress fractures,” Dr. Haines explained.
The median age of both the athletes and the controls was 24 years, and baseline measurements were similar. Body mass index, fat mass, estradiol, and leptin were all numerically lower in the athletes than in the controls, but none of the differences was statistically significant, although there was a trend for leptin (P = .085).
Hormones correlated with tibial failure load
When these characteristics were evaluated in the context of mean tibial failure load, a metric related to strength, there was a strongly significant positive association with lean body mass (R = 0.85; P < .001) and estradiol level (R = 0.66; P = .007). The relationship with leptin also reached significance (R = 0.59; P = .046).
Unexpectedly, there was no relationship between testosterone and tibial failure load. The reason is unclear, but Dr. Haines’s interpretation is that the relationship between specific hormonal disturbances and bone density loss “might not be as simple” as once hypothesized.
The next step is a longitudinal evaluation of the same group of athletes to follow changes in the relationship between these variables over time, according to Dr. Haines.
Eventually, with evidence that there is a causal relationship between nutrition, hormonal changes, and bone loss, the research in this area will focus on better detection of risk and prophylactic strategies.
“Intervention trials to show that we can prevent stress fractures will be difficult to perform,” Dr. Haines acknowledged, but she said that preventing adverse changes in bone at relatively young ages could have implications for long-term bone health, including protection from osteoporosis later in life.
The research presented by Dr. Haines is consistent with an area of research that is several decades old, at least in females, according to Siobhan M. Statuta, MD, a sports medicine primary care specialist at the University of Virginia, Charlottesville. The evidence that the same phenomenon occurs in men is more recent, but she said it is now well accepted that there is a parallel hormonal issue in men and women.
“It is not a question of not eating enough. Often, athletes continue to consume the same diet, but their activity increases,” Dr. Statuta explained. “The problem is that they are not supplying enough of the calories they need to sustain the energy they are expending. You might say they are not fueling their engines appropriately.”
In 2014, the International Olympic Committee published a consensus statement on RED-S. It described a condition in which a state of energy deficiency leads to numerous complications in athletes, not just osteoporosis. Rather, complications across a host of physiological systems, ranging from gastrointestinal complaints to cardiovascular events, were described.
RED-S addresses health beyond bones
“The RED-S theory is better described as a spoke-and-wheel concept rather than a triad. While inadequate energy availability is important to both, RED-S places this at the center of the wheel with spokes leading to all the possible complications rather than as a first event in a limited triad,” Dr. Statuta said in an interview.
However, she noted that the term RED-S is not yet appropriate to replace that of the male and female athlete triad.
“More research is required to hash out the relationship of a body in a state of energy deficiency and how it affects the entire body, which is the principle of RED-S,” Dr. Statuta said. “There likely are scientific effects, and we are currently investigating these relationships more.”
“These are really quite similar entities but have different foci,” she added. Based on data collected over several decades, “the triad narrows in on two body systems affected by low energy – the reproductive system and bones. RED-S incorporates these same systems yet adds on many more organ systems.”
The original group of researchers has remained loyal to the concept of the triad, in which inadequate availability of energy is followed by hormonal irregularities and osteoporosis. This group, the Female and Male Athlete Triad Coalition, has issued publications on this topic several times; its consensus statements were updated last year.
“The premise is that the triad leading to bone loss is shared by both men and women, even if the clinical manifestations differ,” said Dr. Statuta. The most notable difference is that men do not experience menstrual irregularities, but Dr. Statuta suggested that the clinical consequences are not necessarily any less.
“Males do not have menstrual cycles as an outward marker of an endocrine disturbance, so it is harder to recognize clinically, but I think there is agreement that not having enough energy available is the trigger of endocrine changes and then bone loss is relevant to both sexes,” she said. She said this is supported by a growing body of evidence, including the data presented by Dr. Haines at the Endocrine Society meeting.
Dr. Haines and Dr. Statuta report no potential conflicts of interest.
FROM ENDO 2022
Biden moves to limit nicotine levels in cigarettes
The Department of Health and Human Services posted a notice that details plans for a new rule to create a maximum allowed amount of nicotine in certain tobacco products. The Food and Drug Administration would take the action, the notice said, “to reduce addictiveness to certain tobacco products, thus giving addicted users a greater ability to quit.” The product standard would also help keep nonsmokers who are interested in trying tobacco, mainly youth, from starting to smoke and becoming regular smokers.
“Lowering nicotine levels to minimally addictive or non-addictive levels would decrease the likelihood that future generations of young people become addicted to cigarettes and help more currently addicted smokers to quit,” FDA Commissioner Robert Califf, MD, said in a statement.
The FDA, which regulates cigarettes, issues a proposed rule when changes are under consideration. That would be followed by a period for public comments before a final rule could be issued.
The proposed rule was first reported by The Washington Post.
The FDA in 2018 published a study in the New England Journal of Medicine that estimated that a potential limit on nicotine in cigarettes could, by the year 2100, prevent more than 33 million people from becoming regular smokers, and prevent the deaths of more than 8 million people from tobacco-related illnesses.
The action to reduce nicotine levels would fit in with President Joe Biden’s goal of reducing cancer death rates by half over 25 years. Each year, according to the American Cancer Society, about 480,000 deaths (about 1 in 5) are related to smoking. Currently, about 34 million American adults still smoke cigarettes.
Matthew Myers, president of the Campaign for Tobacco-Free Kids, called the proposed rule a “truly game-changing proposal.”
“There is no other single action our country can take that would prevent more young people from becoming addicted to tobacco or have a greater impact on reducing deaths from cancer, cardiovascular disease and respiratory disease,” Mr. Myers said in a statement.
However, he said, “these gains will only be realized if the administration and the FDA demonstrate a full-throated commitment to finalizing and implementing this proposal.”
The FDA proposed the nicotine reduction strategy in talks with the White House and the Department of Health and Human Services early in 2021, according to the Post.
Earlier this year, the FDA issued a proposed rule to ban menthol flavoring in cigarettes. The agency is accepting public comments through July 5.
The action of reducing nicotine levels would likely take years to complete, Mitch Zeller, JD, recently retired director of the FDA Center for Tobacco Products, told the Post.
In 2018, the FDA issued a proposed rule to set a standard for maximum nicotine levels in cigarettes.
Advocates say the action of slashing nicotine, the active – and addictive – ingredient in cigarettes, would save millions of lives for generations to come. Opponents liken it to the prohibition of alcohol in the 1920s and predict the action will fail.
Others say that if limits are put on nicotine levels, adults should have greater access to noncombustible alternatives.
A version of this article first appeared on WebMD.com.
COVID-19 pandemic stress affected ovulation, not menstruation
ATLANTA – Disturbances in ovulation that didn’t produce any actual changes in the menstrual cycle of women were extremely common during the first year of the COVID-19 pandemic and were linked to emotional stress, according to the findings of an “experiment of nature” that allowed for comparison with women a decade earlier.
Findings from two studies of reproductive-age women, one conducted in 2006-2008 and the other in 2020-2021, were presented by Jerilynn C. Prior, MD, at the annual meeting of the Endocrine Society.
The comparison of the two time periods yielded several novel findings. “I was taught in medical school that when women don’t eat enough they lose their period. But what we now understand is there’s a graded response to various stressors, acting through the hypothalamus in a common pathway. There is a gradation of disturbances, some of which are subclinical or not obvious,” said Dr. Prior, professor of endocrinology and metabolism at the University of British Columbia, Vancouver.
Moreover, women’s menstrual cycle lengths didn’t differ across the two time periods, despite a dramatic 63% decrement in normal ovulatory function related to increased depression, anxiety, and outside stresses that the women reported in diaries.
“Assuming that regular cycles need normal ovulation is something we should just get out of our minds. It changes our concept about what’s normal if we only know about the cycle length,” she observed.
It will be critical going forward to see whether the ovulatory disturbances have resolved as the pandemic has shifted “because there’s strong evidence that ovulatory disturbances, even with normal cycle length, are related to bone loss and some evidence it’s related to early heart attacks, breast and endometrial cancers,” Dr. Prior said during a press conference.
Asked to comment, session moderator Genevieve Neal-Perry, MD, PhD, told this news organization: “I think what we can take away is that stress itself is a modifier of the way the brain and the gonads communicate with each other, and that then has an impact on ovulatory function.”
Dr. Neal-Perry noted that the association of stress and ovulatory disruption has been reported in various ways previously, but “clearly it doesn’t affect everyone. What we don’t know is who is most susceptible. There have been some studies showing a genetic predisposition and a genetic anomaly that actually makes them more susceptible to the impact of stress on the reproductive system.”
But the lack of data on weight change in the study cohorts is a limitation. “To me one of the more important questions was what was going on with weight. Just looking at a static number doesn’t tell you whether there were changes. We know that weight gain or weight loss can stress the reproductive axis,” noted Dr. Neal-Perry of the department of obstetrics and gynecology at the University of North Carolina at Chapel Hill.
‘Experiment of nature’ revealed invisible effect of pandemic stress
The women in both cohorts of the Menstruation Ovulation Study (MOS) were healthy volunteers aged 19-35 years recruited from the metropolitan Vancouver region. All were menstruating monthly and none were taking hormonal birth control. Recruitment for the second cohort had begun just prior to the March 2020 COVID-19 pandemic lockdown.
Interviewer-administered questionnaires (CaMos) covering demographics, socioeconomic status, and reproductive history, and daily diaries kept by the women (menstrual cycle diary) were identical for both cohorts.
Assessments of ovulation differed for the two studies but were cross-validated. For the earlier time period, ovulation was assessed by a threefold increase in urinary progesterone (PdG) from the follicular to the luteal phase. For the pandemic-era study, the validated quantitative basal temperature (QBT) method was used.
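The threefold-increase criterion amounts to a simple ratio test. A minimal sketch of that logic (the function name and the example values are hypothetical, not from the study protocol):

```python
def likely_ovulatory(follicular_pdg: float, luteal_pdg: float) -> bool:
    """Flag a cycle as likely ovulatory if luteal-phase urinary
    progesterone (PdG) rises to at least 3x the follicular-phase level,
    mirroring the threefold-increase criterion described above."""
    if follicular_pdg <= 0:
        raise ValueError("follicular PdG must be positive")
    return luteal_pdg / follicular_pdg >= 3.0

# A luteal reading 4.5x the follicular baseline clears the threshold;
# a 2.5x rise does not.
print(likely_ovulatory(2.0, 9.0))  # True
print(likely_ovulatory(2.0, 5.0))  # False
```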
There were 301 women in the earlier cohort and 125 during the pandemic. Both cohorts had an average age of about 29 years and a body mass index of about 24.3 kg/m2 (within the normal range). The pandemic cohort was more racially/ethnically diverse than the earlier one and more in line with recent census data.
More of the women were nulliparous during the pandemic than earlier (92.7% vs. 80.4%; P = .002).
The distribution of menstrual cycle lengths didn’t differ, with both cohorts averaging about 30 days (P = .893). However, while 90% of the women in the earlier cohort ovulated normally, only 37% did during the pandemic, a highly significant difference (P < .0001).
Thus, during the pandemic, 63% of women had “silent ovulatory disturbances,” either with short luteal phases after ovulation or no ovulation, compared with just 10% in the earlier cohort, “which is remarkable, unbelievable actually,” Dr. Prior remarked.
The difference wasn’t explained by any of the demographic information collected either, including socioeconomic status, lifestyle, or reproductive history variables.
And it wasn’t because of COVID-19 vaccination, as the vaccine wasn’t available when most of the women were recruited, and of the 79 who were recruited during vaccine availability, only two received a COVID-19 vaccine during the study (and both had normal ovulation).
Employment changes, caring responsibilities, and worry likely causes
The information from the diaries was more revealing. Several diary components were far more common during the pandemic, including negative mood (feeling depressed or anxious, sleep problems, and outside stresses), self-worth, interest in sex, energy level, and appetite. All were significantly different between the two cohorts (P < .001) and between those with and without ovulatory disturbances.
“So menstrual cycle lengths and long cycles didn’t differ, but there was a much higher prevalence of silent or subclinical ovulatory disturbances, and these were related to the increased stresses that women recorded in their diaries. This means that the estrogen levels were pretty close to normal but the progesterone levels were remarkably decreased,” Dr. Prior said.
Interestingly, reported menstrual cramps were also significantly more common during the pandemic and associated with ovulatory disruption.
“That is a new observation because previously we’ve always thought that you needed to ovulate in order to even have cramps,” she commented.
Asked whether COVID-19 itself might have played a role, Dr. Prior said no woman in the study tested positive for the virus or had long COVID.
“As far as I’m aware, it was the changes in employment … and caring for elders and worry about illness in somebody you loved that was related,” she said.
Asked what she thinks the result would be if the study were conducted now, she said: “I don’t know. We’re still in a stressful time with inflation and not complete recovery, so probably the issue is still very present.”
Dr. Prior and Dr. Neal-Perry have reported no relevant financial relationships.
A version of this article first appeared on Medscape.com.
AT ENDO 2022
Osteoporosis risk rises with air pollution levels
COPENHAGEN – Chronic exposure to high levels of particulate matter (PM) air pollution measuring 2.5 mcm or less (PM2.5) and 10 mcm or less (PM10) in diameter is associated with a significantly higher likelihood of having osteoporosis, according to research presented at the annual European Congress of Rheumatology.
The results of the 7-year longitudinal study carried out across Italy from 2013 to 2019 dovetail with other recent published accounts from the same team of Italian researchers, led by Giovanni Adami, MD, of the rheumatology unit at the University of Verona (Italy). In addition to the current report presented at EULAR 2022, Dr. Adami and associates have reported an increased risk of flares of both rheumatoid arthritis and psoriasis following periods of elevated pollution, as well as an overall elevated risk for autoimmune diseases with higher concentrations of PM2.5 and PM10.
The pathogenesis of osteoporosis is thought to involve both genetic and environmental input, such as smoking, which is itself a form of air pollution, Dr. Adami said. The biological rationale for why air pollution might contribute to risk for osteoporosis comes from studies showing that exposure to indoor air pollution from biomass combustion raises serum levels of RANKL (receptor activator of nuclear factor-kappa B ligand) but lowers serum osteoprotegerin – suggesting an increased risk of bone resorption – and that toxic metals such as lead, cadmium, mercury, and aluminum accumulate in the skeleton and negatively affect bone health.
In their study, Dr. Adami and colleagues found that, overall, the average exposure during the period 2013-2019 across Italy was 16.0 mcg/m3 for PM2.5 and 25.0 mcg/m3 for PM10.
“I can tell you that [25.0 mcg/m3 for PM10] is a very high exposure. It’s not very good for your health,” Dr. Adami said.
Data on more than 59,000 Italian women
Dr. Adami and colleagues used clinical characteristics and densitometric data from Italy’s osteoporosis fracture risk and osteoporosis screening reimbursement tool known as DeFRAcalc79, which has amassed variables from more than 59,000 women across the country. They used long-term average PM concentrations across Italy during 2013-2019 that were obtained from the Italian Institute for Environmental Protection and Research’s 617 air quality stations in 110 Italian provinces. The researchers linked individuals to a PM exposure value determined from the average concentration of urban, rural, and near-traffic stations in each person’s province of residence.
Among the 59,950 women across Italy who were at high risk for fracture, 64.5% had bone mineral density in the osteoporotic range. At PM10 concentrations of 30 mcg/m3 or greater, there was a significantly higher likelihood of osteoporosis at both the femoral neck (odds ratio, 1.15) and lumbar spine (OR, 1.17).
The likelihood of osteoporosis was slightly greater with PM2.5 at concentrations of 25 mcg/m3 or more at the femoral neck (OR, 1.22) and lumbar spine (OR, 1.18). These comparisons were adjusted for age, body mass index (BMI), presence of prevalent fragility fractures, family history of osteoporosis, menopause, glucocorticoid use, comorbidities, and for residency in northern, central, or southern Italy.
Both thresholds of PM10 > 30 mcg/m3 and PM2.5 > 25 mcg/m3 “are considered safe … by the World Health Organization,” Dr. Adami pointed out.
“If you live in a place where the chronic exposure is less than 30 mcg/m3, you probably have slightly lower risk of osteoporosis as compared to those who live in a highly industrialized, polluted zone,” he explained.
“The cortical bone – femoral neck – seemed to be more susceptible, compared to trabecular bone, which is the lumbar spine. We have no idea why this is true, but we might speculate that somehow chronic inflammation like the [kind] seen in rheumatoid arthritis might be responsible for cortical bone impairment and not trabecular bone impairment,” Dr. Adami said.
One audience member, Kenneth Poole, BM, PhD, senior lecturer and honorary consultant in Metabolic Bone Disease and Rheumatology at the University of Cambridge (England), asked whether it was possible to account for the possibility of confounding caused by areas with dense housing in places where the particulate matter would be highest, and where residents may be less active and use stairs less often.
Dr. Adami noted that confounding is indeed a possibility, but he said Italy is unique in that its most polluted area – the Po River valley – is also its most wealthy area and in general has less crowded living situations with a healthier population, which could have attenuated, rather than reinforced, the results.
Does air pollution have an immunologic effect?
In interviews with this news organization, session comoderators Filipe Araújo, MD, and Irene Bultink, MD, PhD, said that the growth in evidence for the impact of air pollution on risk for, and severity of, various diseases suggests air pollution might have an immunologic effect.
“I think it’s very important to point this out. I also think it’s very hard to rule out confounding, because when you’re living in a city with crowded housing you may not walk or ride your bike but instead go by car or metro, and [the lifestyle is different],” said Dr. Bultink of Amsterdam University Medical Centers.
“It stresses that these diseases [that are associated with air pollution] although they are different in their pathophysiology, it points toward the systemic nature of rheumatic diseases, including osteoporosis,” said Dr. Araújo of Hospital Cuf Cascais (Portugal) and Hospital Ortopédico de Sant’Ana, Parede, Portugal.
The study was independently supported.Dr. Adami disclosed being a shareholder of Galapagos and Theramex.
A version of this article first appeared on Medscape.com.
COPENHAGEN – Chronic exposure to high levels of particulate matter (PM) air pollution – particles 2.5 mcm (PM2.5) or 10 mcm (PM10) in diameter or smaller – is associated with a significantly higher likelihood of having osteoporosis, according to research presented at the annual European Congress of Rheumatology.
The results of the 7-year longitudinal study carried out across Italy from 2013 to 2019 dovetail with other recent published accounts from the same team of Italian researchers, led by Giovanni Adami, MD, of the rheumatology unit at the University of Verona (Italy). In addition to the current report presented at EULAR 2022, Dr. Adami and associates have reported an increased risk of flares of both rheumatoid arthritis and psoriasis following periods of elevated pollution, as well as an overall elevated risk for autoimmune diseases with higher concentrations of PM2.5 and PM10.
The pathogenesis of osteoporosis is thought to involve both genetic and environmental inputs, such as smoking, which is itself a form of air pollution, Dr. Adami said. The biological rationale for why air pollution might contribute to risk for osteoporosis comes from studies showing that exposure to indoor air pollution from biomass combustion raises serum levels of RANKL (receptor activator of nuclear factor-kappa B ligand) but lowers serum osteoprotegerin – suggesting an increased risk of bone resorption – and that toxic metals such as lead, cadmium, mercury, and aluminum accumulate in the skeleton and negatively affect bone health.
In their study, Dr. Adami and colleagues found that, overall, the average exposure during the period 2013-2019 across Italy was 16.0 mcg/m3 for PM2.5 and 25.0 mcg/m3 for PM10.
“I can tell you that [25.0 mcg/m3 for PM10] is a very high exposure. It’s not very good for your health,” Dr. Adami said.
Data on more than 59,000 Italian women
Dr. Adami and colleagues used clinical characteristics and densitometric data from Italy’s osteoporosis fracture risk and osteoporosis screening reimbursement tool known as DeFRAcalc79, which has amassed variables from more than 59,000 women across the country. They used long-term average PM concentrations across Italy during 2013-2019 that were obtained from the Italian Institute for Environmental Protection and Research’s 617 air quality stations in 110 Italian provinces. The researchers linked individuals to a PM exposure value determined from the average concentration of urban, rural, and near-traffic stations in each person’s province of residence.
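The exposure-assignment step described above – averaging urban, rural, and near-traffic station readings within each province of residence – can be sketched in a few lines. The station readings and province structure below are hypothetical, not the institute's actual data:

```python
from statistics import mean

# Hypothetical PM10 readings (mcg/m3) from one province's monitoring
# stations, grouped by station type (NOT actual ISPRA data).
province_stations = {
    "urban": [28.1, 31.4],
    "rural": [19.7],
    "near_traffic": [35.2],
}

# Pool all station readings in the province and average them; every
# resident of the province is assigned this single exposure value.
readings = [value for station_values in province_stations.values()
            for value in station_values]
exposure = mean(readings)
```

A design note: assigning one province-wide average to every resident is a coarse proxy for personal exposure, which is one reason individual-level confounding (discussed below) is hard to rule out.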
For 59,950 women across Italy who were at high risk for fracture, the researchers found that 64.5% had bone mineral density in the osteoporotic range. At PM10 concentrations of 30 mcg/m3 or greater, there was a significantly higher likelihood of osteoporosis at both the femoral neck (odds ratio, 1.15) and lumbar spine (OR, 1.17).
The likelihood of osteoporosis was slightly greater with PM2.5 at concentrations of 25 mcg/m3 or more at the femoral neck (OR, 1.22) and lumbar spine (OR, 1.18). These comparisons were adjusted for age, body mass index (BMI), presence of prevalent fragility fractures, family history of osteoporosis, menopause, glucocorticoid use, comorbidities, and for residency in northern, central, or southern Italy.
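For readers unfamiliar with the statistic, here is a minimal sketch of how an unadjusted odds ratio is formed from a 2x2 table of exposure versus outcome. The counts are hypothetical; the study's reported ORs came from regression models adjusted for the covariates listed above:

```python
def odds_ratio(exposed_cases, exposed_controls,
               unexposed_cases, unexposed_controls):
    """Unadjusted odds ratio from a 2x2 exposure-by-outcome table."""
    odds_exposed = exposed_cases / exposed_controls
    odds_unexposed = unexposed_cases / unexposed_controls
    return odds_exposed / odds_unexposed

# Hypothetical counts: women in high- vs. lower-PM10 provinces, with and
# without femoral neck osteoporosis (illustrative only, NOT study data).
or_femoral_neck = odds_ratio(690, 310, 660, 340)
```

An OR of 1.15, as reported for the femoral neck, means the odds of osteoporosis were about 15% higher in the high-exposure group after adjustment.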
Both thresholds of PM10 > 30 mcg/m3 and PM2.5 > 25 mcg/m3 “are considered safe … by the World Health Organization,” Dr. Adami pointed out.
“If you live in a place where the chronic exposure is less than 30 mcg/m3, you probably have slightly lower risk of osteoporosis as compared to those who live in a highly industrialized, polluted zone,” he explained.
“The cortical bone – femoral neck – seemed to be more susceptible, compared to trabecular bone, which is the lumbar spine. We have no idea why this is true, but we might speculate that somehow chronic inflammation like the [kind] seen in rheumatoid arthritis might be responsible for cortical bone impairment and not trabecular bone impairment,” Dr. Adami said.
One audience member, Kenneth Poole, BM, PhD, senior lecturer and honorary consultant in Metabolic Bone Disease and Rheumatology at the University of Cambridge (England), asked whether it was possible to account for confounding by densely housed areas, where particulate matter would be highest and where residents may be less active and use stairs less often.
Dr. Adami noted that confounding is indeed a possibility, but he said Italy is unique in that its most polluted area – the Po River valley – is also its most wealthy area and in general has less crowded living situations with a healthier population, which could have attenuated, rather than reinforced, the results.
Does air pollution have an immunologic effect?
In interviews with this news organization, session comoderators Filipe Araújo, MD, and Irene Bultink, MD, PhD, said that the growth in evidence for the impact of air pollution on risk for, and severity of, various diseases suggests air pollution might have an immunologic effect.
“I think it’s very important to point this out. I also think it’s very hard to rule out confounding, because when you’re living in a city with crowded housing you may not walk or ride your bike but instead go by car or metro, and [the lifestyle is different],” said Dr. Bultink of Amsterdam University Medical Centers.
“It stresses that these diseases [associated with air pollution], although they are different in their pathophysiology, [point] toward the systemic nature of rheumatic diseases, including osteoporosis,” said Dr. Araújo of Hospital Cuf Cascais (Portugal) and Hospital Ortopédico de Sant’Ana in Parede, Portugal.
The study was independently supported. Dr. Adami disclosed being a shareholder of Galapagos and Theramex.
A version of this article first appeared on Medscape.com.
AT THE EULAR 2022 CONGRESS
Could a type 2 diabetes drug tackle kidney stones?
Patients with type 2 diabetes who received the SGLT2 inhibitor empagliflozin developed significantly fewer kidney stones than patients who received placebo during a median 1.5 years of treatment.
These findings are from an analysis of pooled data from phase 1-4 clinical trials of empagliflozin for blood glucose control in 15,081 patients with type 2 diabetes.
Priyadarshini Balasubramanian, MD, presented the study as a poster at the annual meeting of the Endocrine Society; the study also was published online in the Journal of Clinical Endocrinology & Metabolism.
The researchers acknowledge this was a retrospective, post-hoc analysis and that urolithiasis – a stone in the urinary tract, which includes nephrolithiasis, a kidney stone – was an adverse event, not a primary or secondary outcome.
Also, the stone composition, which might help explain how the drug may affect stone formation, is unknown.
Therefore, “dedicated randomized prospective clinical trials are needed to confirm these initial observations in patients both with and without type 2 diabetes,” said Dr. Balasubramanian, a clinical fellow in the section of endocrinology & metabolism, department of internal medicine at Yale University, New Haven, Conn.
However, “if this association is proven, empagliflozin may be used to decrease the risk of kidney stones at least in those with type 2 diabetes, but maybe also in those without diabetes,” Dr. Balasubramanian said in an interview.
Further trials are also needed to determine if this is a class effect, which is likely, she speculated, and to unravel the potential mechanism.
This is important because of the prevalence of kidney stones, which affect up to 15% of the general population and 15%-20% of patients with diabetes, she explained.
‘Provocative’ earlier findings
The current study was prompted by a recent observational study by Kasper B. Kristensen, MD, PhD, and colleagues.
Because SGLT2 inhibitors increase urinary glucose excretion through reduced renal reabsorption of glucose leading to osmotic diuresis and increased urinary flow, they hypothesized that these therapies “may reduce the risk of upper urinary tract stones (nephrolithiasis) by reducing the concentration of lithogenic substances in urine.”
Using data from Danish health registries, they matched 12,325 individuals newly prescribed an SGLT2 inhibitor 1:1 with patients newly prescribed a glucagonlike peptide-1 (GLP-1) agonist, another new class of drugs for treating type 2 diabetes.
They found a hazard ratio of 0.51 (95% confidence interval, 0.37-0.71) for incident nephrolithiasis and a hazard ratio of 0.68 (95% CI, 0.48-0.97) for recurrent nephrolithiasis for patients taking SGLT2 inhibitors versus GLP-1 agonists.
These findings are “striking,” according to Dr. Balasubramanian and colleagues. However, “these data, while provocative, were entirely retrospective and therefore possibly prone to bias,” they add.
Pooled data from 20 trials
The current study analyzed data from 20 randomized controlled trials of glycemic control in type 2 diabetes, in which 10,177 patients had received empagliflozin 10 mg or 25 mg and 4,904 patients had received placebo.
Most patients (46.5%) had participated in the EMPA-REG OUTCOMES trial, which also had the longest follow-up (2.6 years).
The researchers identified patients with a new stone in the urinary tract (including the kidney, ureter, and urethra). Patients had received either the study drug for a median of 543 days or placebo for a median of 549 days.
During treatment, 104 of 10,177 patients in the pooled empagliflozin groups and 79 of 4,904 patients in the pooled placebo groups developed a stone in the urinary tract.
This was equivalent to 0.63 new urinary-tract stones per 100 patient-years in the pooled empagliflozin groups versus 1.01 new urinary-tract stones per 100 patient-years in the pooled placebo groups.
The incidence rate ratio was 0.64 (95% CI, 0.48-0.86), in favor of empagliflozin.
When the analysis was restricted to new kidney stones, the results were similar: 75 of 10,177 patients in the pooled empagliflozin groups and 57 of 4,904 patients in the pooled placebo groups developed a kidney stone.
This was equivalent to 0.45 new kidney stones per 100 patient-years in the pooled empagliflozin groups versus 0.72 new kidney stones per 100 patient-years in the pooled placebo groups.
The IRR was 0.65 (95% CI, 0.46-0.92), in favor of empagliflozin.
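The rate arithmetic behind these figures can be sketched as follows. The patient-year denominators here are back-derived from the reported rates rather than taken from the raw trial data, so this only illustrates how incidence rates and their ratio are computed:

```python
def rate_per_100_py(events, patient_years):
    """Incidence rate expressed per 100 patient-years of follow-up."""
    return 100 * events / patient_years

# Denominators back-calculated from the reported rates (approximations,
# NOT the raw trial person-time).
empa_py = 104 / 0.63 * 100   # ~16,500 patient-years, pooled empagliflozin arms
plac_py = 79 / 1.01 * 100    # ~7,800 patient-years, pooled placebo arms

empa_rate = rate_per_100_py(104, empa_py)  # ~0.63 per 100 patient-years
plac_rate = rate_per_100_py(79, plac_py)   # ~1.01 per 100 patient-years

# Ratio of rounded rates gives ~0.62; the published IRR of 0.64 was
# computed from the unrounded person-time.
irr = empa_rate / plac_rate
```

Note that person-time, not simply patient counts, forms the denominator: follow-up durations differed across the 20 pooled trials, so rates per 100 patient-years make the arms comparable.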
Upcoming small RCT in adults without diabetes
Invited to comment on the new study, Dr. Kristensen said: “The reduced risk of SGLT2 inhibitors towards nephrolithiasis is now reported in at least two studies with different methodology, different populations, and different exposure and outcome definitions.”
“I agree that randomized clinical trials designed specifically to confirm these findings appear warranted,” added Dr. Kristensen, from the Institute of Public Health, Clinical Pharmacology, Pharmacy, and Environmental Medicine, University of Southern Denmark in Odense.
There is a need for studies in patients with and without diabetes, he added, especially ones that focus on prevention of nephrolithiasis in patients with kidney stone disease.
A new trial should shed further light on this.
Results are expected by the end of 2022 for SWEETSTONE (Impact of the SGLT2 Inhibitor Empagliflozin on Urinary Supersaturations in Kidney Stone Formers), a randomized, double-blind crossover exploratory study in 46 patients without diabetes.
This should provide preliminary data to “establish the relevance for larger trials assessing the prophylactic potential of empagliflozin in kidney stone disease,” according to an article on the trial protocol recently published in BMJ.
The trials included in the pooled dataset were funded by Boehringer Ingelheim or the Boehringer Ingelheim and Eli Lilly Diabetes Alliance. Dr. Balasubramanian has reported no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM ENDO 2022
What’s the best time of day to exercise? It depends on your goals
For most of us, the “best” time of day to work out is simple: When we can.
Maybe that’s before or after work. Or when the gym offers free daycare. Or when our favorite instructor teaches our favorite class.
That’s why we call it a “routine.” And if the results are the same, it’s hard to imagine changing it up.
But what if the results aren’t the same?
They may not be, according to a new study from a research team at Skidmore College in Saratoga Springs, N.Y.
Women who worked out in the morning lost more fat, while those who trained in the evening gained more upper-body strength and power. As for men, the performance improvements were similar no matter when they exercised. But those who did so in the evening had a significant drop in blood pressure, among other benefits.
The study is part of a growing body of research showing different results for different times of day among different populations. As it turns out, when you exercise can ultimately have a big effect, not just on strength and fat loss, but also on heart health, mood, and quality of sleep.
An accidental discovery
The original goal of the Skidmore study was to test a unique fitness program with a group of healthy, fit, and extremely active adults in early middle age.
The program includes four workouts a week, each with a different focus: strength, steady-pace endurance, high-intensity intervals, and flexibility (traditional stretching combined with yoga and Pilates exercises).
But because the group was so large – 27 women and 20 men completed the 3-month program – they had to split them into morning and evening workout groups.
It wasn’t until researchers looked at the results that they saw the differences between morning and evening exercise, said lead author Paul Arciero, PhD.
Dr. Arciero stressed that participants in every group got leaner and stronger. But the women who worked out in the morning got much bigger reductions in body fat and body-fat percentage than the evening group. Meanwhile, women in the evening group got much bigger gains in upper-body strength, power, and muscular endurance than their morning counterparts.
Among the men, the evening group had significantly larger improvements in blood pressure, cholesterol levels, and the percentage of fat they burned for energy, along with a bigger drop in feelings of fatigue.
Strategic timing for powerful results
Some of these findings are consistent with previous research. For example, a study published in 2021 showed that the ability to exert high effort and express strength and power peaks in the late afternoon, about the same time that your core body temperature is at its highest point.
On the other hand, you’ll probably perform better in the morning when the activity requires a lot of skill and coordination or depends on strategic decision-making.
The findings apply to both men and women.
Performance aside, exercise timing might offer strong health benefits for men with type 2 diabetes, or at high risk for it.
A study showed that men who exercised between 3 p.m. and 6 p.m. saw dramatic improvements in blood sugar management and insulin sensitivity, compared to a group that worked out between 8 a.m. and 10 a.m.
They also lost more fat during the 12-week program, even though they were doing the exact same workouts.
Train consistently, sleep well
When you exercise can affect your sleep quality in many ways, said neuroscientist Jennifer Heisz, PhD, of McMaster University, Hamilton, Ont.
First, she said, “exercise helps you fall asleep faster and sleep deeper at night.” (The only exception is if you exercise so intensely or so close to bedtime that your heart rate is still elevated.)
Second, “exercising at a consistent time every day helps regulate the body’s circadian rhythms.” It doesn’t matter if the exercise is in the morning, evening, or anywhere in between. As long as it’s predictable, it will help you fall asleep and wake up at the same times.
Outdoor exercise is even better, she said. The sun is the most powerful regulator of the circadian clock and works in tandem with physical activity.
Third, exercising at specific times can help you overcome jet lag or adjust to an earlier or later shift at work.
“Exercising at 7 a.m. or between 1 and 4 p.m. helps your circadian clock to ‘fall back’ in time, making it easier to wake up earlier,” Dr. Heisz said. If you need to train your body to wake up later in the morning, try working out between 7 p.m. and 10 p.m.
All exercise is good, but the right timing can make it even better
“The best time to exercise is when you can fit it in,” Dr. Arciero said. “You’ve got to choose the time that fits your lifestyle best.”
But context matters, he noted.
“For someone needing to achieve an improvement in their risk for cardiometabolic disease,” his study shows an advantage to working out later in the day, especially for men. If you’re more focused on building upper-body strength and power, you’ll probably get better results from training in the afternoon or evening.
And for fat loss, the Skidmore study shows better results for women who did morning workouts.
And if you’re still not sure? Try sleeping on it – preferably after your workout.
A version of this article first appeared on WebMD.com.
FROM FRONTIERS IN PHYSIOLOGY