For CLL, BTKi combo bests chemoimmunotherapy

Article Type
Changed
Fri, 08/11/2023 - 10:13

A new interim analysis of a large randomized, phase 3 trial provides more evidence that a combination of ibrutinib and rituximab is a better option for younger patients with untreated chronic lymphocytic leukemia (CLL) than the once-standard combination of fludarabine, cyclophosphamide, and rituximab (FCR).

The analysis of the open-label FLAIR trial, published in The Lancet Oncology, tracked 771 patients with CLL for a median follow-up of 53 months (interquartile range, 41-61 months) and found that median progression-free survival was not reached with ibrutinib/rituximab versus 67 months with FCR (hazard ratio, 0.44; P < .0001).

“This paper is another confirmation to say that Bruton’s tyrosine kinase inhibitors are more powerful than even our strongest chemoimmunotherapy. That’s very reassuring,” said hematologist/oncologist Jan A. Burger, MD, PhD, of the University of Texas MD Anderson Cancer Center, Houston, in an interview. He did not take part in the analysis but is familiar with its findings.

There are caveats to the study. More patients in the ibrutinib/rituximab arm died of cardiac events, possibly reflecting a known risk of those drugs. And for unclear reasons, there was no difference in overall survival – a secondary endpoint – between the groups. The study authors speculate that this may be because some patients on FCR progressed and turned to effective second-line drugs.

Still, the findings are consistent with the landmark E1912 trial, the authors wrote, and add “to a body of evidence that suggests that the use of ibrutinib-based regimens should be considered for patients with previously untreated CLL, especially those with IGHV-unmutated CLL.”

The study, partially funded by industry, was led by Peter Hillmen, PhD, of Leeds (England) Cancer Center.

According to Dr. Burger, FCR was the standard treatment for younger, fitter patients with CLL about 10-15 years ago. Then Bruton’s tyrosine kinase inhibitors such as ibrutinib entered the picture. But, as the new report notes, initial studies focused on older patients who weren’t considered fit enough to tolerate FCR.

The new study, like the E1912 trial, aimed to compare ibrutinib-rituximab versus FCR in younger, fitter patients.

From 2014 to 2018, researchers assigned 771 patients (median age, 62 years; IQR, 56-67; 73% male; 95% White; 66% with World Health Organization performance status 0) to FCR (n = 385) or ibrutinib/rituximab (n = 386).

Nearly three-quarters (74%) in the FCR group received six cycles of therapy, and 97% of those in the ibrutinib-rituximab group received six cycles of rituximab. Those in the ibrutinib-rituximab group also received daily doses of ibrutinib. Doses could be modified. The data cutoff was May 24, 2021.

Notably, there was no improvement in overall survival in the ibrutinib/rituximab group: 92.1% of those patients were alive at 4 years versus 93.5% in the FCR group. This contrasts with the earlier E1912 study, in which the ibrutinib/rituximab group had improved overall survival.

However, the study authors noted that overall survival in the FCR group is higher than in earlier studies, perhaps reflecting the wider availability of targeted therapy. The final study analysis will offer more insight into overall survival.

In an interview, hematologist David A. Bond, MD, of Ohio State University, Columbus, who is familiar with the study findings, said “the lack of an improvement in overall survival could be due to differences in available treatments at relapse, as the FLAIR study was conducted more recently than the prior E1912 study.” He added that “the younger ages in the E1912 study may have led to less risk for cardiovascular events or deaths for the patients treated with ibrutinib in the E1912 study.”

The previous E1912 trial showed a larger effect for ibrutinib/rituximab versus FCR on progression-free survival (HR, 0.37; P < .001 for E1912 vs. HR, 0.44; P < .0001 for FLAIR). However, the study authors noted that the FLAIR trial had older subjects (mean age, 62 years vs. 56.7 years in the E1912 trial).

As for grade 3 or 4 adverse events, leukopenia was most common in the FCR group (n = 203, 54%), compared with the ibrutinib/rituximab group (n = 55, 14%). Serious adverse events were reported in 205 (53%) of patients in the ibrutinib/rituximab group versus 203 (54%) patients in the FCR group.

All-cause infections, myelodysplastic syndrome, acute myeloid leukemia, Richter’s transformation, and other diagnosed cancers were rare but more common in the FCR group. Deaths from COVID-19 were the same, at three in each group; 2 of 29 deaths in the FCR group and 3 of 30 deaths in the ibrutinib/rituximab group were considered likely linked to treatment.

Sudden unexplained or cardiac deaths were more common in the ibrutinib-rituximab group (n = 8, 2%) vs. the FCR group (n = 2, less than 1%).

Dr. Bond said “one of the takeaways for practicing hematologists from the FLAIR study is that cardiovascular complications and sudden cardiac death are clearly an issue for older patients with hypertension treated with ibrutinib. Patients should be monitored for signs or symptoms of cardiovascular disease and have close management of blood pressure.” 

Dr. Burger also noted that cardiac problems are a known risk of ibrutinib. “Fortunately, we have second-generation Bruton’s tyrosine kinase inhibitors that could be chosen for patients when we are worried about side effects.”

He said that chemotherapy remains the preferred – or only – treatment in some parts of the world. And patients may prefer FCR to ibrutinib because of the latter drug’s side effects or a preference for therapy that doesn’t take as long.

The study was funded by Cancer Research UK and Janssen. The study authors reported relationships with companies such as Lilly, Janssen, AbbVie, AstraZeneca, BeiGene, Gilead, and many others. Dr. Burger reports financial support for clinical trials from Pharmacyclics, AstraZeneca, Biogen, and Janssen. Dr. Bond reported no disclosures.


FROM THE LANCET ONCOLOGY


On the best way to exercise

Article Type
Changed
Wed, 08/09/2023 - 13:05

This transcript has been edited for clarity.

I’m going to talk about something important to a lot of us, based on a new study that has just come out that promises to tell us the right way to exercise. This is a major issue as we think about the best ways to stay healthy.

There are basically two main types of exercise that exercise physiologists think about. There are aerobic exercises: the cardiovascular things like running on a treadmill or outside. Then there are muscle-strengthening exercises: lifting weights, calisthenics, and so on. And of course, plenty of exercises do both at the same time.

It seems that the era of aerobic exercise as the main way to improve health was the 1980s and early 1990s. Then we started to increasingly recognize that muscle-strengthening exercise was really important too. We’ve got a ton of data on the benefits of cardiovascular and aerobic exercise (a reduced risk for cardiovascular disease, cancer, and all-cause mortality, and even improved cognitive function) across a variety of study designs, including cohort studies, but also some randomized controlled trials where people were randomized to aerobic activity.

We’re starting to get more data on the benefits of muscle-strengthening exercises, although it hasn’t been in the zeitgeist as much. Obviously, this increases strength and may reduce visceral fat, increase anaerobic capacity and muscle mass, and therefore [increase the] basal metabolic rate. What is really interesting about muscle strengthening is that muscle just takes up more energy at rest, so building bigger muscles increases your basal energy expenditure and increases insulin sensitivity because muscle is a good insulin sensitizer.

So, do you do both? Do you do one? Do you do the other? What’s the right answer here?

It depends on whom you ask. The Centers for Disease Control and Prevention’s recommendation, which changes from time to time, is that you should do at least 150 minutes a week of moderate-intensity aerobic activity. Anything that gets your heart beating faster counts here. So that’s 30 minutes, 5 days a week. They also say you can do 75 minutes a week of vigorous-intensity aerobic activity – something that really gets your heart rate up and has you breaking a sweat. They also recommend at least 2 days a week of a muscle-strengthening activity that makes your muscles work harder than usual, whether that’s push-ups or lifting weights or something like that.

The World Health Organization’s guidance is similar, but rather than a single 150-minute target, it calls for at least 150 and up to 300 minutes of moderate-intensity physical activity, or 75-150 minutes of vigorous-intensity aerobic physical activity, per week. The WHO sets a floor and a range, whereas the CDC sets a target. The WHO also recommends 2 days of muscle strengthening per week for optimal health.
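As a rough illustration, the CDC aerobic target can be checked with a small helper that treats 1 vigorous minute as equivalent to 2 moderate minutes – a common convention assumed here for the sketch, not something taken from the study:

```python
def meets_cdc_aerobic(moderate_min: int, vigorous_min: int) -> bool:
    """Check the CDC weekly aerobic guideline: 150 moderate-intensity
    minutes, 75 vigorous-intensity minutes, or an equivalent mix
    (each vigorous minute counted as 2 moderate minutes)."""
    return moderate_min + 2 * vigorous_min >= 150

def meets_cdc_msa(msa_days: int) -> bool:
    """Check the CDC muscle-strengthening guideline: at least 2 days/week."""
    return msa_days >= 2

# 30 minutes of brisk walking 5 days a week meets the aerobic target.
print(meets_cdc_aerobic(150, 0))   # True
# 25 vigorous minutes on 3 days also meets it (75 vigorous minutes total).
print(meets_cdc_aerobic(0, 75))    # True
print(meets_cdc_msa(1))            # False
```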

But what do the data show? Why am I talking about this? It’s because of this new study in JAMA Internal Medicine by Ruben Lopez Bueno and colleagues. I’m going to focus on all-cause mortality for brevity, but the results are broadly similar.

The data source is the U.S. National Health Interview Survey. A total of 500,705 people took part in the survey and answered a slew of questions, including self-reports of their exercise amounts, and were followed for a median of about 10 years for outcomes such as cardiovascular deaths, cancer deaths, and so on.

The survey classified people into different exercise categories – how much time they spent doing moderate physical activity (MPA), vigorous physical activity (VPA), or muscle-strengthening activity (MSA).

[Figure omitted; credit: Dr. Wilson]
There are six categories based on duration of MPA (the WHO targets are highlighted in green), four categories based on length of time of VPA, and two categories of MSA (≥ or < two times per week). This gives a total of 48 possible combinations of exercise you could do in a typical week.
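The 48 combinations come straight from multiplying the category counts (6 × 4 × 2); a quick sketch, with illustrative bin labels rather than the study's exact cut points:

```python
from itertools import product

# Illustrative weekly-minute bins; the study's exact cut points may differ.
mpa_bins = ["0", "1-74", "75-149", "150-224", "225-299", ">=300"]  # 6 MPA categories
vpa_bins = ["0", "1-74", "75-149", ">=150"]                        # 4 VPA categories
msa_bins = ["<2 days/week", ">=2 days/week"]                       # 2 MSA categories

# Every (MPA, VPA, MSA) combination a respondent could fall into.
combos = list(product(mpa_bins, vpa_bins, msa_bins))
print(len(combos))  # 48
```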

[Figure omitted; source: JAMA Internal Medicine]
Here are the percentages of people who fell into each of these 48 potential categories. The largest is the 35% of people who fell into the “nothing” category (no MPA, no VPA, and less than two sessions per week of MSA). These “nothing” people are going to be a reference category moving forward.

[Figure omitted; source: JAMA Internal Medicine]
So who are these people? On the far left are the 361,000 people (the vast majority) who don’t hit that 150 minutes a week of MPA or 75 minutes a week of VPA, and they don’t do 2 days a week of MSA. The other three categories are increasing amounts of exercise. Younger people seem to be doing more exercise at the higher ends, and men are more likely to be doing exercise at the higher end. There are also some interesting findings from the alcohol drinking survey. The people who do more exercise are more likely to be current drinkers. This is interesting. I confirmed these data with the investigator. This might suggest one of the reasons why some studies have shown that drinkers have better outcomes in terms of either cardiovascular or cognitive outcomes over time. There’s a lot of conflicting data there, but in part, it might be that healthier people might drink more alcohol. It could be a socioeconomic phenomenon as well.

Now, what blew my mind were the smoker numbers, but don’t get too excited about them. From the table in JAMA Internal Medicine, it looks like 20% of the people who don’t do much exercise smoke, and then something like 60% of the people who do more exercise smoke. That can’t be right. So I checked with the lead study author: there is a mistake in those columns – the “never smoker” and “current smoker” numbers were inadvertently swapped. In fact, just 15.2% of those who exercise a lot are current smokers, not 63.8%. This has been fixed online, but just in case you saw it and were as confused as I was that these incredibly healthy smokers are out there exercising all the time, it was just a typo.

[Figure omitted; credit: Dr. Wilson]
There is bias here. One of the big ones is called reverse causation bias. This is what might happen if, let’s say you’re already sick, you have cancer, you have some serious cardiovascular disease, or heart failure. You can’t exercise that much. You physically can’t do it. And then if you die, we wouldn’t find that exercise is beneficial. We would see that sicker people aren’t as able to exercise. The investigators got around this a bit by excluding mortality events within 2 years of the initial survey. Anyone who died within 2 years after saying how often they exercised was not included in this analysis.
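The investigators' 2-year exclusion is essentially a landmark-style filter; here is a minimal sketch with made-up records and field names (the study's actual dataset and variables are not reproduced here):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Participant:
    survey_year: float
    death_year: Optional[float]  # None if alive at end of follow-up

def landmark_filter(cohort, years=2.0):
    """Drop anyone who died within `years` of the baseline survey,
    reducing reverse-causation bias (already-sick people who both
    exercise less and die sooner)."""
    return [p for p in cohort
            if p.death_year is None or p.death_year - p.survey_year >= years]

cohort = [
    Participant(2005, 2006),  # died 1 year in -> excluded
    Participant(2005, 2012),  # died 7 years in -> kept
    Participant(2005, None),  # alive at end of follow-up -> kept
]
print(len(landmark_filter(cohort)))  # 2
```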

A second issue is the healthy exerciser, or healthy user, effect: people who exercise a lot probably do other healthy things; they might eat better or get out in the sun more. Researchers try to get around this through multivariable adjustment. They adjust for age, sex, race, marital status, etc. No adjustment is perfect; there’s always residual confounding. But this is probably the best you can do with a dataset like the one they had access to.

[Figure omitted; source: JAMA Internal Medicine]
Let’s go to the results, which are nicely heat-mapped in the paper. They’re divided into people who do less or more than 2 days of MSA. The reference group to keep in mind is the people who don’t do anything. The highest mortality, 9.8 deaths per 1,000 person-years, is seen in the group that reported no MPA, no VPA, and less than 2 days a week of MSA.

As you move up and to the right (more VPA and MPA), you see lower numbers. The lowest number was 4.9 among people who reported more than 150 minutes per week of VPA and 2 days of MSA.
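Rates like these are simply deaths divided by accumulated person-time; a quick worked example with invented counts (not the study's raw numbers, which aren't reproduced here):

```python
def rate_per_1000_person_years(deaths: int, person_years: float) -> float:
    """Mortality rate expressed per 1,000 person-years of follow-up."""
    return deaths / person_years * 1000

# e.g., 490 deaths over 100,000 person-years gives 4.9 per 1,000
# person-years, the scale of the best-performing group in the study.
print(rate_per_1000_person_years(490, 100_000))
```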

Looking at these data, the benefit, or bang for your buck, is higher for VPA than for MPA. Getting 2 days of MSA also tends to reduce overall mortality. This is not necessarily causal, but the association is fairly strong and consistent across all the different groups.

So, what are we supposed to do here? I think the clearest finding from the study is that anything is better than nothing. The study also suggests that, if you’re physically able, you should push on the vigorous activity. And of course, layering in the MSA as well seems to be associated with benefit.

Like everything in life, there’s no one simple solution; it’s a mix. But we can tell ourselves and our patients to get out there if we can, break a sweat as often as possible during the week, and take a couple of days to build those muscles a little bigger, which may increase insulin sensitivity and basal metabolic rate. Is it guaranteed to extend life? No. This is an observational study; we don’t have causal data here, but it’s unlikely to cause much harm. I’m particularly happy that people are doing a much better job now of really dissecting out the kinds of physical activity that are beneficial. It turns out that all of it is, and probably a mixture is best.

Dr. Wilson is associate professor, department of medicine, and interim director, program of applied translational research, Yale University, New Haven, Conn. He disclosed no relevant financial relationships.

A version of this article appeared on Medscape.com.


This is known as the healthy exerciser or healthy user effect. Sometimes this means that people who exercise a lot probably do other healthy things; they might eat better or get out in the sun more. Researchers try to get around this through multivariable adjustment. They adjust for age, sex, race, marital status, etc. No adjustment is perfect. There’s always residual confounding. But this is probably the best you can do with the dataset like the one they had access to.

JAMA Internal Medicine


Let’s go to the results, which are nicely heat-mapped in the paper. They’re divided into people who have less or more than 2 days of MSA. Our reference groups that we want to pay attention to are the people who don’t do anything. The highest mortality of 9.8 individuals per 1,000 person-years is seen in the group that reported no moderate physical activity, no VPA, and less than 2 days a week of MSA.

As you move up and to the right (more VPA and MPA), you see lower numbers. The lowest number was 4.9 among people who reported more than 150 minutes per week of VPA and 2 days of MSA.

Looking at these data, the benefit, or the bang for your buck is higher for VPA than for MPA. Getting 2 days of MSA does have a tendency to reduce overall mortality. This is not necessarily causal, but it is rather potent and consistent across all the different groups.

So, what are we supposed to do here? I think the most clear finding from the study is that anything is better than nothing. This study suggests that if you are going to get activity, push on the vigorous activity if you’re physically able to do it. And of course, layering in the MSA as well seems to be associated with benefit.

Like everything in life, there’s no one simple solution. It’s a mix. But telling ourselves and our patients to get out there if you can and break a sweat as often as you can during the week, and take a couple of days to get those muscles a little bigger, may increase insulin sensitivity and basal metabolic rate – is it guaranteed to extend life? No. This is an observational study. We can’t say; we don’t have causal data here, but it’s unlikely to cause much harm. I’m particularly happy that people are doing a much better job now of really dissecting out the kinds of physical activity that are beneficial. It turns out that all of it is, and probably a mixture is best.

Dr. Wilson is associate professor, department of medicine, and interim director, program of applied translational research, Yale University, New Haven, Conn. He disclosed no relevant financial relationships.

A version of this article appeared on Medscape.com.

This transcript has been edited for clarity.

I’m going to talk about something important to a lot of us, based on a new study that has just come out that promises to tell us the right way to exercise. This is a major issue as we think about the best ways to stay healthy.

There are basically two main types of exercise that exercise physiologists think about. There are aerobic exercises: the cardiovascular things like running on a treadmill or outside. Then there are muscle-strengthening exercises: lifting weights, calisthenics, and so on. And of course, plenty of exercises do both at the same time.

It seems that the era of aerobic exercise as the main way to improve health was the 1980s and early 1990s. Then we started to increasingly recognize that muscle-strengthening exercise was really important too. We’ve got a ton of data on the benefits of cardiovascular and aerobic exercise (a reduced risk for cardiovascular disease, cancer, and all-cause mortality, and even improved cognitive function) across a variety of study designs, including cohort studies, but also some randomized controlled trials where people were randomized to aerobic activity.

We’re starting to get more data on the benefits of muscle-strengthening exercises, although it hasn’t been in the zeitgeist as much. Obviously, this increases strength and may reduce visceral fat, increase anaerobic capacity and muscle mass, and therefore [increase the] basal metabolic rate. What is really interesting about muscle strengthening is that muscle just takes up more energy at rest, so building bigger muscles increases your basal energy expenditure and increases insulin sensitivity because muscle is a good insulin sensitizer.

So, do you do both? Do you do one? Do you do the other? What’s the right answer here?

It depends on who you ask. The Centers for Disease Control and Prevention's recommendation, which changes from time to time, is that you should do at least 150 minutes a week of moderate-intensity aerobic activity. Anything that gets your heart beating faster counts here. So that's 30 minutes, 5 days a week. They also say you can instead do 75 minutes a week of vigorous-intensity aerobic activity – something that really gets your heart rate up and has you breaking a sweat. They also recommend at least 2 days a week of a muscle-strengthening activity that makes your muscles work harder than usual, whether that's push-ups or lifting weights or something like that.

The World Health Organization is similar. They don't stop at 150 minutes a week; they say at least 150 and up to 300 minutes of moderate-intensity physical activity, or 75-150 minutes of vigorous-intensity aerobic physical activity. The CDC sets a single target, whereas the WHO sets a floor and then goes a bit higher. They also recommend 2 days of muscle strengthening per week for optimal health.
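As a rough sketch, those aerobic targets can be expressed as a simple check. Note one assumption not stated above: I use the commonly cited 2-to-1 equivalence, in which 1 vigorous minute counts as 2 moderate minutes.

```python
def meets_aerobic_guideline(mpa_min, vpa_min):
    """CDC-style aerobic target: >=150 min/week of moderate activity,
    >=75 min/week of vigorous activity, or an equivalent mix
    (assumption: 1 vigorous minute counts as 2 moderate minutes)."""
    return mpa_min + 2 * vpa_min >= 150

def meets_full_guideline(mpa_min, vpa_min, msa_days):
    # Both agencies also recommend muscle strengthening on >=2 days/week.
    return meets_aerobic_guideline(mpa_min, vpa_min) and msa_days >= 2

print(meets_full_guideline(mpa_min=150, vpa_min=0, msa_days=2))  # True
print(meets_full_guideline(mpa_min=0, vpa_min=75, msa_days=2))   # True
print(meets_full_guideline(mpa_min=100, vpa_min=0, msa_days=3))  # False
```

The mixed case works too: 100 moderate minutes plus 25 vigorous minutes counts as 150 moderate-equivalent minutes.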

But what do the data show? Why am I talking about this? It’s because of this new study in JAMA Internal Medicine by Ruben Lopez Bueno and colleagues. I’m going to focus on all-cause mortality for brevity, but the results are broadly similar.

The data source is the U.S. National Health Interview Survey. A total of 500,705 people took part in the survey and answered a slew of questions, including self-reports of their exercise amounts. Median follow-up was about 10 years, looking for outcomes like cardiovascular deaths, cancer deaths, and so on.

The survey classified people into different exercise categories – how much time they spent doing moderate physical activity (MPA), vigorous physical activity (VPA), or muscle-strengthening activity (MSA).

There are six categories based on duration of MPA (the WHO targets are highlighted in green), four categories based on length of time of VPA, and two categories of MSA (≥ or < two times per week). This gives a total of 48 possible combinations of exercise you could do in a typical week.
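The 48 combinations are just the product of the three category counts. A quick sketch (the bin labels here are illustrative assumptions; the paper's exact cut points differ):

```python
from itertools import product

# Illustrative bin labels, in minutes per week except MSA (sessions per week)
mpa_levels = ["0", "1-74", "75-149", "150-224", "225-299", ">=300"]  # 6 MPA bins
vpa_levels = ["0", "1-74", "75-149", ">=150"]                        # 4 VPA bins
msa_levels = ["<2 sessions/wk", ">=2 sessions/wk"]                   # 2 MSA bins

# Every possible weekly exercise profile is one (MPA, VPA, MSA) triple
combos = list(product(mpa_levels, vpa_levels, msa_levels))
print(len(combos))  # 48
```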

Here are the percentages of people who fell into each of these 48 potential categories. The largest is the 35% of people who fell into the “nothing” category (no MPA, no VPA, and less than two sessions per week of MSA). These “nothing” people are going to be a reference category moving forward.

So who are these people? On the far left are the 361,000 people (the vast majority) who don't hit that 150 minutes a week of MPA or 75 minutes a week of VPA, and who don't do 2 days a week of MSA. The other three categories represent increasing amounts of exercise. Younger people seem to be doing more exercise at the higher ends, and men are more likely to be at the higher end. There are also some interesting findings on alcohol: The people who do more exercise are more likely to be current drinkers. This is interesting; I confirmed these data with the investigator. It might suggest one of the reasons why some studies have shown that drinkers have better outcomes in terms of either cardiovascular or cognitive outcomes over time. There's a lot of conflicting data there, but in part, it might be that healthier people drink more alcohol. It could be a socioeconomic phenomenon as well.

Now, what blew my mind were the smoking numbers, but don't get too excited. From the table in JAMA Internal Medicine, it looks like 20% of the people who don't exercise much smoke, and then something like 60% of the people who do more exercise smoke. That can't be right. So I checked with the lead study author: There was a mistake in those columns – the "never smoker" and "current smoker" numbers were flipped. In fact, just 15.2% of those who exercise a lot are current smokers, not 63.8%. This has been fixed online, but just in case you saw it and were as confused as I was that these incredibly healthy smokers are out there exercising all the time, it was just a typo.

There is bias here. One of the big ones is called reverse causation bias. This is what might happen if, let’s say you’re already sick, you have cancer, you have some serious cardiovascular disease, or heart failure. You can’t exercise that much. You physically can’t do it. And then if you die, we wouldn’t find that exercise is beneficial. We would see that sicker people aren’t as able to exercise. The investigators got around this a bit by excluding mortality events within 2 years of the initial survey. Anyone who died within 2 years after saying how often they exercised was not included in this analysis.
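That exclusion step is easy to picture in code. A minimal sketch, with hypothetical records (the field names are mine, not the survey's):

```python
# Hypothetical participant records: follow-up time from the baseline
# survey, and whether the participant died during follow-up
participants = [
    {"follow_up_years": 1.2, "died": True},   # died within 2 years
    {"follow_up_years": 8.0, "died": True},
    {"follow_up_years": 10.0, "died": False},
]

# Guard against reverse causation: drop anyone who died within 2 years
# of the baseline survey, since they may already have been too sick to exercise
analytic = [p for p in participants
            if not (p["died"] and p["follow_up_years"] < 2)]
print(len(analytic))  # 2
```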

A related problem is the healthy exerciser, or healthy user, effect: People who exercise a lot probably do other healthy things; they might eat better or get out in the sun more. Researchers try to get around this through multivariable adjustment. They adjust for age, sex, race, marital status, etc. No adjustment is perfect; there's always residual confounding. But this is probably the best you can do with a dataset like the one they had access to.

Let’s go to the results, which are nicely heat-mapped in the paper. They’re divided into people who do fewer than 2 days or at least 2 days of MSA. The reference group we want to pay attention to is the people who don’t do anything. The highest mortality, 9.8 deaths per 1,000 person-years, is seen in the group that reported no MPA, no VPA, and less than 2 days a week of MSA.

As you move up and to the right (more VPA and MPA), you see lower numbers. The lowest number was 4.9 among people who reported more than 150 minutes per week of VPA and 2 days of MSA.
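For a back-of-the-envelope sense of scale, we can compare the two heat-map extremes directly. Note that this is a crude rate ratio, not the study's adjusted hazard ratio:

```python
# Crude mortality rates from the paper's heat-map extremes,
# in deaths per 1,000 person-years (unadjusted)
rate_inactive = 9.8      # no MPA, no VPA, <2 days/week MSA
rate_most_active = 4.9   # >150 min/week VPA plus 2 days/week MSA

rate_ratio = rate_most_active / rate_inactive
print(round(rate_ratio, 2))       # 0.5
print(f"{(1 - rate_ratio):.0%}")  # 50% lower crude rate
```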

Looking at these data, the benefit – the bang for your buck – is higher for VPA than for MPA. Getting 2 days of MSA also tends to be associated with lower overall mortality. This is not necessarily causal, but it is rather potent and consistent across all the different groups.

So, what are we supposed to do here? I think the clearest finding from the study is that anything is better than nothing. It also suggests that if you are going to get activity, push on the vigorous activity if you’re physically able to do it. And of course, layering in the MSA as well seems to be associated with benefit.

Like everything in life, there’s no one simple solution; it’s a mix. Telling ourselves and our patients to get out there if we can, break a sweat as often as we can during the week, and take a couple of days to get those muscles a little bigger may increase insulin sensitivity and basal metabolic rate. Is it guaranteed to extend life? No. This is an observational study; we don’t have causal data here, but it’s unlikely to cause much harm. I’m particularly happy that people are doing a much better job now of really dissecting out the kinds of physical activity that are beneficial. It turns out that all of it is, and probably a mixture is best.

Dr. Wilson is associate professor, department of medicine, and interim director, program of applied translational research, Yale University, New Haven, Conn. He disclosed no relevant financial relationships.

A version of this article appeared on Medscape.com.


How useful are circulating tumor cells for early diagnosis?

Article Type
Changed
Wed, 08/09/2023 - 13:05

Treatment options for patients whose cancer is detected at a late stage are severely limited, which usually leads to an unfavorable prognosis. Indeed, the options available for patients with metastatic solid cancers are scarcely curative. Early diagnosis of neoplasia therefore remains a mainstay for improving outcomes for cancer patients.

Histopathology is the current gold standard for cancer diagnosis. Biopsy is an invasive procedure that provides physicians with further samples to test but that furnishes limited information concerning tumor heterogeneity. Biopsy specimens are usually obtained only when there is clinical evidence of neoplasia, which significantly limits their usefulness in early diagnosis.

Around 20 years ago, it was discovered that the presence of circulating tumor cells (CTC) in patients with metastatic breast cancer who were about to begin a new line of treatment was predictive of overall and progression-free survival. The prognostic value of CTC was independent of the line of treatment (first or second) and was greater than that of the site of metastasis, the type of therapy, and the time to metastasis after complete primary resection. These results support the idea that the presence of CTC could be used to modify the system for staging advanced disease.

Since then, research into liquid biopsy assays has expanded rapidly, and many biomarkers have been studied in various body fluids for their usefulness in assessing solid tumors.
 

Liquid vs. tissue

Liquid biopsy is a minimally invasive tool that is easy to use. It is employed to detect cancer, to assess treatment response, or to monitor disease progression. Liquid biopsy produces test material from primary and metastatic (or micrometastatic) sites and provides a more heterogeneous picture of the entire tumor cell population, compared with specimens obtained with tissue biopsy.

Metastasis

The notion that metastatic lesions are formed from cancer cells that have disseminated from advanced primary tumors has been substantially revised following the identification of disseminated tumor cells (DTC) in the bone marrow of patients with early-stage disease. These results have led researchers to no longer view cancer metastasis as a linear cascade of events but rather as a series of concurrent, partially overlapping processes, as metastasizing cells assume new phenotypes while abandoning older behaviors.

The initiation of metastasis is not simply a cell-autonomous event but is heavily influenced by complex tissue microenvironments. Although colonization of distant tissues by DTC is an extremely inefficient process, at times relatively numerous CTC can be detected in the blood of cancer patients (> 1,000 CTC/mL of blood plasma), whereas the number of clinically detectable metastases is disproportionately low, confirming that tumor cells can disseminate early even though clinically detectable metastases usually appear much later.
 

Early dissemination

Little is currently known about the preference of cancer subtypes for distinct tissues or about the receptiveness of a tissue as a metastatic site. What endures as one of the most confounding clinical phenomena is that patients may undergo tumor resection and remain apparently disease free for months, years, and even decades, only to experience relapse and be diagnosed with late-stage metastatic disease. This course may be a result of cell seeding from minimal residual disease after resection of the primary tumor or of preexisting clinically undetectable micrometastases. It may also arise from early disseminated cells that remain dormant and resistant to therapy until they suddenly reawaken to initiate proliferation into clinically detectable macrometastases.

Dormant DTC could be the main reason for delayed detection of metastases. It is thought that around 40% of patients with prostate cancer who undergo radical prostatectomy present with biochemical recurrence, suggesting that it is likely that hidden DTC or micrometastases are present at the time of the procedure. The finding is consistent with the detection of DTC many years after tumor resection, suggesting they were released before surgical treatment. Nevertheless, research into tumor cell dormancy is limited, owing to the invasive and technically challenging nature of obtaining DTC samples, which are predominantly taken from the bone marrow.
 

CTC metastases

Cancer cells can undergo epithelial-to-mesenchymal transition to facilitate their detachment from the primary tumor and intravasation into the blood circulation (step 1). Dissemination of cancer cells from the primary tumor into circulation can involve either single cells or cell clusters containing multiple CTC as well as immune cells and platelets, known as microemboli. CTC that can survive in circulation (step 2) can exit the bloodstream (step 3) and establish metastatic tumors (step 4), or they can enter dormancy and reside in distant organs, such as the bone marrow.

Use in practice

CTC were discovered over a century ago, but only in recent years has technology been sufficiently advanced to study CTC and to assess their usefulness as biomarkers. Recent evidence suggests that not only does the number of CTC increase during sleep and rest phases, but these CTC are also better able to metastasize, compared with those generated during periods of wakefulness or activity.

CTC clusters (microemboli) are defined as groups of two or more CTC. They can consist of CTC alone (homotypic) or can include various stromal cells, such as cancer-associated fibroblasts or platelets and immune cells (heterotypic). CTC clusters (with or without leukocytes) seem to have greater metastatic capacity, compared with individual CTC.

A multitude of characteristics can be measured in CTC, including genetics and epigenetics, as well as protein levels, which might help in understanding many processes involved in the formation of metastases.

Quantitative assessment of CTC could indicate tumor burden in patients with aggressive cancers, as has been seen in patients with primary lung cancer.
 

Early cancer diagnosis

Early research into CTC didn’t explore their usefulness in diagnosing early-stage tumors because it was thought that CTC were characteristic of advanced-stage disease. This hypothesis was later rejected following evidence that even very early cancer cells can invade local blood vessels, sometimes over a period of just several hours. This feature may allow CTC to be detected before the clinical diagnosis of cancer.

CTC have been detected in various neoplastic conditions: in breast cancer, seen in 20% of patients with stage I disease, in 26.8% with stage II disease, and 26.7% with stage III disease; in nonmetastatic colorectal cancer, including stage I and II disease; and in prostate cancer, seen in over 50% of patients with localized disease.

The presence of CTC has been proven to be an unfavorable prognostic predictor of overall survival among patients with early-stage non–small cell lung cancer. It distinguishes patients with pancreatic ductal adenocarcinoma from those with noncancerous pancreatic diseases with a sensitivity of 75% and a specificity of 96.3%.
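To put those operating characteristics in context, Bayes' rule converts sensitivity and specificity into a positive predictive value once a pretest probability is chosen. A minimal sketch, where the 5% pretest probability is a hypothetical figure for illustration, not a number from the study:

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Sensitivity/specificity from the pancreatic cancer comparison above;
# the 5% pretest probability is an assumed screening-type population.
print(round(ppv(0.75, 0.963, 0.05), 2))  # 0.52
```

Even with a specificity above 96%, roughly half of positives would be false positives at that low pretest probability, which is why such tests fare best in higher-risk populations.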

CTC positivity scoring (appropriately defined), combined with serum prostate-specific antigen level, was predictive of a biopsy diagnosis of clinically significant prostate cancer.

All these data support the utility of CTC in early cancer diagnosis. Their link with metastases, and thus with aggressive tumors, gives them an advantage over other (noninvasive or minimally invasive) biomarkers in the early identification of invasive tumors for therapeutic intervention with better cure rates.
 

This article was translated from Univadis Italy. A version appeared on Medscape.com.

Publications
Topics
Sections

Treatment options for patients with cancer that is detected at a late stage are severely limited, which usually leads to an unfavorable prognosis for such patients. Indeed, the options available for patients with metastatic solid cancers are scarcely curative. Therefore, early diagnosis of neoplasia remains a fundamental mainstay for improving outcomes for cancer patients.

Histopathology is the current gold standard for cancer diagnosis. Biopsy is an invasive procedure that provides physicians with further samples to test but that furnishes limited information concerning tumor heterogeneity. Biopsy specimens are usually obtained only when there is clinical evidence of neoplasia, which significantly limits their usefulness in early diagnosis.

Around 20 years ago, it was discovered that the presence of circulating tumor cells (CTC) in patients with metastatic breast cancer who were about to begin a new line of treatment was predictive of overall and progression-free survival. The prognostic value of CTC was independent of the line of treatment (first or second) and was greater than that of the site of metastasis, the type of therapy, and the time to metastasis after complete primary resection. These results support the idea that the presence of CTC could be used to modify the system for staging advanced disease.

Since then, research into liquid biopsy assays has expanded rapidly, and many biomarkers have been studied in various body fluids for their usefulness in assessing solid tumors.
 

Liquid vs. tissue

Liquid biopsy is a minimally invasive tool that is easy to use. It is employed to detect cancer, to assess treatment response, or to monitor disease progression. Liquid biopsy produces test material from primary and metastatic (or micrometastatic) sites and provides a more heterogeneous picture of the entire tumor cell population, compared with specimens obtained with tissue biopsy.

Metastasis

The notion that metastatic lesions are formed from cancer cells that have disseminated from advanced primary tumors has been substantially revised following the identification of disseminated tumor cells (DTC) in the bone marrow of patients with early-stage disease. These results have led researchers to no longer view cancer metastasis as a linear cascade of events but rather as a series of concurrent, partially overlapping processes, as metastasizing cells assume new phenotypes while abandoning older behaviors.

The initiation of metastasis is not simply a cell-autonomous event but is heavily influenced by complex tissue microenvironments. Although colonization of distant tissues by DTC is an extremely inefficient process, at times, relatively numerous CTC can be detected in the blood of cancer patients (> 1,000 CTC/mL of blood plasma), whereas the number of clinically detectable metastases is disproportionately low, confirming that tumor cell diffusion can happen at an early stage but usually occurs later on.
 

Early dissemination

Little is currently known about the preference of cancer subtypes for distinct tissues or about the receptiveness of a tissue as a metastatic site. What endures as one of the most confounding clinical phenomena is that patients may undergo tumor resection and remain apparently disease free for months, years, and even decades, only to experience relapse and be diagnosed with late-stage metastatic disease. This course may be a result of cell seeding from minimal residual disease after resection of the primary tumor or of preexisting clinically undetectable micrometastases. It may also arise from early disseminated cells that remain dormant and resistant to therapy until they suddenly reawaken to initiate proliferation into clinically detectable macrometastases.

Dormant DTC could be the main reason for delayed detection of metastases. It is thought that around 40% of patients with prostate cancer who undergo radical prostatectomy present with biochemical recurrence, suggesting that it is likely that hidden DTC or micrometastases are present at the time of the procedure. The finding is consistent with the detection of DTC many years after tumor resection, suggesting they were released before surgical treatment. Nevertheless, research into tumor cell dormancy is limited, owing to the invasive and technically challenging nature of obtaining DTC samples, which are predominantly taken from the bone marrow.
 

CTC metastases

Cancer cells can undergo epithelial-to-mesenchymal transition to facilitate their detachment from the primary tumor and intravasation into the blood circulation (step 1). Dissemination of cancer cells from the primary tumor into circulation can involve either single cells or cell clusters containing multiple CTC as well as immune cells and platelets, known as microemboli. CTC that can survive in circulation (step 2) can exit the bloodstream (step 3) and establish metastatic tumors (step 4), or they can enter dormancy and reside in distant organs, such as the bone marrow.

Use in practice

CTC were discovered over a century ago, but only in recent years has technology been sufficiently advanced to study CTC and to assess their usefulness as biomarkers. Recent evidence suggests that not only do the number of CTC increase during sleep and rest phases but also that these CTC are better able to metastasize, compared to those generated during periods of wakefulness or activity.

CTC clusters (microemboli) are defined as groups of two or more CTC. They can consist of CTC alone (homotypic) or can include various stromal cells, such as cancer-associated fibroblasts or platelets and immune cells (heterotypic). CTC clusters (with or without leukocytes) seem to have greater metastatic capacity, compared with individual CTC.

A multitude of characteristics can be measured in CTC, including genetics and epigenetics, as well as protein levels, which might help in understanding many processes involved in the formation of metastases.

Quantitative assessment of CTC could indicate tumor burden in patients with aggressive cancers, as has been seen in patients with primary lung cancer.
 

Early cancer diagnosis

Early research into CTC didn’t explore their usefulness in diagnosing early-stage tumors because it was thought that CTC were characteristic of advanced-stage disease. This hypothesis was later rejected following evidence of local intravascular invasion of very early cancer cells, even over a period of several hours. This feature may allow CTC to be detected before the clinical diagnosis of cancer.

CTC have been detected in various neoplastic conditions: in breast cancer, seen in 20% of patients with stage I disease, in 26.8% with stage II disease, and 26.7% with stage III disease; in nonmetastatic colorectal cancer, including stage I and II disease; and in prostate cancer, seen in over 50% of patients with localized disease.

The presence of CTC has been proven to be an unfavorable prognostic predictor of overall survival among patients with early-stage non–small cell lung cancer. It distinguishes patients with pancreatic ductal adenocarcinoma from those with noncancerous pancreatic diseases with a sensitivity of 75% and a specificity of 96.3%.

CTC positivity scoring (appropriately defined), combined with serum prostate-specific antigen level, was predictive of a biopsy diagnosis of clinically significant prostate cancer.

All these data support the utility of CTC in early cancer diagnosis. Their link with metastases, and thus with aggressive tumors, gives them an advantage over other (noninvasive or minimally invasive) biomarkers in the early identification of invasive tumors for therapeutic intervention with better cure rates.
 

This article was translated from Univadis Italy. A version appeared on Medscape.com.

Treatment options for patients with cancer that is detected at a late stage are severely limited, which usually leads to an unfavorable prognosis for such patients. Indeed, the options available for patients with metastatic solid cancers are scarcely curative. Therefore, early diagnosis of neoplasia remains a fundamental mainstay for improving outcomes for cancer patients.

Histopathology is the current gold standard for cancer diagnosis. Biopsy is an invasive procedure that provides physicians with further samples to test but that furnishes limited information concerning tumor heterogeneity. Biopsy specimens are usually obtained only when there is clinical evidence of neoplasia, which significantly limits their usefulness in early diagnosis.

Around 20 years ago, it was discovered that the presence of circulating tumor cells (CTC) in patients with metastatic breast cancer who were about to begin a new line of treatment was predictive of overall and progression-free survival. The prognostic value of CTC was independent of the line of treatment (first or second) and was greater than that of the site of metastasis, the type of therapy, and the time to metastasis after complete primary resection. These results support the idea that the presence of CTC could be used to modify the system for staging advanced disease.

Since then, research into liquid biopsy assays has expanded rapidly, and many biomarkers have been studied in various body fluids for their usefulness in assessing solid tumors.
 

Liquid vs. tissue

Liquid biopsy is a minimally invasive tool that is easy to use. It is employed to detect cancer, to assess treatment response, or to monitor disease progression. Liquid biopsy produces test material from primary and metastatic (or micrometastatic) sites and provides a more heterogeneous picture of the entire tumor cell population, compared with specimens obtained with tissue biopsy.

Metastasis

The notion that metastatic lesions are formed from cancer cells that have disseminated from advanced primary tumors has been substantially revised following the identification of disseminated tumor cells (DTC) in the bone marrow of patients with early-stage disease. These results have led researchers to no longer view cancer metastasis as a linear cascade of events but rather as a series of concurrent, partially overlapping processes, as metastasizing cells assume new phenotypes while abandoning older behaviors.

The initiation of metastasis is not simply a cell-autonomous event but is heavily influenced by complex tissue microenvironments. Although colonization of distant tissues by DTC is an extremely inefficient process, at times relatively numerous CTC can be detected in the blood of cancer patients (> 1,000 CTC/mL of blood plasma), whereas the number of clinically detectable metastases is disproportionately low, confirming that tumor cell dissemination can happen at an early stage but usually occurs later on.
 

Early dissemination

Little is currently known about the preference of cancer subtypes for distinct tissues or about the receptiveness of a tissue as a metastatic site. What endures as one of the most confounding clinical phenomena is that patients may undergo tumor resection and remain apparently disease free for months, years, and even decades, only to experience relapse and be diagnosed with late-stage metastatic disease. This course may be a result of cell seeding from minimal residual disease after resection of the primary tumor or of preexisting clinically undetectable micrometastases. It may also arise from early disseminated cells that remain dormant and resistant to therapy until they suddenly reawaken to initiate proliferation into clinically detectable macrometastases.

Dormant DTC could be the main reason for delayed detection of metastases. It is thought that around 40% of patients with prostate cancer who undergo radical prostatectomy present with biochemical recurrence, suggesting that it is likely that hidden DTC or micrometastases are present at the time of the procedure. The finding is consistent with the detection of DTC many years after tumor resection, suggesting they were released before surgical treatment. Nevertheless, research into tumor cell dormancy is limited, owing to the invasive and technically challenging nature of obtaining DTC samples, which are predominantly taken from the bone marrow.
 

CTC metastases

Cancer cells can undergo epithelial-to-mesenchymal transition to facilitate their detachment from the primary tumor and intravasation into the blood circulation (step 1). Dissemination of cancer cells from the primary tumor into circulation can involve either single cells or cell clusters containing multiple CTC as well as immune cells and platelets, known as microemboli. CTC that can survive in circulation (step 2) can exit the bloodstream (step 3) and establish metastatic tumors (step 4), or they can enter dormancy and reside in distant organs, such as the bone marrow.

Use in practice

CTC were discovered over a century ago, but only in recent years has technology advanced sufficiently to study them and to assess their usefulness as biomarkers. Recent evidence suggests not only that the number of CTC increases during sleep and rest phases but also that these CTC are better able to metastasize, compared with those generated during periods of wakefulness or activity.

CTC clusters (microemboli) are defined as groups of two or more CTC. They can consist of CTC alone (homotypic) or can include various stromal cells, such as cancer-associated fibroblasts or platelets and immune cells (heterotypic). CTC clusters (with or without leukocytes) seem to have greater metastatic capacity, compared with individual CTC.

A multitude of characteristics can be measured in CTC, including genetic and epigenetic features as well as protein levels, which might help in understanding many of the processes involved in the formation of metastases.

Quantitative assessment of CTC could indicate tumor burden in patients with aggressive cancers, as has been seen in patients with primary lung cancer.
 

Early cancer diagnosis

Early research into CTC didn’t explore their usefulness in diagnosing early-stage tumors because it was thought that CTC were characteristic of advanced-stage disease. This hypothesis was later rejected following evidence that even very early cancer cells can invade local blood vessels, sometimes within a matter of hours. This feature may allow CTC to be detected before the clinical diagnosis of cancer.

CTC have been detected in various neoplastic conditions: in breast cancer, seen in 20% of patients with stage I disease, in 26.8% with stage II disease, and 26.7% with stage III disease; in nonmetastatic colorectal cancer, including stage I and II disease; and in prostate cancer, seen in over 50% of patients with localized disease.

The presence of CTC has been proven to be an unfavorable prognostic predictor of overall survival among patients with early-stage non–small cell lung cancer. It distinguishes patients with pancreatic ductal adenocarcinoma from those with noncancerous pancreatic diseases with a sensitivity of 75% and a specificity of 96.3%.
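
The sensitivity and specificity reported above translate into a predictive value only once a pretest probability is assumed. A minimal Bayes'-rule sketch (the 50% pretest probability below is a hypothetical figure chosen purely for illustration, not a value from the study):

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """Bayes' rule: probability of disease given a positive test result."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Reported CTC performance for pancreatic ductal adenocarcinoma
# vs. noncancerous pancreatic disease.
sens, spec = 0.75, 0.963

# Hypothetical pretest probability of 50%, for illustration only.
print(round(positive_predictive_value(sens, spec, 0.5), 3))  # → 0.953
```

At lower pretest probabilities, as in a screening setting, the same sensitivity and specificity would yield a much lower predictive value, which is why the diagnostic context matters.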

CTC positivity scoring (appropriately defined), combined with serum prostate-specific antigen level, was predictive of a biopsy diagnosis of clinically significant prostate cancer.

All these data support the utility of CTC in early cancer diagnosis. Their link with metastases, and thus with aggressive tumors, gives them an advantage over other (noninvasive or minimally invasive) biomarkers in the early identification of invasive tumors for therapeutic intervention with better cure rates.
 

This article was translated from Univadis Italy. A version appeared on Medscape.com.


Serious arrhythmias playing video games ‘extremely rare’

Article Type
Changed
Wed, 08/09/2023 - 12:58

Young people diagnosed with a genetic heart disease (GHD) predisposing them to ventricular arrhythmia are at very low risk for a cardiac event while playing video games or other electronic games, provided their condition is properly treated, researchers conclude on the basis of a large, single-center study.

Among more than 3,000 patients in the study with such a genetic vulnerability, just 6 – or less than 0.2% – experienced an electronic gaming–associated cardiac event.

A previous study had concluded that e-gaming, particularly with war games, might trigger potentially fatal arrhythmias in some vulnerable children. That study “sparked controversy in the field, with both clinicians and patients wondering whether electronic gaming is safe for patients with GHDs,” Michael J. Ackerman, MD, PhD, of Mayo Clinic in Rochester, Minn., said in an interview.

Dr. Ackerman and colleagues conducted the current study, published online in the Journal of the American College of Cardiology, to determine just how often e-gaming triggered cardiac events (CE) in these patients – and who was most at risk.
 

‘Extremely low’ risk

The investigators looked at records from all patients evaluated and treated at the Mayo Clinic’s genetic heart rhythm clinic from 2000 to 2022. They identified those who had been playing electronic games at the time of a CE, defined here as an event occurring before diagnosis, or of a breakthrough cardiac event (BCE), defined as an event occurring after diagnosis.

A total of 3,370 patients with a GHD (55% female) were included in the analysis. More than half (52%) were diagnosed with long-QT syndrome (LQTS). The remainder had various GHDs including, among others, catecholaminergic polymorphic ventricular tachycardia (CPVT) or hypertrophic cardiomyopathy (HCM).

The mean age at first evaluation was 27; 14% of the participants were age 6 or younger, 33% were age 7-20, and 53% were 21 or older. Most patients in each of the three age groups were diagnosed with either LQTS or CPVT.

Of the 3,370 GHD patients, 1,079 (32%) had a CE before diagnosis.

Six of these patients (0.5%) had a CE in the setting of e-gaming, including five for whom it was the sentinel CE. Five also had CEs in settings not involving e-gaming. Their average age at the time of the CE was 13.

Three of the six patients were diagnosed with CPVT (including two CPVT1 and one CPVT2). Of the others, one was diagnosed with LQT1, one with ventricular fibrillation triggered by premature ventricular contractions, and one with catecholamine-sensitive right ventricular outflow tract ventricular tachycardia (RVOT-VT).

After appropriate treatment, none of the six experienced a BCE during follow-ups ranging from 7 months to 4 years.

Among the full cohort of 3,370 patients with GHD, 431 (13%) experienced one or more BCE during follow-up. Of those, one with catecholamine-sensitive RVOT-VT experienced an e-gaming–associated BCE.
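
The headline rates can be checked directly from the counts reported above; a quick sketch using the article's figures:

```python
cohort = 3370        # patients with a genetic heart disease (GHD)
ce_before_dx = 1079  # patients with a cardiac event (CE) before diagnosis
egaming_ce = 6       # e-gaming-associated cardiac events
bce = 431            # patients with one or more breakthrough cardiac events

print(f"CE before diagnosis: {ce_before_dx / cohort:.0%}")   # ~32% of cohort
print(f"E-gaming-associated CE: {egaming_ce / cohort:.2%}")  # under 0.2% of cohort
print(f"BCE during follow-up: {bce / cohort:.0%}")           # ~13% of cohort
```

Note that the 0.5% figure quoted for the six e-gaming events is taken against the 1,079 patients with a CE, while the "less than 0.2%" figure is against the full cohort.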

“Although anecdotal e-gaming–associated cardiac events, including [sudden cardiac death], have been reported, the absolute risk is extremely low,” the authors wrote.

“Although there are no clear health benefits associated with e-gaming,” Dr. Ackerman said, “the risk of sudden death should not be used as an argument in an effort to curtail the amount of time patients spend e-gaming.”

Furthermore, he added, e-gaming is important to some patients’ quality of life. If patients are “properly diagnosed, risk stratified, and treated, it is okay to engage in e-gaming.”

However, “given that e-gaming may pose some risks, especially when compounded with additional factors such as dehydration, sleep deprivation, and use of performance-enhancing substances such as energy drinks, patients need to be counseled on the potential adverse health consequences,” Dr. Ackerman said.

“To this end,” he added, “we are proponents of incorporating e-gaming status into the clinical evaluation and electronic health record.”

“We would continue to urge common sense and individual risk assessment, with shared decision-making, for those where this may be an issue,” Claire M. Lawley, MBBS, PhD, Children’s Hospital at Westmead (Australia), said in an interview.

“Additionally, syncope during electronic gaming should prompt medical review,” said Dr. Lawley, lead author of the study that prompted Dr. Ackerman and colleagues to investigate the issue further.

Buddy system

Maully J. Shah, MBBS, led a study published in 2020 focusing on two case reports of syncope and potentially life-threatening ventricular arrhythmias provoked by emotional surges during play with violent video games. 

Nevertheless, “we do not restrict patients from participating in e-games,” Dr. Shah, a pediatric cardiac electrophysiologist at the Cardiac Center at Children’s Hospital of Philadelphia, said in an interview. “We inform them about the available data regarding the very rare but possible occurrence of an event from e-gaming so that they can make an informed decision.”

Dr. Shah agreed that, “even in children not known to have a cardiac condition, syncope associated with emotional responses during violent video games should prompt cardiac evaluation, similar to exercise-induced syncope.”

If a patient wishes to play e-games, clinicians should ensure medication compliance and recommend a “buddy” system. “Don’t be alone while playing,” she said.

“The present study and previous reports make one pause to think whether these CEs and catecholaminergic drives can occur with sports only. If we now consider electronic gaming as a potential risk, what other activities need to be included?” wrote the authors of an accompanying editorial, led by Shankar Baskar, MD, Cincinnati Children’s Medical Center.

“A catecholaminergic drive can occur in many settings with activities of daily living or activities not considered to be competitive,” the editorialists wrote. “Ultimately these events [are] rare, but they can have life-threatening consequences, and at the same time they might not be altogether preventable and, as in electronic gaming, might be an activity that improves quality of life, especially in those who might be restricted from other sports.”

Dr. Ackerman disclosed consulting for Abbott, Boston Scientific, Bristol-Myers Squibb, Daiichi Sankyo, Invitae, Medtronic, Tenaya Therapeutics, and UpToDate. Dr. Ackerman and the Mayo Clinic have license agreements with AliveCor, Anumana, ARMGO Pharma, Pfizer, and Thryv Therapeutics. The other coauthors reported no relevant relationships. Dr. Baskar and colleagues reported no relevant relationships. Dr. Shah disclosed she is a consultant to Medtronic.

A version of this article first appeared on Medscape.com.


Article Source

FROM THE JOURNAL OF THE AMERICAN COLLEGE OF CARDIOLOGY


Inhaling pleasant scents during sleep tied to a dramatic boost in cognition

Article Type
Changed
Tue, 09/05/2023 - 11:50

Inhaling a pleasant aroma during sleep has been linked to a “dramatic” improvement in memory, early research suggests.

In a small, randomized controlled trial, researchers found that when cognitively normal individuals were exposed to the scent of an essential oil for 2 hours every night over 6 months, they experienced a 226% improvement in memory compared with a control group that received only a trace amount of the diffused scent.

In addition, functional magnetic resonance imaging (fMRI) showed that those in the enriched group had improved functioning of the left uncinate fasciculus, an area of the brain linked to memory and cognition, which typically declines with age.

“To my knowledge, that level of [memory] improvement is far greater than anything that has been reported for healthy older adults and we also found a critical memory pathway in their brains improved to a similar extent relative to unenriched older adults,” senior investigator Michael Leon, PhD, professor emeritus, University of California, Irvine, said in an interview.

The study was published online in Frontiers in Neuroscience.
 

The brain’s “superhighway”

Olfactory enrichment “involves the daily exposure of individuals to multiple odorants” and has been shown in mouse models to improve memory and neurogenesis, the investigators noted.

A previous study showed that exposure to individual essential oils for 30 minutes a day over 3 months induced neurogenesis in the olfactory bulb and the hippocampus.

“The olfactory system is the only sense that has a direct ‘superhighway’ input to the memory centers of the brain; all the other senses have to reach those brain areas through what you might call the ‘side streets’ of the brain, and consequently they have much less impact on maintaining the health of those memory centers,” Dr. Leon said.

When olfaction is compromised, “the memory centers of the brain start to deteriorate and, conversely, when people are given olfactory enrichment, their memory areas become larger and more functional,” he added.

Olfactory dysfunction is the first symptom of Alzheimer’s disease (AD) and is also found in virtually all neurological and psychiatric disorders.

“I’ve counted 68 of them – including anorexia, anxiety, [attention-deficit/hyperactivity disorder], depression, epilepsy, and stroke. In fact, by mid-life, your all-cause mortality can be predicted by your ability to smell things,” Dr. Leon said.

Dr. Leon and colleagues previously developed an effective treatment for autism using environmental enrichment that focused on odor stimulation, along with stimulating other senses. “We then considered the possibility that olfactory enrichment alone might improve brain function.”
 

Rose, orange, eucalyptus …

For the study, the researchers randomly assigned 43 older adults, aged 60-85 years, to receive either nightly exposure to essential oil scents delivered via a diffuser (n = 20; mean [SD] age, 70.1 [6.6] years) or to a sham control with only trace amounts of odorants (n = 23; mean age, 69.2 [7.1] years) for a period of 6 months.

The intervention group was exposed to a single odorant, delivered through a diffuser, for 2 hours nightly, rotating through seven pleasant aromas each week. They included rose, orange, eucalyptus, lemon, peppermint, rosemary, and lavender scents.

All participants completed a battery of tests at baseline, including the Mini-Mental State Examination (MMSE), which confirmed normal cognitive functioning. At baseline and after a 6-month follow-up, participants completed the Rey Auditory Verbal Learning Test (RAVLT) as well as three subsets of the Wechsler Adult Intelligence Scale–Third Edition (WAIS-III).

Olfactory system function was assessed using “Sniffin’ Sticks,” allowing the researchers to determine whether olfactory enrichment enhanced olfactory performance.

Participants underwent fMRI at baseline and again at 6 months.

Results showed a clear, statistically significant 226% difference between enriched and control older adults in performance on the RAVLT, which evaluates learning and memory (timepoint × group interaction; F = 6.63; P = .02; Cohen’s d = 1.08, a “large” effect size).

They also found a significant change in the mean diffusivity of the left uncinate fasciculus in the enriched group compared with the controls (timepoint × group interaction; F = 4.39; P = .043; η²p = .101, a “medium-size” effect).
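
Cohen's d, the effect-size measure quoted above, is the between-group difference expressed in units of the pooled standard deviation. A minimal sketch (the group means, SDs, and change scores below are hypothetical, since the article does not report the raw RAVLT data; only the group sizes, 20 and 23, come from the study):

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference using the pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Hypothetical RAVLT change scores: enriched group (n=20) vs. controls (n=23).
print(round(cohens_d(3.1, 2.6, 20, 0.2, 2.7, 23), 2))
```

By the usual convention, d ≈ 0.2 is small, 0.5 medium, and 0.8 large, which is why the reported d = 1.08 counts as a large effect.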

The uncinate fasciculus is a “major pathway” connecting the basolateral amygdala and the entorhinal cortex to the prefrontal cortex. This pathway deteriorates in aging and in AD and “has been suggested to play a role in mediating episodic memory, language, socio-emotional processing, and selecting among competing memories during retrieval.”

No significant differences were found between the groups in olfactory ability.

Limitations of the study include its small sample size. The investigators hope the findings will “stimulate larger scale clinical trials systematically testing the therapeutic efficacy of olfactory enrichment in treating memory loss in older adults.”
 

 

 

Exciting but preliminary

Commenting for this article, Donald Wilson, PhD, professor of child and adolescent psychiatry and of neuroscience and physiology, the Child Study Center, NYU Langone Medical Center, New York, said that multiple studies have “demonstrated that problems with sense of smell are associated with and sometimes can precede other symptoms for many disorders, including AD, Parkinson’s disease, and depression.”

Recent work has suggested that this relationship can be “bidirectional” – for example, losing one’s sense of smell might promote depression, while depressive disorder might lead to impaired smell, according to Dr. Wilson, also director and senior research scientist, the Emotional Brain Institute, Nathan Kline Institute for Psychiatric Research. He was not involved with the study.

This “two-way interaction” may raise the possibility that “improving olfaction could impact nonolfactory disorders.”

This paper “brings together” previous research findings to show that odors during bedtime can improve some aspects of cognitive function and circuits that are known to be important for memory and cognition – which Dr. Wilson called “a very exciting, though relatively preliminary, finding.”

A caveat is that several measures of cognitive function were assessed and only one (verbal memory) showed clear improvement.

Nevertheless, there’s “very strong interest now in the olfactory and nonolfactory aspects of odor training and this training expands the training possibilities to sleep. This could be a powerful tool for cognitive improvement and/or rescue if follow-up studies support these findings,” Dr. Wilson said.

A version of this article appeared on Medscape.com.

FROM FRONTIERS IN NEUROSCIENCE


Considering the true costs of clinical trials

Article Type
Changed
Wed, 08/09/2023 - 13:08

This transcript has been edited for clarity.

We need to think about the ways that participating in clinical trials results in increased out-of-pocket costs to our patients and how that limits the ability of marginalized groups to participate. That should be a problem for us.

There are many subtle and some egregious ways that participating in clinical trials can result in increased costs. We may ask patients to come to the clinic more frequently. That may mean costs for transportation, wear and tear on your car, and gas prices. It may also mean that if you work in a job where you don’t have time off, and if you’re not at work, you don’t get paid. That’s a major hit to your take-home pay.

We also need to take a closer and more honest look at our study budgets and what we consider standard of care. Now, this becomes a slippery slope because there are clear recommendations on which we would all agree, but there are also differences of practice and differences of opinion.

How often should patients with advanced disease, who clinically are doing well, have scans to evaluate their disease status and look for subtle evidence of progression? Are laboratory studies part of the follow-up in patients in the adjuvant setting? Do you really need a urinalysis in somebody who’s going to be starting chemotherapy? Do you need an EKG if you’re going to be giving them a drug that has no potential cardiac toxicity and for which QTc prolongation is not a concern?

Those are often included in our clinical trials. In some cases, they might be paid for by the trial. In other cases, they’re billed to the insurance provider, which means they’ll contribute to deductibles and copays will apply. It is very likely that they will cost your patient something out of pocket.

Now, this becomes important because many of our consent forms specifically say that things done only for the study are paid for by the study. How we define standard of care becomes vitally important. These issues have rarely been linked in this way. It is time for us to tackle this problem and think about how we financially support the additional costs of care that can be real barriers for patients to participate in clinical trials.

Clinical trials are how we make progress. The more patients who are able to participate in clinical trials, the better it is for all of us and all our future patients.

Kathy D. Miller, MD, is associate director of clinical research and codirector of the breast cancer program at the Melvin and Bren Simon Cancer Center at Indiana University, Indianapolis. She disclosed no relevant conflicts of interest.
 

A version of this article first appeared on Medscape.com.


Prioritize nutrients, limit ultraprocessed food in diabetes

Article Type
Changed
Wed, 08/09/2023 - 12:57

In a large cohort of older adults with type 2 diabetes in Italy, those with the highest intake of ultraprocessed food and beverages (UPF) were more likely to die of all causes or cardiovascular disease (CVD) within a decade than those with the lowest intake – independent of adherence to a healthy Mediterranean diet.

Adults in the top quartile of UPF intake had a 64% increased risk of all-cause death and a 2.5-fold increased risk of CVD death during follow-up, compared with those in the lowest quartile, after adjusting for variables including Mediterranean diet score.

These findings from the Moli-sani study by Marialaura Bonaccio, PhD, from the Institute for Research, Hospitalization and Healthcare (IRCCS) Neuromed, in Pozzilli, Italy, and colleagues, were published online in the American Journal of Clinical Nutrition.

“Dietary recommendations for prevention and management of type 2 diabetes almost exclusively prioritize consumption of nutritionally balanced foods that are the source of fiber [and] healthy fats and [are] poor in free sugars, and promote dietary patterns – such as the Mediterranean diet and the DASH diet – that place a large emphasis on food groups (for example, whole grains, legumes, nuts, fruits, and vegetables) regardless of food processing,” the researchers note.

The research suggests that “besides prioritizing the adoption of a diet based on nutritional requirements, dietary guidelines for the management of type 2 diabetes should also recommend limiting UPF,” they conclude.

“In addition to the adoption of a diet based on well-known nutritional requirements, dietary recommendations should also suggest limiting the consumption of ultraprocessed foods as much as possible,” Giovanni de Gaetano, MD, PhD, president of IRCCS Neuromed, echoed in a press release from the institute.

“In this context, and not only for people with diabetes, the front-of-pack nutrition labels should also include information on the degree of food processing,” he observed.

Caroline M. Apovian, MD, who was not involved with the study, agrees that it is wise to limit consumption of UPF.

However, we need more research to better understand which components of UPF are harmful and the biologic mechanisms involved, said Dr. Apovian, who is codirector, Center for Weight Management and Wellness, Brigham and Women’s Hospital, and a professor of medicine at Harvard Medical School, both in Boston, in an interview.

She noted that in a randomized crossover trial, 20 participants instructed to eat as much or as little as they wanted ate more and gained weight during 2 weeks of a diet high in UPF, compared with 2 weeks of an unprocessed diet matched for presented calories, carbohydrate, sugar, fat, sodium, and fiber.
 

Ultraprocessed foods classed according to Nova system

UPF is “made mostly or entirely from substances derived from foods and additives, using a series of processes and containing minimal whole foods,” and such foods “are usually nutrient-poor, high in calories, added sugar, sodium, and unhealthy fats,” the Italian researchers write.

High intake of UPF, they add, may exacerbate health risks in people with type 2 diabetes, who are already at higher risk of premature mortality, mainly due to diabetes-related complications.

The researchers analyzed data from a subset of patients in the Moli-sani study of environmental and genetic factors underlying disease, which enrolled 24,325 individuals aged 35 and older who lived in Molise, in central-southern Italy, in 2005-2010.

The current analysis included 1,065 participants in Moli-sani who had type 2 diabetes at baseline and completed a food frequency questionnaire by which participants reported their consumption of 188 foods and beverages in the previous 12 months.

Participants were a mean age of 65 years, and 60% were men.

Most UPF intake was from processed meat (22.4%), crispbread/rusks (16.6%), nonhomemade pizza (11.2%), and cakes, pies, pastries, and puddings (8.8%).

Researchers categorized foods and beverages into four groups with increasing degrees of processing, based on the Nova Food Classification System:

  • Group 1: Fresh or minimally processed foods and beverages (for example, fruit, meat, milk).
  • Group 2: Processed culinary ingredients (for example, oils, butter).
  • Group 3: Processed foods and beverages (for example, canned fish, bread).
  • Group 4: UPF (22 foods and beverages including carbonated drinks, processed meats, sweet or savory packaged snacks, margarine, and foods and beverages with artificial sweeteners).

Participants were divided into four quartiles based on UPF consumption.

The mean percentage of UPF consumption out of total food and beverage intake was 2.8%, 5.2%, 7.7%, and 14.4% for quartiles 1, 2, 3, and 4, respectively. By sex, the quartile 1 cutoffs were < 4.7% for women and < 3.7% for men, and the quartile 4 cutoffs were ≥ 10.5% for women and ≥ 9% for men.
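The quartile construction described above can be sketched in a few lines; the UPF-share percentages here are invented for illustration and are not the Moli-sani data.

```python
import statistics

# Hypothetical UPF shares (% of total food and beverage intake)
# for a handful of participants.
upf_share = [1.5, 2.8, 3.9, 4.8, 5.2, 6.1, 7.7, 9.0, 11.2, 14.4, 16.0, 20.3]

# statistics.quantiles with n=4 returns the three cut points that
# divide the sample into four equal-sized groups.
cuts = statistics.quantiles(upf_share, n=4)

def quartile(value, cuts):
    """Return 1-4: which quartile a value falls into."""
    return 1 + sum(value > c for c in cuts)

groups = [quartile(x, cuts) for x in upf_share]
```

In the study the same assignment was done separately by sex, which is why the quartile cutoffs differ for women and men.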

Participants with the highest UPF intake were younger (mean age, 63 vs. 67 years) but otherwise had similar characteristics as other participants.

During a median follow-up of 11.6 years, 308 participants died from all causes, including 129 who died from CVD.

Compared with participants with the lowest intake of UPF (quartile 1), those with the highest intake (quartile 4) had a higher risk of all-cause mortality (hazard ratio, 1.70) and CVD mortality (HR, 2.64) during follow-up, after multivariable adjustment. The analysis adjusted for sex, age, energy intake, residence, education, housing, smoking, body mass index, leisure-time physical activity, history of cancer or cardiovascular disease, hypertension, hyperlipidemia, aspirin use, years since type 2 diabetes diagnosis, and special diet for blood glucose control.

After further adjusting for Mediterranean diet score, the risk of all-cause and CVD mortality during follow-up for patients with the highest versus lowest intake of UPF remained similar (HR, 1.64 and 2.55, respectively).
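The percentage phrasing used earlier in the article maps onto these hazard ratios by simple arithmetic, as this one-line sketch shows:

```python
# A hazard ratio of 1.64 corresponds to a (1.64 - 1) x 100 = 64%
# increased risk; 2.55 corresponds to a roughly 2.5-fold risk.
def hr_to_percent_increase(hr):
    return round((hr - 1) * 100, 1)

print(hr_to_percent_increase(1.64))  # 64.0
print(hr_to_percent_increase(2.55))  # 155.0
```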

There was a linear dose–response relationship between UPF and all-cause and CVD mortality.

Increasing intake of fruit drinks, carbonated drinks, and salty biscuits was associated with higher all-cause and CVD mortality rates, and consumption of stock cubes and margarine was further related to higher CVD death.

The researchers acknowledge that the study was observational, and therefore cannot determine cause and effect, and was not designed to specifically collect dietary data according to the Nova classification. The findings may not be generalizable to other populations.

The analysis was partly funded by grants from the AIRC and Italian Ministry of Health. The authors have reported no relevant financial relationships.

A version of this article first appeared on Medscape.com.

Publications
Topics
Sections

In a large cohort of older adults with type 2 diabetes in Italy, those with the highest intake of ultraprocessed food and beverages (UPF) were more likely to die of all causes or cardiovascular disease (CVD) within a decade than those with the lowest intake – independent of adherence to a healthy Mediterranean diet.

Adults in the top quartile of UPF intake had a 64% increased risk of all-cause death and a 2.5-fold increased risk of CVD death during follow-up, compared with those in the lowest quartile, after adjusting for variables including Mediterranean diet score.

These findings from the Moli-sani study by Marialaura Bonaccio, PhD, from the Institute for Research, Hospitalization and Healthcare (IRCCS) Neuromed, in Pozzilli, Italy, and colleagues, were published online in the American Journal of Clinical Nutrition.

“Dietary recommendations for prevention and management of type 2 diabetes almost exclusively prioritize consumption of nutritionally balanced foods that are the source of fiber [and] healthy fats and [are] poor in free sugars, and promote dietary patterns – such as the Mediterranean diet and the DASH diet – that place a large emphasis on food groups (for example, whole grains, legumes, nuts, fruits, and vegetables) regardless of food processing,” the researchers note.

The research suggests that “besides prioritizing the adoption of a diet based on nutritional requirements, dietary guidelines for the management of type 2 diabetes should also recommend limiting UPF,” they conclude.

“In addition to the adoption of a diet based on well-known nutritional requirements, dietary recommendations should also suggest limiting the consumption of ultraprocessed foods as much as possible,” Giovanni de Gaetano, MD, PhD, president, IRCCS Neuromed, echoed, in a press release from the institute.

“In this context, and not only for people with diabetes, the front-of-pack nutrition labels should also include information on the degree of food processing,” he observed.

Caroline M. Apovian, MD, who was not involved with the study, agrees that it is wise to limit consumption of UPF.

However, more research is needed to better understand which components of UPF are harmful and the biologic mechanisms involved, said Dr. Apovian, who is codirector of the Center for Weight Management and Wellness at Brigham and Women’s Hospital and a professor of medicine at Harvard Medical School, both in Boston, in an interview with this news organization.

She noted that in a randomized crossover trial in 20 patients who were instructed to eat as much or as little as they wanted, people ate more and gained weight during 2 weeks of a diet high in UPF, compared with 2 weeks of an unprocessed diet matched for presented calories, carbohydrate, sugar, fat, sodium, and fiber.
 

Ultraprocessed foods classified according to Nova system

UPF is “made mostly or entirely from substances derived from foods and additives, using a series of processes and containing minimal whole foods,” and such products “are usually nutrient-poor, high in calories, added sugar, sodium, and unhealthy fats,” the Italian researchers write.

High intake of UPF, they add, may exacerbate health risks in people with type 2 diabetes, who are already at higher risk of premature mortality, mainly due to diabetes-related complications.

The researchers analyzed data from a subset of patients in the Moli-sani study of environmental and genetic factors underlying disease, which enrolled 24,325 individuals aged 35 and older who lived in Molise, in central-southern Italy, in 2005-2010.

The current analysis included 1,065 Moli-sani participants who had type 2 diabetes at baseline and who completed a food frequency questionnaire reporting their consumption of 188 foods and beverages over the previous 12 months.

Participants had a mean age of 65 years, and 60% were men.

Most UPF intake was from processed meat (22.4%), crispbread/rusks (16.6%), nonhomemade pizza (11.2%), and cakes, pies, pastries, and puddings (8.8%).

Researchers categorized foods and beverages into four groups with increasing degrees of processing, based on the Nova Food Classification System:

  • Group 1: Fresh or minimally processed foods and beverages (for example, fruit, meat, milk).
  • Group 2: Processed culinary ingredients (for example, oils, butter).
  • Group 3: Processed foods and beverages (for example, canned fish, bread).
  • Group 4: UPF (22 foods and beverages including carbonated drinks, processed meats, sweet or savory packaged snacks, margarine, and foods and beverages with artificial sweeteners).

Participants were divided into four quartiles based on UPF consumption.

The mean percentage of UPF consumption out of total food and beverage intake was 2.8%, 5.2%, 7.7%, and 14.4% for quartiles 1, 2, 3, and 4, respectively. By sex, these rates for quartile 1 were < 4.7% for women and < 3.7% for men, and for quartile 4 were ≥ 10.5% for women and ≥ 9% for men.

Participants with the highest UPF intake were younger (mean age, 63 vs. 67 years) but otherwise had characteristics similar to those of the other participants.

During a median follow-up of 11.6 years, 308 participants died from all causes, including 129 who died from CVD.

Compared with participants with the lowest intake of UPF (quartile 1), those with the highest intake (quartile 4) had a higher risk of all-cause mortality (hazard ratio, 1.70) and CVD mortality (HR, 2.64) during follow-up, after multivariable adjustment. The analysis adjusted for sex, age, energy intake, residence, education, housing, smoking, body mass index, leisure-time physical activity, history of cancer or cardiovascular disease, hypertension, hyperlipidemia, aspirin use, years since type 2 diabetes diagnosis, and special diet for blood glucose control.

After further adjusting for Mediterranean diet score, the risk of all-cause and CVD mortality during follow-up for patients with the highest versus lowest intake of UPF remained similar (HR, 1.64 and 2.55, respectively).

There was a linear dose–response relationship between UPF and all-cause and CVD mortality.

Increasing intake of fruit drinks, carbonated drinks, and salty biscuits was associated with higher all-cause and CVD mortality rates, and consumption of stock cubes and margarine was additionally associated with higher CVD mortality.

The researchers acknowledge that the study was observational, and therefore cannot determine cause and effect, and was not designed to specifically collect dietary data according to the Nova classification. The findings may not be generalizable to other populations.

The analysis was partly funded by grants from the AIRC and Italian Ministry of Health. The authors have reported no relevant financial relationships.

A version of this article first appeared on Medscape.com.



Does tamoxifen use increase the risk of endometrial cancer in premenopausal patients?

Article Type
Changed
Thu, 08/10/2023 - 11:39

Ryu KJ, Kim MS, Lee JY, et al. Risk of endometrial polyps, hyperplasia, carcinoma, and uterine cancer after tamoxifen treatment in premenopausal women with breast cancer. JAMA Netw Open. 2022;5:e2243951.

EXPERT COMMENTARY

Tamoxifen is a selective estrogen receptor modulator (SERM) approved by the US Food and Drug Administration (FDA) both for adjuvant treatment of invasive or metastatic breast cancer with hormone receptor (HR)–positive tumors (duration, 5 to 10 years) and for reduction of future breast cancers in certain high-risk individuals (duration, 5 years). It is also occasionally used for non–FDA-approved indications, such as cyclic mastodynia.

Because breast cancer is among the most frequently diagnosed cancers in the United States (297,790 new cases expected in 2023) and approximately 80% are HR-positive tumors that will require hormonal adjuvant therapy,1 physicians and other gynecologic clinicians should have a working understanding of tamoxifen, including the risks and benefits associated with its use. Among the recognized serious adverse effects of tamoxifen is the increased risk of endometrial cancer in menopausal patients. This adverse effect creates a potential conundrum for clinicians who may be managing patients with tamoxifen to treat or prevent breast cancer, while also increasing the risk of another cancer. Prior prospective studies of tamoxifen have demonstrated a statistically and clinically significant increased risk of endometrial cancer in menopausal patients but not in premenopausal patients.

A recent study challenged those previous findings, suggesting that the risk of endometrial cancer is similar in both premenopausal and postmenopausal patients taking tamoxifen for treatment of breast cancer.2

Details of the study

The study by Ryu and colleagues used data from the Korean National Health Insurance Service, which covers 97% of the Korean population.2 The authors selected patients being treated for invasive breast cancer from January 1, 2003, through December 31, 2018, who were between the ages of 20 and 50 years when the breast cancer diagnosis was first made. Patients with a diagnostic code entered into their electronic health record that was consistent with menopausal status were excluded, along with any patients with a current or prior history of aromatase inhibitor use (for which one must be naturally, medically, or surgically menopausal to use). Based on these exclusions, the study cohort was then assumed to be premenopausal.

The study group included patients diagnosed with invasive breast cancer who were treated with adjuvant hormonal therapy with tamoxifen (n = 34,637), and the control group included patients with invasive breast cancer who were not treated with adjuvant hormonal therapy (n = 43,683). The primary study end point was the finding of endometrial or uterine pathology, including endometrial polyps, endometrial hyperplasia, endometrial cancer, and other uterine malignant neoplasms not originating in the endometrium (for example, uterine sarcomas).

Because this was a retrospective cohort study that included all eligible patients, the 2 groups were not matched. The treatment group was significantly older, had a higher body mass index (BMI) and a larger waist circumference, was more likely to be hypertensive, and included more patients with diabetes than the control group—all known risk factors for endometrial cancer. However, after adjusting for these 4 factors, an increased risk of endometrial cancer remained in the tamoxifen group compared with the control group (hazard ratio [HR], 3.77; 95% confidence interval [CI], 3.04–4.66). In addition, tamoxifen use was independently associated with an increased risk of endometrial polyps (HR, 3.90; 95% CI, 3.65–4.16), endometrial hyperplasia (HR, 5.56; 95% CI, 5.06–6.12), and other uterine cancers (HR, 2.27; 95% CI, 1.54–3.33). In a subgroup analysis, the risk for endometrial cancer was not higher in patients treated with tamoxifen for more than 5 years compared with those treated for 5 years or less.

Study strengths and limitations

Major strengths of this study were the large number of participants (n = 34,637 tamoxifen; n = 43,683 control), the long duration of follow-up (up to 15 years), and the use of a single data source covering nearly the entire population of Korea. While the 2 study populations (tamoxifen vs no tamoxifen) were initially unbalanced in terms of endometrial cancer risk (age, BMI, concurrent diagnoses of hypertension and diabetes), the authors corrected for this with a multivariate analysis.

Furthermore, while the likely homogeneity of the study population may limit the generalizability of the results, the authors noted that Korean patients have a higher tendency toward early-onset breast cancer. This observation could make this cohort particularly well suited for a study of the premenopausal effects of tamoxifen.

Limitations. These data are provocative as they conflict with level 1 evidence based on multiple well-designed, double-blind, placebo-controlled randomized trials in which tamoxifen use for 5 years did not demonstrate a statistically increased risk of endometrial cancer in patients younger than age 50.3-5 Because of the importance of the question and the implications for many premenopausal women being treated with tamoxifen, we carefully evaluated the study methodology to better understand this discrepancy.


Methodological concerns

In the study by Ryu and colleagues, we found the definition of premenopausal to be problematic. Ultimately, if patients did not have a diagnosis of menopause in the problem summary list, they were assumed to be premenopausal if they were between the ages of 20 and 50 and not taking an aromatase inhibitor. However, important considerations in this population include the cancer stage and treatment regimens that can and do directly impact menopausal status.

Data demonstrate that early-onset breast cancer tends to be associated with more biologically aggressive characteristics that frequently require adjuvant or neoadjuvant chemotherapy.6,7 This chemotherapy regimen most commonly comprises Adriamycin (doxorubicin), paclitaxel, and cyclophosphamide. Cyclophosphamide is an alkylating agent and a known gonadotoxin, and it often renders patients either temporarily or permanently menopausal through chemotherapy-induced ovarian failure. Prior studies have demonstrated that approximately 90% of patients in their 40s treated with cyclophosphamide-containing chemotherapy for breast cancer will experience chemotherapy-induced amenorrhea (CIA).8 Although some patients in their 40s with CIA will resume ovarian function, the majority will not.8,9

Because CIA cannot be diagnosed reliably on clinical grounds alone, blood levels of estradiol and follicle-stimulating hormone are often necessary for confirmation, and even then the menopausal state may be only temporary. One prospective analysis of 4 randomized neoadjuvant/adjuvant breast cancer trials used this approach and demonstrated that 85.1% of the study cohort experienced chemotherapy-induced ovarian failure at the end of treatment, with some patients fluctuating back to premenopausal hormone levels at 6 and 12 months.10

Furthermore, in the study by Ryu and colleagues, there is no description or confirmation of menstrual patterns in the study group to support the diagnosis of ongoing premenopausal status. Data on CIA and loss of ovarian function, therefore, are critical to the accurate categorization of patients as premenopausal or menopausal in this study. The study also relied on consistent and accurate recording of appropriate medical codes to capture a patient’s menopausal status, which is unclear for this particular population and health system.

In evaluating prior research, multiple studies demonstrated no increased risk of endometrial cancer in premenopausal women taking tamoxifen for breast cancer prevention (TABLE).3,5 These breast cancer prevention trials have several major advantages in assessing tamoxifen-associated endometrial cancer risk for premenopausal patients compared with the current study:

  • Both studies were prospective double-blind, placebo-controlled randomized clinical breast cancer prevention trials with carefully designed and measured outcomes.
  • Since these were breast cancer prevention trials, administration of gonadotoxic chemotherapy was not a concern. As a result, miscategorizing patients with chemotherapy-induced menopause as premenopausal would not be expected, and premature menopause would not be expected at a higher rate than the general population.
  • Careful histories were required prior to study entry and throughout the study, including data on menopausal status and menstrual and uterine bleeding histories.11

 

In these prevention trials, the effect of tamoxifen on uterine pathology demonstrated repeatable evidence of a statistically significant increased risk of endometrial cancer in postmenopausal women, but no similar increased risk in premenopausal women (TABLE).3,5 Interestingly, the magnitude of the endometrial cancer risk found in the premenopausal patients in the study by Ryu and colleagues (HR, 3.77) is comparable to that of the menopausal group in the prevention trials, raising concern that many or most of the patients in the treatment group assumed to be premenopausal may in fact have been menopausal for some or all of the time they were taking tamoxifen, for the reasons discussed above. ●

WHAT THIS EVIDENCE MEANS FOR PRACTICE

While the data from the study by Ryu and colleagues are provocative, the finding that premenopausal women are at an increased risk of endometrial cancer does not agree with the results of well-designed previous trials. Our concerns about categorization bias (that is, women in the treatment group may have been menopausal for some or all of the time they were taking tamoxifen but were not formally diagnosed) make the conclusion that endometrial cancer risk is increased in truly premenopausal women somewhat specious. In a Committee Opinion (last endorsed in 2020), the American College of Obstetricians and Gynecologists (ACOG) stated the following: “Postmenopausal women taking tamoxifen should be closely monitored for symptoms of endometrial hyperplasia or cancer. Premenopausal women treated with tamoxifen have no known increased risk of uterine cancer and as such require no additional monitoring beyond routine gynecologic care.”12 Based on multiple previously published studies with solid level 1 evidence and the challenges with the current study design, we continue to agree with this ACOG statement.

VERSHA PLEASANT, MD, MPH; MARK D. PEARLMAN, MD

References
  1. Siegel RL, Miller KD, Wagle NS, et al. Cancer statistics, 2023. CA Cancer J Clin. 2023;73:17-48.
  2. Ryu KJ, Kim MS, Lee JY, et al. Risk of endometrial polyps, hyperplasia, carcinoma, and uterine cancer after tamoxifen treatment in premenopausal women with breast cancer. JAMA Netw Open. 2022;5:e2243951.
  3.  Fisher B, Costantino JP, Wickerham DL, et al. Tamoxifen for prevention of breast cancer: report of the National Surgical Adjuvant Breast and Bowel Project P-1 Study. J Natl Cancer Inst. 1998;90:1371-1388.
  4.  Fisher B, Costantino JP, Wickerham DL, et al. Tamoxifen for the prevention of breast cancer: current status of the National Surgical Adjuvant Breast and Bowel Project P-1 Study. J Natl Cancer Inst. 2005;97:1652-1662.
  5.  Iqbal J, Ginsburg OM, Wijeratne TD, et al. Endometrial cancer and venous thromboembolism in women under age 50 who take tamoxifen for prevention of breast cancer: a systematic review. Cancer Treat Rev. 2012;38:318-328.
  6.  Kumar R, Abreu C, Toi M, et al. Oncobiology and treatment of breast cancer in young women. Cancer Metastasis Rev. 2022;41:749-770.
  7. Tesch ME, Partidge AH. Treatment of breast cancer in young adults. Am Soc Clin Oncol Educ Book. 2022;42:1-12.
  8.  Han HS, Ro J, Lee KS, et al. Analysis of chemotherapy-induced amenorrhea rates by three different anthracycline and taxane containing regimens for early breast cancer. Breast Cancer Res Treat. 2009;115:335-342.
  9.  Henry NL, Xia R, Banerjee M, et al. Predictors of recovery of ovarian function during aromatase inhibitor therapy. Ann Oncol. 2013;24:2011-2016.
  10.  Furlanetto J, Marme F, Seiler S, et al. Chemotherapy-induced ovarian failure in young women with early breast cancer: prospective analysis of four randomised neoadjuvant/ adjuvant breast cancer trials. Eur J Cancer. 2021;152: 193-203.
  11. Runowicz CD, Costantino JP, Wickerham DL, et al. Gynecologic conditions in participants in the NSABP breast cancer prevention study of tamoxifen and raloxifene (STAR). Am J Obstet Gynecol. 2011;205:535.e1-535.e5.
  12.  American College of Obstetricians and Gynecologists. Committee opinion no. 601: tamoxifen and uterine cancer. Obstet Gynecol. 2014;123:1394-1397.
Author and Disclosure Information

Versha Pleasant, MD, MPH, is Assistant Professor and Director, Center for Cancer Genetics and Breast Health, University of Michigan Health System, Ann Arbor.

Mark D. Pearlman, MD, is Professor Emeritus and Founder, Center for Cancer Genetics and Breast Health, University of Michigan Health System, Ann Arbor.

 

The authors report no financial relationships relevant to this article.

Issue: OBG Management - 35(8), pages 17-18, 20-21


Due to the lack of reliability in diagnosing CIA, blood levels of estradiol and follicle stimulating hormone are often necessary for confirmation and, even so, may be only temporary. One prospective analysis of 4 randomized neoadjuvant/adjuvant breast cancer trials used this approach and demonstrated that 85.1% of the study cohort experienced chemotherapy-induced ovarian failure at the end of their treatment, with some fluctuating back to premenopausal hormonal levels at 6 and 12 months.10

Furthermore, in the study by Ryu and colleagues, there is no description or confirmation of menstrual patterns in the study group to support the diagnosis of ongoing premenopausal status. Data on CIA and loss of ovarian function, therefore, are critical to the accurate categorization of patients as premenopausal or menopausal in this study. The study also relied on consistent and accurate recording of appropriate medical codes to capture a patient’s menopausal status, which is unclear for this particular population and health system.

In evaluating prior research, multiple studies demonstrated no increased risk of endometrial cancer in premenopausal women taking tamoxifen for breast cancer prevention (TABLE).3,5 These breast cancer prevention trials have several major advantages in assessing tamoxifen-associated endometrial cancer risk for premenopausal patients compared with the current study:

  • Both studies were prospective double-blind, placebo-controlled randomized clinical breast cancer prevention trials with carefully designed and measured outcomes.
  • Since these were breast cancer prevention trials, administration of gonadotoxic chemotherapy was not a concern. As a result, miscategorizing patients with chemotherapy-induced menopause as premenopausal would not be expected, and premature menopause would not be expected at a higher rate than the general population.
  • Careful histories were required prior to study entry and throughout the study, including data on menopausal status and menstrual and uterine bleeding histories.11

 

In these prevention trials, the effect of tamoxifen on uterine pathology demonstratedrepeatable evidence that there was a statistically significant increased risk of endometrial cancer in postmenopausal women, but there was no similar increased risk of endometrial cancer in premenopausal women (TABLE).3,5 Interestingly, the magnitude of the endometrial cancer risk found in the premenopausal patients in the study by Ryu and colleagues (RR, 3.77) is comparable to that of the menopausal group in the prevention trials, raising concern that many or most of the patients in the treatment group assumed to be premenopausal may have indeed been “menopausal” for some or all the time they were taking tamoxifen due to the possible aforementioned reasons. ●

WHAT THIS EVIDENCE MEANS FOR PRACTICE

While the data from the study by Ryu and colleagues are provocative, the findings that premenopausal women are at an increased risk of endometrial cancer do not agree with those of well-designed previous trials. Our concerns about categorization bias (that is, women in the treatment group may have been menopausal for some or all the time they were taking tamoxifen but were not formally diagnosed) make the conclusion that endometrial cancer risk is increased in truly premenopausal women somewhat specious. In a Committee Opinion (last endorsed in 2020), the American College of Obstetricians and Gynecologists (ACOG) stated the following: “Postmenopausal women taking tamoxifen should be closely monitored for symptoms of endometrial hyperplasia or cancer. Premenopausal women treated with tamoxifen have no known increased risk of uterine cancer and as such require no additional monitoring beyond routine gynecologic care.12 Based on multiple previously published studies with solid level 1 evidence and the challenges with the current study design, we continue to agree with this ACOG statement.

VERSHA PLEASANT, MD, MPH; MARK D. PEARLMAN, MD

Ryu KJ, Kim MS, Lee JY, et al. Risk of endometrial polyps, hyperplasia, carcinoma, and uterine cancer after tamoxifen treatment in premenopausal women with breast cancer. JAMA Netw Open. 2022;5:e2243951.

EXPERT COMMENTARY

Tamoxifen is a selective estrogen receptor modulator (SERM) approved by the US Food and Drug Administration (FDA) for both adjuvant treatment of invasive or metastatic breast cancer with hormone receptor (HR)–positive tumors (duration, 5 to 10 years) and for reduction of future breast cancers in certain high-risk individuals (duration, 5 years). It is also occasionally used for non-FDA approved indications, such as cyclic mastodynia.

Because breast cancer is among the most frequently diagnosed cancers in the United States (297,790 new cases expected in 2023) and approximately 80% are HR-positive tumors that will require hormonal adjuvant therapy,1 physicians and other gynecologic clinicians should have a working understanding of tamoxifen, including the risks and benefits associated with its use. Among the recognized serious adverse effects of tamoxifen is the increased risk of endometrial cancer in menopausal patients. This adverse effect creates a potential conundrum for clinicians who may be managing patients with tamoxifen to treat or prevent breast cancer, while also increasing the risk of another cancer. Prior prospective studies of tamoxifen have demonstrated a statistically and clinically significant increased risk of endometrial cancer in menopausal patients but not in premenopausal patients.

A recent study challenged those previous findings, suggesting that the risk of endometrial cancer is similar in both premenopausal and postmenopausal patients taking tamoxifen for treatment of breast cancer.2

Details of the study

The study by Ryu and colleagues used data from the Korean National Health Insurance Service, which covers 97% of the Korean population.2 The authors selected patients treated for invasive breast cancer from January 1, 2003, through December 31, 2018, who were between the ages of 20 and 50 years at the time of the breast cancer diagnosis. Patients whose electronic health record contained a diagnostic code consistent with menopause were excluded, as were patients with current or prior aromatase inhibitor use (which requires that a patient be naturally, medically, or surgically menopausal). Based on these exclusions, the remaining cohort was assumed to be premenopausal.

The study group included patients diagnosed with invasive breast cancer who were treated with adjuvant hormonal therapy with tamoxifen (n = 34,637), and the control group included patients with invasive breast cancer who were not treated with adjuvant hormonal therapy (n = 43,683). The primary study end point was the finding of endometrial or uterine pathology, including endometrial polyps, endometrial hyperplasia, endometrial cancer, and other uterine malignant neoplasms not originating in the endometrium (for example, uterine sarcomas).

Because this was a retrospective cohort study that included all eligible patients, the 2 groups were not matched. The treatment group was significantly older, had a higher body mass index (BMI) and a larger waist circumference, and included more patients with hypertension and diabetes than the control group, all of which are known risk factors for endometrial cancer. However, after adjusting for age, BMI, hypertension, and diabetes, an increased risk of endometrial cancer remained in the tamoxifen group compared with the control group (hazard ratio [HR], 3.77; 95% confidence interval [CI], 3.04–4.66). In addition, tamoxifen use was independently associated with an increased risk of endometrial polyps (HR, 3.90; 95% CI, 3.65–4.16), endometrial hyperplasia (HR, 5.56; 95% CI, 5.06–6.12), and other uterine cancers (HR, 2.27; 95% CI, 1.54–3.33). In a subgroup analysis, the risk for endometrial cancer was not higher in patients treated with tamoxifen for more than 5 years than in those treated for 5 years or less.
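The adjusted estimates above can be sanity-checked with simple arithmetic: a Wald 95% confidence interval for a hazard ratio is symmetric on the log scale, so the standard error and z statistic behind the published numbers can be recovered from the interval alone. A minimal sketch using the adjusted endometrial cancer HR reported in the study (the `log_scale_stats` helper is illustrative, not part of the study's analysis):

```python
import math

def log_scale_stats(hr, ci_lower, ci_upper, z_crit=1.96):
    """Recover the log-scale standard error and Wald z statistic
    from a published hazard ratio and its 95% CI."""
    se = (math.log(ci_upper) - math.log(ci_lower)) / (2 * z_crit)
    z = math.log(hr) / se
    return se, z

# Adjusted endometrial cancer risk reported by Ryu and colleagues:
# HR 3.77 (95% CI, 3.04-4.66)
se, z = log_scale_stats(3.77, 3.04, 4.66)

# A Wald CI is symmetric on the log scale, so the geometric mean of
# the bounds should reproduce the point estimate.
geometric_mean = math.sqrt(3.04 * 4.66)
print(round(se, 3), round(z, 1), round(geometric_mean, 2))
# prints: 0.109 12.2 3.76
```

The recovered z statistic of about 12 corresponds to a vanishingly small P value, and the geometric mean of the CI bounds (approximately 3.76) essentially reproduces the reported point estimate of 3.77, confirming that the published interval is internally consistent.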

Study strengths and limitations

Major strengths of this study include the large number of participants (n = 34,637 tamoxifen; n = 43,683 control), the long duration of follow-up (up to 15 years), and the use of a single data source that covers nearly the entire population of Korea. While the 2 study populations (tamoxifen vs no tamoxifen) were initially unbalanced in terms of endometrial cancer risk factors (age, BMI, concurrent diagnoses of hypertension and diabetes), the authors corrected for this imbalance with a multivariate analysis.

Furthermore, although the likely homogeneity of the study population may limit the generalizability of the results, the authors noted that Korean patients have a higher tendency toward early-onset breast cancer, which could make this cohort particularly well suited to a study of the premenopausal effects of tamoxifen.

Limitations. These data are provocative because they conflict with level 1 evidence from multiple well-designed, double-blind, placebo-controlled randomized trials in which 5 years of tamoxifen use did not demonstrate a statistically significant increase in endometrial cancer risk in patients younger than age 50.3-5 Because of the importance of the question and the implications for the many premenopausal women being treated with tamoxifen, we carefully evaluated the study methodology to better understand this discrepancy.

Methodological concerns

In the study by Ryu and colleagues, we found the definition of premenopausal to be problematic. Ultimately, if patients did not have a diagnosis of menopause in the problem summary list, they were assumed to be premenopausal if they were between the ages of 20 and 50 and not taking an aromatase inhibitor. However, in this population, cancer stage and treatment regimen are important considerations that can and do directly alter menopausal status.

Data demonstrate that early-onset breast cancer tends to be associated with more biologically aggressive characteristics that frequently require adjuvant or neoadjuvant chemotherapy.6,7 These regimens most commonly comprise Adriamycin (doxorubicin), paclitaxel, and cyclophosphamide. Cyclophosphamide is an alkylating agent and a known gonadotoxin that often renders patients either temporarily or permanently menopausal through chemotherapy-induced ovarian failure. Prior studies have demonstrated that approximately 90% of patients in their 40s treated with cyclophosphamide-containing chemotherapy for breast cancer will experience chemotherapy-induced amenorrhea (CIA).8 Although some patients in their 40s with CIA will resume ovarian function, the majority will not.8,9

Because CIA cannot be diagnosed reliably from menstrual history alone, blood levels of estradiol and follicle-stimulating hormone are often necessary for confirmation, and even then the menopausal state may prove only temporary. One prospective analysis of 4 randomized neoadjuvant/adjuvant breast cancer trials used this approach and demonstrated that 85.1% of the study cohort experienced chemotherapy-induced ovarian failure at the end of treatment, with some patients fluctuating back to premenopausal hormone levels at 6 and 12 months.10

Furthermore, in the study by Ryu and colleagues, there is no description or confirmation of menstrual patterns in the study group to support the diagnosis of ongoing premenopausal status. Data on CIA and loss of ovarian function, therefore, are critical to the accurate categorization of patients as premenopausal or menopausal in this study. The study also relied on the consistent and accurate recording of appropriate medical codes to capture a patient’s menopausal status, and the reliability of such coding in this particular population and health system is unclear.

In evaluating prior research, multiple studies demonstrated no increased risk of endometrial cancer in premenopausal women taking tamoxifen for breast cancer prevention (TABLE).3,5 These breast cancer prevention trials have several major advantages in assessing tamoxifen-associated endometrial cancer risk for premenopausal patients compared with the current study:

  • Both studies were prospective double-blind, placebo-controlled randomized clinical breast cancer prevention trials with carefully designed and measured outcomes.
  • Since these were breast cancer prevention trials, administration of gonadotoxic chemotherapy was not a concern. As a result, miscategorizing patients with chemotherapy-induced menopause as premenopausal would not be expected, and premature menopause would not be expected at a higher rate than in the general population.
  • Careful histories were required prior to study entry and throughout the study, including data on menopausal status and menstrual and uterine bleeding histories.11

 

In these prevention trials, the effect of tamoxifen on uterine pathology demonstrated repeatable evidence of a statistically significant increased risk of endometrial cancer in postmenopausal women but no similar increased risk in premenopausal women (TABLE).3,5 Interestingly, the magnitude of the endometrial cancer risk found in the premenopausal patients in the study by Ryu and colleagues (HR, 3.77) is comparable to that of the menopausal group in the prevention trials, raising concern that many or most of the patients in the treatment group assumed to be premenopausal may in fact have been menopausal for some or all of the time they were taking tamoxifen, for the reasons discussed above. ●

WHAT THIS EVIDENCE MEANS FOR PRACTICE

While the data from the study by Ryu and colleagues are provocative, the findings that premenopausal women are at an increased risk of endometrial cancer do not agree with those of well-designed previous trials. Our concerns about categorization bias (that is, women in the treatment group may have been menopausal for some or all of the time they were taking tamoxifen but were not formally diagnosed) make the conclusion that endometrial cancer risk is increased in truly premenopausal women somewhat specious. In a Committee Opinion (last endorsed in 2020), the American College of Obstetricians and Gynecologists (ACOG) stated the following: “Postmenopausal women taking tamoxifen should be closely monitored for symptoms of endometrial hyperplasia or cancer. Premenopausal women treated with tamoxifen have no known increased risk of uterine cancer and as such require no additional monitoring beyond routine gynecologic care.”12 Based on multiple previously published studies with solid level 1 evidence and the challenges with the current study design, we continue to agree with this ACOG statement.

VERSHA PLEASANT, MD, MPH; MARK D. PEARLMAN, MD

References
  1. Siegel RL, Miller KD, Wagle NS, et al. Cancer statistics, 2023. CA Cancer J Clin. 2023;73:17-48.
  2. Ryu KJ, Kim MS, Lee JY, et al. Risk of endometrial polyps, hyperplasia, carcinoma, and uterine cancer after tamoxifen treatment in premenopausal women with breast cancer. JAMA Netw Open. 2022;5:e2243951.
  3.  Fisher B, Costantino JP, Wickerham DL, et al. Tamoxifen for prevention of breast cancer: report of the National Surgical Adjuvant Breast and Bowel Project P-1 Study. J Natl Cancer Inst. 1998;90:1371-1388.
  4.  Fisher B, Costantino JP, Wickerham DL, et al. Tamoxifen for the prevention of breast cancer: current status of the National Surgical Adjuvant Breast and Bowel Project P-1 Study. J Natl Cancer Inst. 2005;97:1652-1662.
  5.  Iqbal J, Ginsburg OM, Wijeratne TD, et al. Endometrial cancer and venous thromboembolism in women under age 50 who take tamoxifen for prevention of breast cancer: a systematic review. Cancer Treat Rev. 2012;38:318-328.
  6.  Kumar R, Abreu C, Toi M, et al. Oncobiology and treatment of breast cancer in young women. Cancer Metastasis Rev. 2022;41:749-770.
  7. Tesch ME, Partridge AH. Treatment of breast cancer in young adults. Am Soc Clin Oncol Educ Book. 2022;42:1-12.
  8.  Han HS, Ro J, Lee KS, et al. Analysis of chemotherapy-induced amenorrhea rates by three different anthracycline and taxane containing regimens for early breast cancer. Breast Cancer Res Treat. 2009;115:335-342.
  9.  Henry NL, Xia R, Banerjee M, et al. Predictors of recovery of ovarian function during aromatase inhibitor therapy. Ann Oncol. 2013;24:2011-2016.
  10. Furlanetto J, Marme F, Seiler S, et al. Chemotherapy-induced ovarian failure in young women with early breast cancer: prospective analysis of four randomised neoadjuvant/adjuvant breast cancer trials. Eur J Cancer. 2021;152:193-203.
  11. Runowicz CD, Costantino JP, Wickerham DL, et al. Gynecologic conditions in participants in the NSABP breast cancer prevention study of tamoxifen and raloxifene (STAR). Am J Obstet Gynecol. 2011;205:535.e1-535.e5.
  12.  American College of Obstetricians and Gynecologists. Committee opinion no. 601: tamoxifen and uterine cancer. Obstet Gynecol. 2014;123:1394-1397.
Issue
OBG Management - 35(8)
Page Number
17-18, 20-21

How newly discovered genes might fit into obesity

Article Type
Changed
Wed, 08/09/2023 - 11:21

Newly discovered genes could explain body fat differences between men and women with obesity, as well as why some people gain excess weight in childhood.

Identifying specific genes adds to growing evidence that biology, in part, drives obesity. Researchers hope the findings will lead to effective treatments, and in the meantime add to the understanding that there are many types of obesity that come from a mix of genes and environmental factors.

Although the study is not the first to point to specific genes, “we were quite surprised by the proposed function of some of the genes we identified,” Lena R. Kaisinger, lead study investigator and a PhD student in the MRC Epidemiology Unit at the University of Cambridge (England), wrote in an email. For example, the genes also manage cell death and influence how cells respond to DNA damage. 

The investigators are not sure why genes involved in body size perform this kind of double duty, which opens avenues for future research.

The gene sequencing study was published online in the journal Cell Genomics.
 

Differences between women and men

The researchers found five new genes in females and two new genes in males linked to greater body mass index (BMI): DIDO1, KIAA1109, MC4R, PTPRG, and SLC12A5 in women, and MC4R and SLTM in men. People who recall having obesity as a child were more likely to have rare genetic changes in two other genes, OBSCN and MADD.

“The key thing is that when you see real genes with real gene names, it really makes real the notion that genetics underlie obesity,” said Lee Kaplan, MD, PhD, director of the Obesity and Metabolism Institute in Boston, who was not affiliated with the research.

Ms. Kaisinger and colleagues found these significant genetic differences by studying genomes of about 420,000 people stored in the UK Biobank, a huge biomedical database. The researchers decided to look at genes by sex and age because these are “two areas that we still know very little about,” Ms. Kaisinger said.

“We know that different types of obesity connect to different ages,” said Dr. Kaplan, who is also past president of the Obesity Society. “But what they’ve done now is find genes that are associated with particular subtypes of obesity ... some more common in one sex and some more common in different phases of life, including early onset obesity.”
 

The future is already here

Treatment for obesity based on a person’s genes already exists. For example, in June 2022, the Food and Drug Administration approved setmelanotide (Imcivree) for weight management in adults and children aged 6 years and older with specific genetic markers.

Even as encouraging as setmelanotide is to Ms. Kaisinger and colleagues, these are still early days for translating the current research findings into clinical obesity tests and potential treatment, she said.

The “holy grail,” Dr. Kaplan said, is a future where people get screened for a particular genetic profile and their provider can say something like, “You’re probably most susceptible to this type, so we’ll treat you with this particular drug that’s been developed for people with this phenotype.”

Dr. Kaplan added: “That’s exactly what we are trying to do.”

Moving forward, Ms. Kaisinger and colleagues plan to repeat the study in larger and more diverse populations. They also plan to reverse the usual road map for studies, which typically start in animals and then progress to humans.

“We plan to take the most promising gene candidates forward into mouse models to learn more about their function and how exactly their dysfunction results in obesity,” Ms. Kaisinger said. 

Three study coauthors are employees and shareholders of Adrestia Therapeutics. No other conflicts of interest were reported.

A version of this article appeared on WebMD.com.

Publications
Topics
Sections

Newly discovered genes could explain body fat differences between men and women with obesity, as well as why some people gain excess weight in childhood.

Identifying specific genes adds to growing evidence that biology, in part, drives obesity. Researchers hope the findings will lead to effective treatments, and in the meantime add to the understanding that there are many types of obesity that come from a mix of genes and environmental factors.

Although the study is not the first to point to specific genes, “we were quite surprised by the proposed function of some of the genes we identified,” Lena R. Kaisinger, lead study investigator and a PhD student in the MRC Epidemiology Unit at the University of Cambridge (England), wrote in an email. For example, the genes also manage cell death and influence how cells respond to DNA damage. 

The investigators are not sure why genes involved in body size perform this kind of double duty, which opens avenues for future research.

The gene sequencing study was published online in the journal Cell Genomics.
 

Differences between women and men

The researchers found five new genes in females and two new genes in males linked to greater body mass index (BMI): DIDO1, KIAA1109, MC4R, PTPRG and SLC12A5 in women and MC4R and SLTM in men. People who recall having obesity as a child were more likely to have rare genetic changes in two other genes, OBSCN and MADD.

“The key thing is that when you see real genes with real gene names, it really makes real the notion that genetics underlie obesity,” said Lee Kaplan, MD, PhD, director of the Obesity and Metabolism Institute in Boston, who was not affiliated with the research.

Ms. Kaisinger and colleagues found these significant genetic differences by studying genomes of about 420,000 people stored in the UK Biobank, a huge biomedical database. The researchers decided to look at genes by sex and age because these are “two areas that we still know very little about,” Ms. Kaisinger said.

“We know that different types of obesity connect to different ages,” said Dr. Kaplan, who is also past president of the Obesity Society. “But what they’ve done now is find genes that are associated with particular subtypes of obesity ... some more common in one sex and some more common in different phases of life, including early onset obesity.”
 

The future is already here

Treatment for obesity based on a person’s genes already exists. For example, in June 2022, the Food and Drug Administration approved setmelanotide (Imcivree) for weight management in adults and children aged over 6 years with specific genetic markers. 

Even as encouraging as setmelanotide is to Ms. Kaisinger and colleagues, these are still early days for translating the current research findings into clinical obesity tests and potential treatment, she said.

The “holy grail,” Dr. Kaplan said, is a future where people get screened for a particular genetic profile and their provider can say something like, “You’re probably most susceptible to this type, so we’ll treat you with this particular drug that’s been developed for people with this phenotype.”

Dr. Kaplan added: “That’s exactly what we are trying to do.”

Moving forward, Ms. Kaisinger and colleagues plan to repeat the study in larger and more diverse populations. They also plan to reverse the usual road map for studies, which typically start in animals and then progress to humans.

“We plan to take the most promising gene candidates forward into mouse models to learn more about their function and how exactly their dysfunction results in obesity,” Ms. Kaisinger said. 

Three study coauthors are employees and shareholders of Adrestia Therapeutics. No other conflicts of interest were reported.

A version of this article appeared on WebMD.com.

Newly discovered genes could explain body fat differences between men and women with obesity, as well as why some people gain excess weight in childhood.

Identifying specific genes adds to growing evidence that biology, in part, drives obesity. Researchers hope the findings will lead to effective treatments, and in the meantime add to the understanding that there are many types of obesity that come from a mix of genes and environmental factors.

Although the study is not the first to point to specific genes, “we were quite surprised by the proposed function of some of the genes we identified,” Lena R. Kaisinger, lead study investigator and a PhD student in the MRC Epidemiology Unit at the University of Cambridge (England), wrote in an email. For example, the genes also manage cell death and influence how cells respond to DNA damage. 

The investigators are not sure why genes involved in body size perform this kind of double duty, which opens avenues for future research.

The gene sequencing study was published online in the journal Cell Genomics.
 

Differences between women and men

The researchers found five new genes in females and two new genes in males linked to greater body mass index (BMI): DIDO1, KIAA1109, MC4R, PTPRG and SLC12A5 in women and MC4R and SLTM in men. People who recall having obesity as a child were more likely to have rare genetic changes in two other genes, OBSCN and MADD.

“The key thing is that when you see real genes with real gene names, it really makes real the notion that genetics underlie obesity,” said Lee Kaplan, MD, PhD, director of the Obesity and Metabolism Institute in Boston, who was not affiliated with the research.

Ms. Kaisinger and colleagues found these significant genetic differences by studying genomes of about 420,000 people stored in the UK Biobank, a huge biomedical database. The researchers decided to look at genes by sex and age because these are “two areas that we still know very little about,” Ms. Kaisinger said.

“We know that different types of obesity connect to different ages,” said Dr. Kaplan, who is also past president of the Obesity Society. “But what they’ve done now is find genes that are associated with particular subtypes of obesity ... some more common in one sex and some more common in different phases of life, including early onset obesity.”
 

The future is already here

Treatment for obesity based on a person’s genes already exists. For example, in June 2022, the Food and Drug Administration approved setmelanotide (Imcivree) for weight management in adults and children aged 6 years and older with specific genetic markers. 

Even as encouraging as setmelanotide is to Ms. Kaisinger and colleagues, these are still early days for translating the current research findings into clinical obesity tests and potential treatment, she said.

The “holy grail,” Dr. Kaplan said, is a future where people get screened for a particular genetic profile and their provider can say something like, “You’re probably most susceptible to this type, so we’ll treat you with this particular drug that’s been developed for people with this phenotype.”

Dr. Kaplan added: “That’s exactly what we are trying to do.”

Moving forward, Ms. Kaisinger and colleagues plan to repeat the study in larger and more diverse populations. They also plan to reverse the usual road map for studies, which typically start in animals and then progress to humans.

“We plan to take the most promising gene candidates forward into mouse models to learn more about their function and how exactly their dysfunction results in obesity,” Ms. Kaisinger said. 

Three study coauthors are employees and shareholders of Adrestia Therapeutics. No other conflicts of interest were reported.

A version of this article appeared on WebMD.com.

FROM CELL GENOMICS

New consensus on managing acetaminophen poisoning

Article Type
Changed
Wed, 08/09/2023 - 11:12

 

TOPLINE:

An expert panel has updated recommendations for emergency department assessment, management, and treatment of acetaminophen poisoning.

METHODOLOGY:

The United States and Canada have no formal guidelines for managing acetaminophen poisoning, which is characterized by hepatocellular damage and potentially life-threatening liver failure.

The past 25 years have seen the introduction of products that contain greater amounts of acetaminophen, extended-release preparations, and new drugs that combine acetaminophen with opioids or other ingredients.

Drawing on the medical literature and current poison control guidelines, the panel used a modified Delphi method to create a decision framework and determine appropriate management of acetaminophen poisoning, including triage and laboratory evaluation. The framework addresses scenarios such as extended-release and high-risk ingestions, co-ingestion of anticholinergics or opioids, pregnancy, body weight greater than 100 kg, and criteria for consultation with a toxicologist.

TAKEAWAY:

The panel emphasized the role of the patient’s history; an inaccurate estimate of the time of ingestion, for example, can lead to the erroneous conclusion that acetylcysteine, a medication used to treat overdose, is not needed or can be discontinued prematurely – a potentially fatal mistake.

The initial dose of acetylcysteine should be administered as soon as the need becomes evident; the panel recommends at least 300 mg/kg orally or intravenously during the first 20 to 24 hours of treatment.

Management of ingestions of extended-release preparations is the same, except that a second acetaminophen blood concentration should be obtained in some cases.

When acetaminophen is co-ingested with anticholinergic or opioid agonist medications, management is also the same, except that if the first acetaminophen concentration, measured 4-24 hours after ingestion, is 10 mcg/mL or less, no further measurement or acetylcysteine treatment is needed.

IN PRACTICE:

“A guideline that provides management guidance could optimize patient outcomes, reduce disruption for patients and caregivers, and reduce costs by shortening the length of hospitalization,” write the authors.

SOURCE:

The study was conducted by Richard C. Dart, MD, PhD, Rocky Mountain Poison and Drug Safety, University of Colorado, Denver, and colleagues. It was published online in JAMA Network Open.

LIMITATIONS:

The work was limited by a scarcity of high-quality data addressing the clinical decisions involved in managing acetaminophen poisoning; the few well-controlled comparative studies focused on specific issues rather than on overall patient management.

DISCLOSURES:

The work was supported by a grant from Johnson & Johnson Consumer Inc. Dr. Dart has reported receiving grants from Johnson & Johnson outside the submitted work. See paper for disclosures of other authors.

A version of this article appeared on Medscape.com.
