Medical treatment for appendicitis effective long-term

Article Type
Changed
Fri, 08/11/2023 - 10:12


TOPLINE:

Most patients who receive antibiotics rather than surgical treatment for appendicitis have successful long-term outcomes, but some may require surgery up to 20 years later.

METHODOLOGY:

  • Researchers used the Swedish National Patient Registry to follow up on 292 patients from two randomized controlled trials conducted in the 1990s.
  • Both trials divided patients into two groups: those who underwent appendectomy and those who received antibiotic treatment for appendicitis.
  • Researchers looked at rates of recurrent appendicitis that required surgery later in life.

TAKEAWAY:

  • 29% of patients in the nonoperative group who were discharged successfully during the initial study eventually underwent surgery.
  • Some patients who initially received antibiotics required surgery up to 20 years later.
  • 9.5% of patients who didn’t undergo surgery went to a surgical outpatient clinic for abdominal pain, compared with 0.01% of those who had surgery.
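As a back-of-the-envelope check, the outpatient-visit percentages quoted above imply a very large relative difference between the two groups. The sketch below is illustrative arithmetic only, using the quoted rates rather than the study's actual patient counts:

```python
# Illustrative arithmetic based solely on the percentages quoted above;
# the study's real denominators and adjusted estimates are not reproduced here.

nonop_rate = 9.5 / 100   # nonoperative patients visiting a surgical clinic for abdominal pain
surg_rate = 0.01 / 100   # operated patients visiting a surgical clinic for abdominal pain

relative_risk = nonop_rate / surg_rate
print(f"Unadjusted relative risk of a surgical outpatient visit: {relative_risk:.0f}x")
```

Such a ratio should be read cautiously: a crude quotient of two reported percentages ignores follow-up time and group composition.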

IN PRACTICE:

“More than half of the patients treated nonoperatively did not experience recurrence and avoided surgery over approximately 2 decades. There is no evidence for long-term risks of nonoperative management other than that of recurrence of appendicitis,” the authors report.

SOURCE:

Simon Eaton, PhD, of UCL Great Ormond Street Institute of Child Health in London, was the corresponding author of the study, published online in JAMA Surgery. The study was funded by the NIHR Biomedical Research Centre at Great Ormond Street Hospital and the Swedish Research Council.

LIMITATIONS:

The data were retrospective, so the researchers could not track how patients’ circumstances and characteristics changed over time. Most patients were male, and the researchers lacked histopathology results for patients for whom nonsurgical treatment succeeded initially but who later required appendectomy. They also relied on diagnostic standards used in the 1990s, when the initial studies were performed; these were less sophisticated and accurate than current standards.

DISCLOSURES:

Coauthor Jan Svensson, MD, PhD, reported receiving grants from the Lovisa Foundation during the conduct of the study. No other disclosures were reported.

A version of this article first appeared on Medscape.com.


Genetic profiles affect smokers’ lung cancer risk

Article Type
Changed
Wed, 08/16/2023 - 17:18

Smokers with extreme phenotypes of high and low risk of developing tobacco-associated lung cancer have different genetic profiles, according to a multidisciplinary study conducted by specialists from the Cancer Center at the University of Navarra Clinic (CUN). The results were presented at the annual meeting of the American Society for Clinical Oncology. 

Ana Patiño García, PhD, director of the genomic medicine unit at the CUN and a coordinator of the research, explained in an interview the main reason why this study was conducted. “This study came straight out of the oncology clinic, where we are constantly encountering patients with lung cancer who have never smoked or who have smoked very little, while we also all know people who have smoked a lot throughout their lifetime and have never developed cancer. This observation has led us to ask whether there are genetic factors that increase or decrease the risk of cancer and protect people against this disease.”

José Luis Pérez Gracia, MD, PhD, oncologist, coordinator of the oncology trials department at the CUN and another of the individuals responsible for this research, said: “This is the first study to validate genetic factors associated with people who appear to be resistant to developing tobacco-related lung cancer or who, on the other hand, are at high risk of developing this disease.”

Pioneering approach 

It has long been observed that some smokers develop lung cancer while others never do. “This is a very well-known fact, since everyone knows about some elderly person who has been a heavy smoker and has never developed lung cancer,” said Dr. Pérez. “Unfortunately, we oncologists encounter young smokers who have been diagnosed with this disease. However, despite the importance of understanding the causes behind these phenotypes, it is a question that has never been studied from a genetic standpoint.”

The study was conducted using DNA from 133 heavy smokers who remained free of lung cancer at a mean age of 80 years and from another 116 heavy smokers who had developed this type of cancer at a mean age of 50 years. This DNA was sequenced using next-generation techniques, and the results were analyzed using bioinformatics and artificial intelligence systems in collaboration with the University of Navarra Applied Medical Research Center and the University of Navarra School of Engineering.
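The analysis pipeline itself is not published in this article, but the core of an extreme-phenotype case-control comparison can be sketched as a per-variant 2×2 test. Everything below is hypothetical: the carrier counts are invented for illustration, and only the group sizes (116 cases, 133 controls) come from the text.

```python
from math import comb

def fisher_exact_p(a, b, c, d):
    """Two-sided Fisher's exact test p-value for the 2x2 table [[a, b], [c, d]]."""
    row1, col1, n = a + b, a + c, a + b + c + d

    def table_prob(x):
        # Hypergeometric probability that cell (0,0) equals x, margins fixed
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    p_obs = table_prob(a)
    lo, hi = max(0, row1 - (n - col1)), min(row1, col1)
    # Sum the probabilities of all tables at least as extreme as the observed one
    return sum(table_prob(x) for x in range(lo, hi + 1)
               if table_prob(x) <= p_obs + 1e-12)

# Hypothetical carrier counts for one variant in an extreme-phenotype design:
# cases = heavy smokers with early lung cancer, controls = elderly cancer-free smokers.
cases_carrier, cases_noncarrier = 40, 76        # n = 116 cases
controls_carrier, controls_noncarrier = 18, 115  # n = 133 controls

odds_ratio = (cases_carrier * controls_noncarrier) / (cases_noncarrier * controls_carrier)
p_value = fisher_exact_p(cases_carrier, cases_noncarrier,
                         controls_carrier, controls_noncarrier)
print(f"OR = {odds_ratio:.2f}, p = {p_value:.4g}")
```

A real pipeline would run such a test for every sequenced variant and correct for multiple testing before calling any difference significant.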

When asked how this methodology could be applied to support other research conducted along these lines, Dr. Patiño said, “The most novel thing about this research is actually its approach. It’s based on groups at the extremes, defined by the patient’s age at the time of developing lung cancer and how much they had smoked. This type of comparative design is called extreme phenotypes, and its main distinguishing characteristic – which is also its most limiting characteristic – is choosing cases and controls well. Obviously, with today’s next-generation sequencing technologies, we achieve a quantity and quality of data that would have been unattainable in years gone by.”

Speaking to the role played by bioinformatics and artificial intelligence in this research, Dr. Patiño explained that they are fairly new techniques. “In fact, these technologies could be thought of as spearheading a lot of the biomedical research being done today. They’ve also somewhat set the stage for the paradigm shift where the investigator asks the data a question, and in the case of artificial intelligence, it’s the data that answer.”

Pinpointing genetic differences

In his analysis of the most noteworthy data and conclusions from this research, Dr. Pérez noted, “The most significant thing we’ve seen is that both populations have genetic differences. This suggests that our hypothesis is correct. Of course, more studies including a larger number of individuals will be needed to confirm these findings. For the first time, our work has laid the foundation for developing this line of research.” 

“Many genetic variants that we have identified as differentials in cases and controls are found in genes relevant to the immune system (HLA system), in genes related to functional pathways that are often altered in tumor development, and in structural proteins and in genes related to cell mobility,” emphasized Dr. Patiño.

Many of the genetic characteristics that were discovered are located in genes with functions related to cancer development, such as immune response, repair of genetic material, regulation of inflammation, etc. This finding is highly significant, said Dr. Pérez. “However, we must remember that these phenotypes may be attributable to multiple causes, not just one cause.”

Furthermore, the specialist explained the next steps to be taken in the context of the line opened up by this research. “First, we must expand these studies, including more individuals with, if possible, even more extreme phenotypes: more smokers who are older and younger, respectively. Once the statistical evidence is stronger, we must also confirm that the alterations observed in lab-based studies truly impact gene function.”

Earlier diagnosis 

The clinician also discussed the potential ways that the conclusions of this study could be applied to clinical practice now and in the future, and how the conclusions could benefit these patients. “The results of our line of research may help in early identification of those individuals at high risk of developing lung cancer if they smoke, so that they could be included in prevention programs to keep them from smoking or to help them stop smoking,” said Dr. Pérez. “It would also allow for early diagnosis of cancer at a time when there is a much higher chance of curing it. 

“However, the most important thing is that our study may allow us to better understand the mechanisms by which cancer arises and especially why some people do not develop it. This [understanding] could lead to new diagnostic techniques and new treatments for this disease. The techniques needed to develop this line of research (bioinformatic mass sequencing and artificial intelligence) are available and becoming more reliable and more accessible every day. So, we believe our strategy is very realistic,” he added.

Although the line of research opened up by this study depicts a new scenario, the specialists still must face several challenges to discover why some smokers are more likely than others to develop lung cancer.

“There are many lines of research in this regard,” said Dr. Pérez. “But to name a few, I would draw attention to the need to increase the number of cases and controls to improve the comparison, study patients with other tumors related to tobacco use, ask new questions using the data we have already collected, and apply other genomic techniques that would allow us to perform additional studies of genetic variants that have not yet been studied. And, of course, we need to use functional studies to expand our understanding of the function and activity of the genes that have already been identified.” 

Dr. Patiño and Dr. Pérez declared that they have no relevant financial conflicts of interest.

This article was translated from the Medscape Spanish Edition. A version appeared on Medscape.com.


Female CRC survivors may experience long-term GI symptoms

Article Type
Changed
Wed, 08/09/2023 - 15:23


TOPLINE:

Most older women who are colorectal cancer (CRC) survivors experience prolonged gastrointestinal (GI) symptoms many years after diagnosis and treatment, suggesting a need to improve GI symptom management in this population.

METHODOLOGY:

  • In this cross-sectional study, investigators used data from the Women’s Health Initiative (WHI) Life and Longevity After Cancer study to explore the impact of cancer treatments on persistent GI symptoms in long-term female CRC survivors and why some patients suffer from these symptoms.
  • The cohort consisted of 413 postmenopausal women aged 50-79 years. The mean age of the patients was 62.7 years at the time of CRC diagnosis and 71.2 years at survey completion.
  • Study participants received a CRC diagnosis, mostly in the colon (n = 341), before 2011.
  • Participants completed lifestyle questionnaires at baseline and annually thereafter. The questionnaires assessed a range of factors, including GI symptoms, psychological well-being, physical activity, and dietary habits.
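The WHI questionnaire items are not reproduced in this summary, but turning self-reported symptom responses into the prevalence figures reported below is a simple tally. The records and field names here are invented for illustration:

```python
# Hypothetical survey records; field names are invented, not the WHI instrument's.
responses = [
    {"id": 1, "bloating": True,  "constipation": True,  "diarrhea": False},
    {"id": 2, "bloating": True,  "constipation": False, "diarrhea": True},
    {"id": 3, "bloating": False, "constipation": False, "diarrhea": False},
    {"id": 4, "bloating": True,  "constipation": True,  "diarrhea": False},
]

def prevalence(records, symptom):
    """Percentage of respondents reporting the given symptom."""
    return 100 * sum(r[symptom] for r in records) / len(records)

for symptom in ("bloating", "constipation", "diarrhea"):
    print(f"{symptom}: {prevalence(responses, symptom):.1f}%")
```

With 413 actual participants, the same tally (plus survey weighting, which this sketch omits) yields the percentages in the TAKEAWAY section.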

TAKEAWAY:

  • Most CRC survivors (81%) reported persistent GI symptoms more than 8 years after their cancer diagnosis.
  • Abdominal bloating/gas was the most common symptom (54.2%), followed by constipation (44.1%), diarrhea (33.4%), and abdominal/pelvic pain (28.6%). Overall, 15.4% of CRC survivors reported having moderate to severe overall GI symptoms.
  • Psychological distress – namely, fatigue, sleep disturbance, and anxiety – represented the most important risk factor for long-term GI symptoms. Other risk factors included time since cancer diagnosis of less than 5 years, advanced cancer stage, poor dietary habits, and low physical activity.
  • GI symptoms affected survivors’ quality of life, functioning, and body image.

IN PRACTICE:

“Building upon prior work, our findings contribute to the literature by demonstrating strong relationships between GI symptoms and psychological symptoms,” the authors concluded. “Our findings shed light on the importance of psychosocial support as well as lifestyle interventions (specifically nutritional management) in managing GI symptoms in CRC survivors.”

SOURCE:

The study was led by Claire Han and was published in PLOS ONE in May 2023.

LIMITATIONS:

  • The cross-sectional study design limited the researchers’ ability to identify causal effects with respect to risk factors, life impact, and GI symptoms.
  • Symptom data were self-reported, so symptoms may have been underreported or overreported.

DISCLOSURES:

The study had no direct funding support. The original data collection for the WHI was funded by the National Heart, Lung, and Blood Institute. Authors have disclosed no relevant financial relationships.

A version of this article first appeared on Medscape.com.

Growth hormone links with worse T2D control in adolescents

Article Type
Changed
Wed, 08/09/2023 - 15:11

– Plasma levels of three proteins involved in growth hormone activity showed significant links to the controllability of type 2 diabetes in children, a finding that suggests these proteins may serve as risk markers for incident type 2 diabetes and help identify adolescents who could benefit from aggressive preventive care.

“Plasma growth hormone mediators are associated with glycemic failure in youth with type 2 diabetes,” Chang Lu, MD, said at the annual scientific sessions of the American Diabetes Association. “Our hope is that these mediators could be biomarkers for predicting type 2 diabetes onset,” she added in an interview.

Another potential application is to “leverage these data to find predictive markers” that could identify adolescents with type 2 diabetes “at risk for particularly aggressive disease and target them for more intervention,” added Elvira M. Isganaitis, MD, senior author of the report and a pediatric endocrinologist at the Joslin Diabetes Center in Boston.
 

Does growth hormone cause incident T2D at puberty?

Changes in levels of growth hormone–associated peptides during puberty “could potentially explain why children with type 2 diabetes have a more aggressive course” of the disorder, added Dr. Lu, a pediatric endocrinologist at Joslin and at Boston Children’s Hospital.

Puberty-associated changes in growth hormone and related peptides “could be why type 2 diabetes starts during puberty. Type 2 diabetes is almost unheard of before children reach about age 10,” Dr. Isganaitis said in an interview.

A current hypothesis is that “high levels of growth hormone is a cause of insulin resistance during puberty, but in healthy children their beta cells overcome this by making more insulin and so they do not develop diabetes,” said Kristen J. Nadeau, MD, a pediatric endocrinologist and professor at Children’s Hospital Colorado in Denver. 

“But this is a stress situation, and if someone has poor beta-cell function they may develop diabetes. The increase in growth hormone [during puberty] can unmask a physiologic and genetic predisposition” to developing type 2 diabetes, Dr. Nadeau said in an interview.

The analyses run by Dr. Lu, Dr. Isganaitis, and their coauthors used data collected in the Treatment Options for Type 2 Diabetes in Adolescents and Youth (TODAY) study, which randomized 699 children aged 10-17 years with type 2 diabetes to one of three antidiabetes treatment regimens and tallied the subsequent incidence of glycemic failure. The study defined the latter as either 6 months with a hemoglobin A1c level of at least 8% or need for insulin treatment.

The primary outcome showed a 39%-52% incidence of failure during 5 years of follow-up depending on the specific treatments the study participants received.
 

Growth hormone correlates of glycemic failure

The new analyses focused on 310 study participants from TODAY who had plasma specimens available from baseline and a second specimen obtained after 3 years of follow-up. The researchers compared the levels of three peptides that mediate growth hormone signaling at baseline and after 3 years, and assessed these changes relative to the endpoint of glycemic failure.

The results showed that an increase in insulin-like growth factor-1 significantly linked with a reduced incidence of glycemic failure and improved glycemia and beta-cell function.

In contrast, increasing plasma levels of growth hormone receptor significantly linked with an increased rate of glycemic failure, hyperglycemia, insulin resistance, and diminished beta-cell function. Also, an increase in insulin-like growth factor binding protein-1 significantly linked with glycemic failure and hyperglycemia at 36 months, and with higher insulin sensitivity at baseline. All these analyses adjusted for baseline differences in several demographic and clinical variables.

But these post hoc analyses could not determine whether these associations resulted from, or had a causal role in, treatment failure, cautioned Dr. Lu.

Future studies should examine the relationship of growth hormone signaling and the course of glycemic control in children and adolescents with prediabetes and obesity, Dr. Lu said.

Confirming that these growth hormone-related proteins are reliable predictors of future glycemic dysfunction would open the door to studies of interventions to slow or prevent progression to type 2 diabetes in children identified as high risk.

Potential interventions include early initiation of insulin treatment, which could help preserve beta-cell function, or treatment with a glucagon-like peptide-1 (GLP-1) agonist, a class of agents that may interact with the insulin-like growth factor-1 receptors on beta cells, Dr. Lu said.

The study received no commercial funding. Dr. Lu, Dr. Isganaitis, and Dr. Nadeau reported no relevant financial relationships.

A version of this article appeared on Medscape.com.


Older women risk overdiagnosis with mammograms: Study

Article Type
Changed
Wed, 08/09/2023 - 15:17

 

Women who continued breast cancer screenings when they reached age 70 had no lower chance of dying from the disease, and just getting a mammogram could instead set them on a path toward unnecessary risks, according to a new study from Yale University.

The findings, published in Annals of Internal Medicine, suggest that between 31% and 54% of all breast cancer diagnoses in women aged 70 years and older could be considered overdiagnoses, meaning that the cancer found during the screening would not have caused symptoms in a person’s lifetime. (For context, the average life expectancy of a woman in the U.S. is 79 years, according to the Centers for Disease Control and Prevention.) 

Overdiagnosis can be harmful because it carries the risks of complications from overtreatment, plus financial and emotional hardships and unnecessary use of limited resources.

For the study, researchers analyzed data for 54,635 women aged 70 and older and compared the rate of breast cancer diagnosis and death among women who did and did not have mammograms during a 15-year follow-up period. 

The rate of breast cancer in the study among women aged 70-74 was 6% for women who were screened and 4% for women who were not screened. The researchers estimated that 31% of the cases were potentially overdiagnosed. Among women aged 75-84, breast cancer was found in 5% of women who were screened, compared with less than 3% of unscreened women. Their estimated overdiagnosis rate was 47%. Finally, 3% of women aged 85 and older who were screened had breast cancer detected, compared with 1% of women in the unscreened group. For the older group, the overdiagnosis rate was 54%.


“While our study focused on overdiagnosis, it is important to acknowledge that overdiagnosis is just one of many considerations when deciding whether to continue screening,” researcher and Yale assistant professor of medicine Ilana Richman, MD, said in a statement. “A patient’s preferences and values, personal risk factors, and the overall balance of risks and benefits from screening are also important to take into account when making screening decisions.”

A version of this article first appeared on WebMD.com.


‘Emerging’ biomarker may predict mild cognitive impairment years before symptoms

Article Type
Changed
Wed, 08/09/2023 - 14:41

 

Levels of the synaptic protein NPTX2 in cerebrospinal fluid (CSF) may serve as an early predictor of mild cognitive impairment (MCI) years before symptoms appear, new research indicates.

“Our study shows that low NPTX2 levels are predictive of MCI symptom onset more than 7 years in advance, including among individuals who are in late middle age,” said study investigator Anja Soldan, PhD, associate professor of neurology, Johns Hopkins University School of Medicine, Baltimore.

NPTX2 is still considered an “emerging biomarker” because knowledge about this protein is limited, Dr. Soldan noted.

Prior studies have shown that levels of NPTX2 are lower in people with MCI and dementia than in those with normal cognition and that low levels of this protein in people with MCI are associated with an increased risk of developing dementia.

“Our study extends these prior findings by showing that low protein levels are also associated with the onset of MCI symptoms,” Dr. Soldan said.

The study was published online in Annals of Neurology.
 

New therapeutic target?

The researchers measured NPTX2, as well as amyloid beta 42/40, phosphorylated (p)-tau181, and total (t)-tau in CSF collected longitudinally from 269 cognitively normal adults from the BIOCARD study.

The average age at baseline was 57.7 years. Nearly all were White, 59% were women, most were college educated, and three-quarters had a close relative with Alzheimer’s disease.

During a mean follow-up of 16 years, 77 participants progressed to MCI or dementia, either within 7 years of the baseline measurements or later.

In Cox regression models, lower baseline NPTX2 levels were associated with an earlier time to MCI symptom onset (hazard ratio, 0.76; P = .023). This association was significant for progression within 7 years (P = .036) and after 7 years from baseline (P = .001), the investigators reported.

Adults who progressed to MCI had, on average, about 15% lower levels of NPTX2 at baseline, compared with adults who remained cognitively normal.

Baseline NPTX2 levels improved prediction of time to MCI symptom onset after accounting for baseline Alzheimer’s disease biomarker levels (P < .01), and NPTX2 did not interact with the CSF Alzheimer’s disease biomarkers or APOE-ε4 genetic status.

Higher baseline levels of p-tau181 and t-tau were associated with higher baseline NPTX2 levels (both P < .001) and with greater declines in NPTX2 over time, suggesting that NPTX2 may decline in response to tau pathology, the investigators suggested.

Dr. Soldan said NPTX2 may be “a novel target” for developing new therapeutics for Alzheimer’s disease and other dementing and neurodegenerative disorders, as it is not an Alzheimer’s disease–specific protein.

“Efforts are underway for developing a sensitive way to measure NPTX2 brain levels in blood, which could then help clinicians identify individuals at greatest risk for cognitive decline,” she explained.

“Other next steps are to examine how changes in NPTX2 over time relate to changes in brain structure and function and to identify factors that alter levels of NPTX2, including genetic factors and potentially modifiable lifestyle factors,” Dr. Soldan said.

“If having higher levels of NPTX2 in the brain provides some resilience against developing symptoms of Alzheimer’s disease, it would be great if we could somehow increase levels of the protein,” she noted.
 

 

 

Caveats, cautionary notes

Commenting on this research, Christopher Weber, PhD, Alzheimer’s Association director of global science initiatives, said, “Research has shown that when NPTX2 levels are low, it may lead to weaker connections between neurons and could potentially affect cognitive functions, including memory and learning.”

“This new study found an association between lower levels of NPTX2 in CSF and earlier time to MCI symptom onset, and when combined with other established Alzheimer’s biomarkers, they found that NPTX2 improved the prediction of Alzheimer’s symptom onset,” Dr. Weber said.

“This is in line with previous research that suggests NPTX2 levels are associated with an increased risk of progression from MCI to Alzheimer’s dementia,” Dr. Weber said.

However, he noted some limitations of the study. “Participants were primarily White [and] highly educated, and therefore findings may not be generalizable to a real-world population,” he cautioned.

Dr. Weber said it’s also important to note that NPTX2 is not considered an Alzheimer’s-specific biomarker but rather a marker of synaptic activity and neurodegeneration. “The exact role of NPTX2 in predicting dementia is unknown,” Dr. Weber said.

He said that more studies with larger, more diverse cohorts are needed to fully understand its significance as a biomarker or therapeutic target for neurodegenerative diseases, as well as to develop a blood test for NPTX2.  

The study was supported by the National Institutes of Health. Dr. Soldan and Dr. Weber report no relevant financial relationships.

A version of this article first appeared on Medscape.com.

Publications
Topics
Sections

 

Measuring levels of the synaptic protein NPTX2 in cerebrospinal fluid (CSF) may serve as an early predictor of mild cognitive impairment (MCI) years before symptoms appear, new research indicates.

“Our study shows that low NPTX2 levels are predictive of MCI symptom onset more than 7 years in advance, including among individuals who are in late middle age,” said study investigator Anja Soldan, PhD, associate professor of neurology, Johns Hopkins University School of Medicine, Baltimore.

NPTX2 is still considered an “emerging biomarker” because knowledge about this protein is limited, Dr. Soldan noted.

Prior studies have shown that levels of NPTX2 are lower in people with MCI and dementia than in those with normal cognition and that low levels of this protein in people with MCI are associated with an increased risk of developing dementia.

“Our study extends these prior findings by showing that low protein levels are also associated with the onset of MCI symptoms,” Dr. Soldan said.

The study was published online in Annals of Neurology.
 

New therapeutic target?

The researchers measured NPTX2, as well as amyloid beta 42/40, phosphorylated (p)-tau181, and total (t)-tau in CSF collected longitudinally from 269 cognitively normal adults from the BIOCARD study.

The average age at baseline was 57.7 years. Nearly all were White, 59% were women, most were college educated, and three-quarters had a close relative with Alzheimer’s disease.

During a mean follow-up of 16 years, 77 participants progressed to MCI or dementia, either within 7 years of the baseline measurements or afterward.

In Cox regression models, lower baseline NPTX2 levels were associated with an earlier time to MCI symptom onset (hazard ratio, 0.76; P = .023). This association was significant for progression within 7 years (P = .036) and after 7 years from baseline (P = .001), the investigators reported.

Adults who progressed to MCI had, on average, about 15% lower levels of NPTX2 at baseline, compared with adults who remained cognitively normal.

Baseline NPTX2 levels improved prediction of time to MCI symptom onset after accounting for baseline Alzheimer’s disease biomarker levels (P < .01), and NPTX2 did not interact with the CSF Alzheimer’s disease biomarkers or APOE-ε4 genetic status.

Higher baseline levels of p-tau181 and t-tau were associated with higher baseline NPTX2 levels (both P < .001) and with greater declines in NPTX2 over time, suggesting that NPTX2 may decline in response to tau pathology, the investigators suggested.

Dr. Soldan said NPTX2 may be “a novel target” for developing new therapeutics for Alzheimer’s disease and other dementing and neurodegenerative disorders, as it is not an Alzheimer’s disease–specific protein.

“Efforts are underway for developing a sensitive way to measure NPTX2 brain levels in blood, which could then help clinicians identify individuals at greatest risk for cognitive decline,” she explained.

“Other next steps are to examine how changes in NPTX2 over time relate to changes in brain structure and function and to identify factors that alter levels of NPTX2, including genetic factors and potentially modifiable lifestyle factors,” Dr. Soldan said.

“If having higher levels of NPTX2 in the brain provides some resilience against developing symptoms of Alzheimer’s disease, it would be great if we could somehow increase levels of the protein,” she noted.

Caveats, cautionary notes

Commenting on this research, Christopher Weber, PhD, Alzheimer’s Association director of global science initiatives, said, “Research has shown that when NPTX2 levels are low, it may lead to weaker connections between neurons and could potentially affect cognitive functions, including memory and learning.”

“This new study found an association between lower levels of NPTX2 in CSF and earlier time to MCI symptom onset, and when combined with other established Alzheimer’s biomarkers, they found that NPTX2 improved the prediction of Alzheimer’s symptom onset,” Dr. Weber said.

“This is in line with previous research that suggests NPTX2 levels are associated with an increased risk of progression from MCI to Alzheimer’s dementia,” Dr. Weber said.

However, he noted some limitations of the study. “Participants were primarily White [and] highly educated, and therefore findings may not be generalizable to a real-world population,” he cautioned.

Dr. Weber said it’s also important to note that NPTX2 is not considered an Alzheimer’s-specific biomarker but rather a marker of synaptic activity and neurodegeneration. “The exact role of NPTX2 in predicting dementia is unknown,” Dr. Weber said.

He said that more studies with larger, more diverse cohorts are needed to fully understand its significance as a biomarker or therapeutic target for neurodegenerative diseases, as well as to develop a blood test for NPTX2.  

The study was supported by the National Institutes of Health. Dr. Soldan and Dr. Weber report no relevant financial relationships.

A version of this article first appeared on Medscape.com.

Scalp cooling for chemo hair loss strikes out with patients

Article Type
Changed
Wed, 08/09/2023 - 15:40

 

TOPLINE:

Most women with breast cancer who received scalp cooling did not perceive benefits in preventing hair loss, overall quality of life, body image, and other outcomes, compared with those who opted to forgo scalp cooling.

METHODOLOGY:

  • Although studies have demonstrated the effectiveness of scalp cooling to reduce hair loss during breast cancer chemotherapy, most were in the setting of single-agent regimens instead of much more commonly used combined chemotherapy, and few studies assessed patients’ subjective experience.
  • To get a real-world sense of the treatment, investigators compared outcomes in 75 women who opted to use the Orbis Paxman cooling cap during taxane/anthracycline-based chemotherapy sessions with outcomes in 38 women with breast cancer who declined to use the cooling cap.
  • The women were surveyed for hair loss perception, functional health, and body image at baseline, midchemotherapy, and at their last chemotherapy cycle, as well as at 3 months and 6-9 months following chemotherapy.
  • The women were treated at the Medical University of Innsbruck, Austria, for various stages of breast cancer; about half were premenopausal.

TAKEAWAY:

  • There was no significant difference between the scalp-cooling and control groups in patient-reported hair loss (P = .831), overall quality of life (P = .627), emotional functioning (P = .737), social functioning (P = .635), and body image (P = .463).
  • On average, women stayed on treatment with the cooling cap for about 40% of the duration of their chemotherapy.
  • Overall, 53 of 75 women (70.7%) stopped scalp cooling early, most of whom (73.9%) cited alopecia as the primary reason; only about 30% completed the full course.

IN PRACTICE:

“The efficacy and tolerability of [scalp cooling] applied in a clinical routine setting ... appeared to be limited,” the authors concluded. “The further determination and up-front definition of criteria prognostic for effectiveness of [scalp cooling] may be helpful to identify patient subgroups that may experience a treatment benefit.”

SOURCE:

The work, led by Christine Brunner, Medical University of Innsbruck, Austria, was published in Breast Cancer: Targets and Therapy.

LIMITATIONS:

  • Shorter intervals between surveys might have given a more granular understanding of patients’ experiences with scalp cooling.
  • There were no biomarker assessments to help identify patients more likely to benefit.

DISCLOSURES:

The work was supported by the Medical University of Innsbruck. Dr. Brunner disclosed a grant from Paxman UK, maker of the cooling cap used in the study. Another investigator disclosed personal fees from AstraZeneca, Daiichi Sankyo, Gilead, Lilly, Novartis, and Sirius.

A version of this article first appeared on Medscape.com.


Try a little D.I.Y.

Article Type
Changed
Wed, 08/09/2023 - 13:49

Burnout continues to be a hot topic in medicine. It seems like either you are a victim or are concerned that you may become one. Does the solution lie in a restructuring of our health care nonsystem? Or do we need to do a better job of preparing physicians for the realities of an increasingly challenging profession?

Which side of the work/life balance needs adjusting?

Dr. William G. Wilkoff practiced primary care pediatrics in Brunswick, Maine, for nearly 40 years.
Dr. William G. Wilkoff

Obviously, it is both, and a recent article in the Journal of the American Medical Informatics Association provides some hints about where we might begin to look for workable solutions. Focusing on a single large university health care system, the investigators reviewed the answers provided by more than 600 attending physicians. Nearly half of the respondents reported symptoms of burnout. Physicians who felt a higher level of EHR (electronic health record) stress were more likely to be experiencing burnout. Interestingly, the odds of burnout were no different between physicians whose patient emails (MyChart messages) had been screened by a pool of support personnel and physicians who received the emails directly from the patients.

While this finding about delegating physician-patient communications may come as a surprise to some of you, it supports a series of observations I have made over the last several decades. Whether we are talking about a physician’s office or an insurance agency, I suspect most business consultants will suggest that things will run more smoothly and efficiently if there is a well-structured system in which incoming communications from the clients/patients are dealt with first by less skilled, and therefore less costly, members of the team before they are passed on to the most senior personnel. It just makes sense.

But it doesn’t always work that well. If the screener has neglected to ask a critical question or failed to anticipate a question from the ultimate decision-maker, this is likely to require another interaction between the client and the screener, and then between the screener and the decision-maker. If the decision-maker – let’s now call her a physician – had taken the call directly from the patient, it would have saved three people some time and very possibly ended up with a higher-quality response, certainly a more patient-friendly one.

I can understand why you might consider my suggestion unworkable when we are talking about phone calls. It will only work if you dedicate specific call-in times for the patients, as my partner and I did back in the dark ages. However, when we are talking about a communication that is a bit less time critical (e.g., an email or a text), it becomes very workable, and I think that’s what this recent paper is hinting at.

Too many of us have adopted a protectionist attitude toward our patients in which somehow it is unprofessional or certainly inefficient to communicate with them directly unless we are sitting down together in our offices. Please, not in the checkout at the grocery store. I hope this is not because, like lawyers, we feel we can’t bill for it. The patients love hearing from you directly even if you keep your responses short and to the point. Many will learn to follow suit and adopt your communication style.

You can argue that your staff is so well trained that your communication with the patients seldom becomes a time-gobbling ping-pong match of he-said/she-said/he-said. Then good for you. You are a better delegator than I am.

If this is your first foray into Do-It-Yourself medicine and it works, I encourage you to consider giving your own injections. It’s a clear-cut statement of the importance you attach to immunizations. And ... it will keep your staffing overhead down.

Finally, I can’t resist adding that the authors of this paper also found that physicians sleeping less than 6 hours per night had significantly higher odds of burnout. While we’re waiting for our health care nonsystem to straighten out, we need to take better care of ourselves.

Dr. Wilkoff practiced primary care pediatrics in Brunswick, Maine, for nearly 40 years. He has authored several books on behavioral pediatrics, including “How to Say No to Your Toddler.” Other than a Littman stethoscope he accepted as a first-year medical student in 1966, Dr. Wilkoff reports having nothing to disclose. Email him at [email protected].

Publications
Topics
Sections

Burnout continues to be a hot topic in medicine. It seems like either you are a victim or are concerned that you may become one. Does the solution lie in a restructuring of our health care nonsystem? Or do we need to do a better job of preparing physicians for the realities of an increasingly challenging profession?

Which side of the work/life balance needs adjusting?

Dr. William G. Wilkoff practiced primary care pediatrics in Brunswick, Maine, for nearly 40 years.
Dr. William G. Wilkoff

Obviously, it is both and a recent article in the Journal of the American Informatics Association provides some hints and suggests where we might begin to look for workable solutions. Targeting a single large university health care system, the investigators reviewed the answers provided by more than 600 attending physicians. Nearly half of the respondents reported symptoms of burnout. Those physicians feeling a higher level of EHR (electronic health record) stress were more likely to experiencing burnout. Interestingly, there was no difference in the odds of having burnout between the physicians who were receiving patient emails (MyChart messages) that had been screened by a pool support personnel and those physicians who were receiving the emails directly from the patients.

While this finding about delegating physician-patient communications may come as a surprise to some of you, it supports a series of observations I have made over the last several decades. Whether we are talking about a physicians’ office or an insurance agency, I suspect most business consultants will suggest that things will run more smoothly and efficiently if there is well-structured system in which incoming communications from the clients/patients are dealt with first by less skilled, and therefore less costly, members of the team before they are passed on to the most senior personnel. It just makes sense.

But, it doesn’t always work that well. If the screener has neglected to ask a critical question or anticipated a question by the ultimate decision-makers, this is likely to require another interaction between the client and then screener and then the screener with the decision-maker. If the decision-maker – let’s now call her a physician – had taken the call directly from the patient, it would have saved three people some time and very possibly ended up with a higher quality response, certainly a more patient-friendly one.

I can understand why you might consider my suggestion unworkable when we are talking about phone calls. It will only work if you dedicate specific call-in times for the patients as my partner and I did back in the dark ages. However, when we are talking about a communication a bit less time critical (e.g. an email or a text), it becomes very workable and I think that’s what this recent paper is hinting at.

Too many of us have adopted a protectionist attitude toward our patients in which somehow it is unprofessional or certainly inefficient to communicate with them directly unless we are sitting down together in our offices. Please, not in the checkout at the grocery store. I hope this is not because, like lawyers, we feel we can’t bill for it. The patients love hearing from you directly even if you keep your responses short and to the point. Many will learn to follow suit and adopt your communication style.

You can argue that your staff is so well trained that your communication with the patients seldom becomes a time-gobbling ping-pong match of he-said/she-said/he-said. Then good for you. You are a better delegator than I am.

If this is your first foray into Do-It-Yourself medicine and it works, I encourage you to consider giving your own injections. It’s a clear-cut statement of the importance you attach to immunizations. And ... it will keep your staffing overhead down.

Finally, I can’t resist adding that the authors of this paper also found that physicians sleeping less than 6 hours per night had a significantly higher odds of burnout. While we’re waiting for our health care nonsystem to straighten out, we need to take better care of ourselves.

Dr. Wilkoff practiced primary care pediatrics in Brunswick, Maine, for nearly 40 years. He has authored several books on behavioral pediatrics, including “How to Say No to Your Toddler.” Other than a Littman stethoscope he accepted as a first-year medical student in 1966, Dr. Wilkoff reports having nothing to disclose. Email him at [email protected].

Burnout continues to be a hot topic in medicine. It seems like either you are a victim or are concerned that you may become one. Does the solution lie in a restructuring of our health care nonsystem? Or do we need to do a better job of preparing physicians for the realities of an increasingly challenging profession?

Which side of the work/life balance needs adjusting?

Dr. William G. Wilkoff practiced primary care pediatrics in Brunswick, Maine, for nearly 40 years.
Dr. William G. Wilkoff

Obviously, it is both and a recent article in the Journal of the American Informatics Association provides some hints and suggests where we might begin to look for workable solutions. Targeting a single large university health care system, the investigators reviewed the answers provided by more than 600 attending physicians. Nearly half of the respondents reported symptoms of burnout. Those physicians feeling a higher level of EHR (electronic health record) stress were more likely to experiencing burnout. Interestingly, there was no difference in the odds of having burnout between the physicians who were receiving patient emails (MyChart messages) that had been screened by a pool support personnel and those physicians who were receiving the emails directly from the patients.

While this finding about delegating physician-patient communications may come as a surprise to some of you, it supports a series of observations I have made over the last several decades. Whether we are talking about a physician’s office or an insurance agency, I suspect most business consultants will suggest that things will run more smoothly and efficiently if there is a well-structured system in which incoming communications from the clients/patients are dealt with first by less skilled, and therefore less costly, members of the team before they are passed on to the most senior personnel. It just makes sense.

But it doesn’t always work that well. If the screener has neglected to ask a critical question, or failed to anticipate a question from the ultimate decision-maker, this is likely to require another interaction between the client and the screener, and then between the screener and the decision-maker. If the decision-maker – let’s now call her a physician – had taken the call directly from the patient, it would have saved three people some time and very possibly ended up with a higher-quality response, certainly a more patient-friendly one.

I can understand why you might consider my suggestion unworkable when we are talking about phone calls. It will only work if you dedicate specific call-in times for the patients, as my partner and I did back in the dark ages. However, when we are talking about a communication that is a bit less time critical (e.g., an email or a text), it becomes very workable, and I think that’s what this recent paper is hinting at.

Too many of us have adopted a protectionist attitude toward our patients in which somehow it is unprofessional or certainly inefficient to communicate with them directly unless we are sitting down together in our offices. Please, not in the checkout at the grocery store. I hope this is not because, like lawyers, we feel we can’t bill for it. The patients love hearing from you directly even if you keep your responses short and to the point. Many will learn to follow suit and adopt your communication style.

You can argue that your staff is so well trained that your communication with the patients seldom becomes a time-gobbling ping-pong match of he-said/she-said/he-said. Then good for you. You are a better delegator than I am.

If this is your first foray into Do-It-Yourself medicine and it works, I encourage you to consider giving your own injections. It’s a clear-cut statement of the importance you attach to immunizations. And ... it will keep your staffing overhead down.

Finally, I can’t resist adding that the authors of this paper also found that physicians sleeping less than 6 hours per night had a significantly higher odds of burnout. While we’re waiting for our health care nonsystem to straighten out, we need to take better care of ourselves.

Dr. Wilkoff practiced primary care pediatrics in Brunswick, Maine, for nearly 40 years. He has authored several books on behavioral pediatrics, including “How to Say No to Your Toddler.” Other than a Littman stethoscope he accepted as a first-year medical student in 1966, Dr. Wilkoff reports having nothing to disclose. Email him at [email protected].


For CLL, BTKi combo bests chemoimmunotherapy

Article Type
Changed
Fri, 08/11/2023 - 10:13

A new interim analysis of a large randomized, phase 3 trial provides more evidence that a combination of ibrutinib and rituximab is a better option for younger patients with untreated chronic lymphocytic leukemia (CLL) than the once-standard combination of fludarabine, cyclophosphamide, and rituximab (FCR).

The analysis of the open-label FLAIR trial, published in The Lancet Oncology, tracked 771 patients with CLL for a median follow-up of 53 months (interquartile range, 41-61 months) and found that median progression-free survival was not reached with ibrutinib/rituximab versus 67 months with FCR (hazard ratio, 0.44; P < .0001).

“This paper is another confirmation to say that Bruton’s tyrosine kinase inhibitors are more powerful than even our strongest chemoimmunotherapy. That’s very reassuring,” said hematologist/oncologist Jan A. Burger, MD, PhD, of the University of Texas MD Anderson Cancer Center, Houston, in an interview. He did not take part in the analysis but is familiar with its findings.

There are caveats to the study. More patients in the ibrutinib/rituximab arm died of cardiac events, possibly reflecting a known risk of those drugs. And for unclear reasons, there was no difference in overall survival – a secondary endpoint – between the groups. The study authors speculate that this may be because some patients on FCR progressed and turned to effective second-line drugs.

Still, the findings are consistent with the landmark E1912 trial, the authors wrote, and add “to a body of evidence that suggests that the use of ibrutinib-based regimens should be considered for patients with previously untreated CLL, especially those with IGHV-unmutated CLL.”

The study, partially funded by industry, was led by Peter Hillmen, PhD, of Leeds (England) Cancer Center.

According to Dr. Burger, FCR was the standard treatment for younger, fitter patients with CLL about 10-15 years ago. Then Bruton’s tyrosine kinase inhibitors such as ibrutinib entered the picture. But, as the new report notes, initial studies focused on older patients who weren’t considered fit enough to tolerate FCR.

The new study, like the E1912 trial, aimed to compare ibrutinib-rituximab versus FCR in younger, fitter patients.

From 2014 to 2018, researchers assigned 771 patients (median age, 62 years; IQR, 56-67; 73% male; 95% White; 66% with World Health Organization performance status 0) to FCR (n = 385) or ibrutinib/rituximab (n = 386).

Nearly three-quarters (74%) in the FCR group received six cycles of therapy, and 97% of those in the ibrutinib-rituximab group received six cycles of rituximab. Those in the ibrutinib-rituximab group also received daily doses of ibrutinib. Doses could be modified. The data cutoff was May 24, 2021.

Notably, there was no improvement in overall survival in the ibrutinib/rituximab group: 92.1% of patients were alive at 4 years versus 93.5% in the FCR group. This contrasts with the improvement in overall survival seen with ibrutinib/rituximab in the earlier E1912 study.

However, the study authors noted that overall survival in the FCR group is higher than in earlier studies, perhaps reflecting the wider availability of targeted therapy. The final study analysis will offer more insight into overall survival.

In an interview, hematologist David A. Bond, MD, of Ohio State University, Columbus, who is familiar with the study findings, said “the lack of an improvement in overall survival could be due to differences in available treatments at relapse, as the FLAIR study was conducted more recently than the prior E1912 study.” He added that “the younger ages in the E1912 study may have led to less risk for cardiovascular events or deaths for the patients treated with ibrutinib in the E1912 study.”

The previous E1912 trial showed a larger effect for ibrutinib/rituximab versus FCR on progression-free survival (HR, 0.37; P < .001 for E1912 vs. HR, 0.44; P < .0001 for the FLAIR trial). However, the study authors noted that the FLAIR trial had older subjects (median age, 62 years vs. 56.7 years in the E1912 trial).

As for grade 3 or 4 adverse events, leukopenia was most common in the FCR group (n = 203, 54%), compared with the ibrutinib/rituximab group (n = 55, 14%). Serious adverse events were reported in 205 (53%) of patients in the ibrutinib/rituximab group versus 203 (54%) patients in the FCR group.

All-cause infections, myelodysplastic syndrome, acute myeloid leukemia, Richter’s transformation, and other diagnosed cancers were rare but more common in the FCR group. Deaths from COVID-19 were the same at 3 in each group; 2 of 29 deaths in the FCR group and 3 of 30 deaths in the ibrutinib/rituximab group were considered to be likely linked to treatment.

Sudden unexplained or cardiac deaths were more common in the ibrutinib-rituximab group (n = 8, 2%) vs. the FCR group (n = 2, less than 1%).

Dr. Bond said “one of the takeaways for practicing hematologists from the FLAIR study is that cardiovascular complications and sudden cardiac death are clearly an issue for older patients with hypertension treated with ibrutinib. Patients should be monitored for signs or symptoms of cardiovascular disease and have close management of blood pressure.” 

Dr. Burger also noted that cardiac problems are a known risk of ibrutinib. “Fortunately, we have second-generation Bruton’s tyrosine kinase inhibitors that could be chosen for patients when we are worried about side effects.”

He said that chemotherapy remains the preferred – or only – treatment in some parts of the world. And patients may prefer FCR to ibrutinib because of the latter drug’s side effects or a preference for therapy that doesn’t take as long.

The study was funded by Cancer Research UK and Janssen. The study authors reported relationships with companies such as Lilly, Janssen, AbbVie, AstraZeneca, BeiGene, Gilead, and many others. Dr. Burger reports financial support for clinical trials from Pharmacyclics, AstraZeneca, Biogen, and Janssen. Dr. Bond reported no disclosures.



FROM THE LANCET ONCOLOGY


On the best way to exercise

Article Type
Changed
Wed, 08/09/2023 - 13:05

This transcript has been edited for clarity.

I’m going to talk about something important to a lot of us, based on a new study that has just come out that promises to tell us the right way to exercise. This is a major issue as we think about the best ways to stay healthy.

There are basically two main types of exercise that exercise physiologists think about. There are aerobic exercises: the cardiovascular things like running on a treadmill or outside. Then there are muscle-strengthening exercises: lifting weights, calisthenics, and so on. And of course, plenty of exercises do both at the same time.

It seems that the era of aerobic exercise as the main way to improve health was the 1980s and early 1990s. Then we started to increasingly recognize that muscle-strengthening exercise was really important too. We’ve got a ton of data on the benefits of cardiovascular and aerobic exercise (a reduced risk for cardiovascular disease, cancer, and all-cause mortality, and even improved cognitive function) across a variety of study designs, including cohort studies, but also some randomized controlled trials where people were randomized to aerobic activity.

We’re starting to get more data on the benefits of muscle-strengthening exercises, although it hasn’t been in the zeitgeist as much. Obviously, this increases strength and may reduce visceral fat, increase anaerobic capacity and muscle mass, and therefore [increase the] basal metabolic rate. What is really interesting about muscle strengthening is that muscle just takes up more energy at rest, so building bigger muscles increases your basal energy expenditure and increases insulin sensitivity because muscle is a good insulin sensitizer.

So, do you do both? Do you do one? Do you do the other? What’s the right answer here?

It depends on who you ask. The Centers for Disease Control and Prevention’s recommendation, which changes from time to time, is that you should do at least 150 minutes a week of moderate-intensity aerobic activity. Anything that gets your heart beating faster counts here. So that’s 30 minutes, 5 days a week. They also say you can do 75 minutes a week of vigorous-intensity aerobic activity – something that really gets your heart rate up and has you breaking a sweat. They also recommend at least 2 days a week of a muscle-strengthening activity that makes your muscles work harder than usual, whether that’s push-ups or lifting weights or something like that.

The World Health Organization is similar. They don’t target exactly 150 minutes a week; they actually say at least 150 and up to 300 minutes of moderate-intensity physical activity, or 75-150 minutes of vigorous-intensity aerobic physical activity. They set a floor and a range, whereas the CDC sets a target and encourages going a bit higher. They also recommend 2 days of muscle strengthening per week for optimal health.
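As a back-of-the-envelope sketch (mine, not the study’s), these weekly targets are easy to encode. The 2-to-1 vigorous-to-moderate equivalence below is a common convention I’m assuming for illustration; the recommendations above present the two targets separately.

```python
def meets_cdc_aerobic(moderate_min: int, vigorous_min: int) -> bool:
    """At least 150 min/week of moderate aerobic activity, counting each
    vigorous minute as two moderate minutes (an assumed equivalence)."""
    return moderate_min + 2 * vigorous_min >= 150


def meets_cdc_strength(msa_days: int) -> bool:
    """At least 2 days/week of muscle-strengthening activity."""
    return msa_days >= 2


def meets_cdc_guidelines(moderate_min: int, vigorous_min: int, msa_days: int) -> bool:
    return meets_cdc_aerobic(moderate_min, vigorous_min) and meets_cdc_strength(msa_days)


# 30 minutes of brisk walking on 5 days plus 2 strength sessions -> meets both targets
print(meets_cdc_guidelines(moderate_min=150, vigorous_min=0, msa_days=2))  # True
# 75 minutes of running but no strength work -> aerobic target only
print(meets_cdc_guidelines(moderate_min=0, vigorous_min=75, msa_days=0))   # False
```

The same functions cover the WHO floor, since it starts at the identical 150/75-minute thresholds.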

But what do the data show? Why am I talking about this? It’s because of this new study in JAMA Internal Medicine by Rubén López-Bueno and colleagues. I’m going to focus on all-cause mortality for brevity, but the results are broadly similar for the other endpoints.

The data source is the U.S. National Health Interview Survey. A total of 500,705 people took part in the survey and answered a slew of questions (including self-reports on their exercise amounts), with a median follow-up of about 10 years looking for things like cardiovascular deaths, cancer deaths, and so on.

The survey classified people into different exercise categories – how much time they spent doing moderate physical activity (MPA), vigorous physical activity (VPA), or muscle-strengthening activity (MSA).

There are six categories based on duration of MPA (the WHO targets are highlighted in green), four categories based on length of time of VPA, and two categories of MSA (≥ or < two times per week). This gives a total of 48 possible combinations of exercise you could do in a typical week.
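The arithmetic behind those 48 combinations (6 MPA bins × 4 VPA bins × 2 MSA bins) can be sanity-checked in a couple of lines. The bin labels below are illustrative placeholders, since the study’s exact cut-points aren’t reproduced here.

```python
from itertools import product

# Placeholder bin labels only; the study's precise cut-points are not listed in the text.
mpa_bins = ["none", "low", "moderate", "target", "high", "very high"]  # 6 MPA categories
vpa_bins = ["none", "low", "target", "high"]                           # 4 VPA categories
msa_bins = ["<2 days/wk", ">=2 days/wk"]                               # 2 MSA categories

# Cartesian product: every possible weekly exercise profile
combos = list(product(mpa_bins, vpa_bins, msa_bins))
print(len(combos))  # 48
```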

Here are the percentages of people who fell into each of these 48 potential categories. The largest is the 35% of people who fell into the “nothing” category (no MPA, no VPA, and fewer than two sessions per week of MSA). These “nothing” people serve as the reference category moving forward.

So who are these people? On the far left are the 361,000 people (the vast majority) who don’t hit 150 minutes a week of MPA or 75 minutes a week of VPA and don’t do 2 days a week of MSA. The other three categories reflect increasing amounts of exercise. Younger people seem to be doing more exercise at the higher ends, and men are more likely to be at the higher end. There are also some interesting findings from the survey questions on alcohol: the people who do more exercise are more likely to be current drinkers. This is interesting. I confirmed these data with the investigator. This might suggest one of the reasons why some studies have shown that drinkers have better cardiovascular or cognitive outcomes over time. There’s a lot of conflicting data there, but in part, it might be that healthier people drink more alcohol. It could be a socioeconomic phenomenon as well.

Now, what blew my mind were the smoker numbers, but don’t get too excited about them. What it looks like from the table in JAMA Internal Medicine is that 20% of the people who don’t do much exercise smoke, and then something like 60% of the people who do more exercise smoke. That can’t be right. So I checked with the lead study author. There is a mistake in these columns for smoking: the “never smoker” and “current smoker” numbers were flipped. In fact, just 15.2% of those who exercise a lot are current smokers, not 63.8%. This has been fixed online, but just in case you saw this and were as confused as I was that these incredibly healthy smokers are out there exercising all the time, it was just a typo.

There is bias here. One of the big ones is called reverse causation bias. This is what might happen if, let’s say, you’re already sick: you have cancer, serious cardiovascular disease, or heart failure. You can’t exercise that much; you physically can’t do it. If such people then die, it looks as though low exercise drove the death, when really the illness limited the exercise. The investigators got around this a bit by excluding mortality events within 2 years of the initial survey: anyone who died within 2 years of reporting how often they exercised was not included in this analysis.
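That exclusion step amounts to a simple filter on the cohort. The field names and numbers below are hypothetical, purely to illustrate the idea, not the authors’ actual code or data.

```python
from dataclasses import dataclass


@dataclass
class Participant:
    exercise_category: str
    died: bool
    years_to_event: float  # years from survey to death or censoring


def exclude_early_deaths(cohort, window_years=2.0):
    """Drop participants who died within `window_years` of the survey,
    a simple guard against reverse causation (already-sick people who
    cannot exercise and die shortly after enrollment)."""
    return [p for p in cohort if not (p.died and p.years_to_event < window_years)]


cohort = [
    Participant("nothing", died=True, years_to_event=1.2),    # excluded: died within 2 years
    Participant("nothing", died=True, years_to_event=6.0),    # kept: death after the window
    Participant("VPA>150", died=False, years_to_event=10.0),  # kept: censored, still alive
]
print(len(exclude_early_deaths(cohort)))  # 2
```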

A second bias is known as the healthy exerciser, or healthy user, effect: people who exercise a lot probably do other healthy things; they might eat better or get out in the sun more. Researchers try to get around this through multivariable adjustment, adjusting for age, sex, race, marital status, and so on. No adjustment is perfect; there’s always residual confounding. But this is probably the best you can do with a dataset like the one they had access to.

Let’s go to the results, which are nicely heat-mapped in the paper. They’re divided into people who reported fewer than, or at least, 2 days of MSA. The reference group we want to pay attention to is the people who don’t do anything. The highest mortality, 9.8 deaths per 1,000 person-years, is seen in the group that reported no MPA, no VPA, and fewer than 2 days a week of MSA.

As you move up and to the right (more VPA and MPA), you see lower numbers. The lowest, 4.9 deaths per 1,000 person-years, was among people who reported more than 150 minutes per week of VPA and at least 2 days of MSA.

Looking at these data, the benefit, or bang for your buck, is higher for VPA than for MPA. Getting 2 days of MSA also tends to track with lower overall mortality. This is not necessarily causal, but it is a rather potent and consistent pattern across all the different groups.
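For readers less used to the units: a rate like 4.9 or 9.8 per 1,000 person-years is just the number of deaths divided by total follow-up time, scaled by 1,000. A minimal sketch with made-up counts (the paper reports the rates, not these raw numerators):

```python
def rate_per_1000_py(deaths: int, person_years: float) -> float:
    """All-cause mortality expressed per 1,000 person-years of follow-up."""
    return 1000.0 * deaths / person_years


# Hypothetical example: 490 deaths over 100,000 person-years of follow-up
print(round(rate_per_1000_py(490, 100_000), 1))  # 4.9
# Doubling the deaths over the same follow-up doubles the rate
print(round(rate_per_1000_py(980, 100_000), 1))  # 9.8
```

Person-years let groups of different sizes and follow-up lengths be compared on one scale, which is what makes the heat map readable at a glance.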

So, what are we supposed to do here? I think the clearest finding from the study is that anything is better than nothing. This study suggests that if you are going to be active, push toward vigorous activity if you’re physically able to do it. And of course, layering in the MSA as well seems to be associated with benefit.

Like everything in life, there’s no one simple solution; it’s a mix. But we can tell ourselves and our patients to get out there, break a sweat as often as we can during the week, and take a couple of days to get those muscles a little bigger, which may increase insulin sensitivity and basal metabolic rate. Is it guaranteed to extend life? No. This is an observational study; we don’t have causal data here. But it’s unlikely to cause much harm. I’m particularly happy that people are now doing a much better job of dissecting out the kinds of physical activity that are beneficial. It turns out that all of it is, and probably a mixture is best.

Dr. Wilson is associate professor, department of medicine, and interim director, program of applied translational research, Yale University, New Haven, Conn. He disclosed no relevant financial relationships.

A version of this article appeared on Medscape.com.

Publications
Topics
Sections

This transcript has been edited for clarity.

I’m going to talk about something important to a lot of us, based on a new study that has just come out that promises to tell us the right way to exercise. This is a major issue as we think about the best ways to stay healthy.

There are basically two main types of exercise that exercise physiologists think about. There are aerobic exercises: the cardiovascular things like running on a treadmill or outside. Then there are muscle-strengthening exercises: lifting weights, calisthenics, and so on. And of course, plenty of exercises do both at the same time.

It seems that the era of aerobic exercise as the main way to improve health was the 1980s and early 1990s. Then we started to increasingly recognize that muscle-strengthening exercise was really important too. We’ve got a ton of data on the benefits of cardiovascular and aerobic exercise (a reduced risk for cardiovascular disease, cancer, and all-cause mortality, and even improved cognitive function) across a variety of study designs, including cohort studies, but also some randomized controlled trials where people were randomized to aerobic activity.

We’re starting to get more data on the benefits of muscle-strengthening exercises, although they haven’t been in the zeitgeist as much. Obviously, this kind of exercise increases strength, and it may reduce visceral fat and increase anaerobic capacity and muscle mass – and therefore the basal metabolic rate. What is really interesting about muscle strengthening is that muscle simply takes up more energy at rest, so building bigger muscles increases your basal energy expenditure; it also increases insulin sensitivity, because muscle is a good insulin sensitizer.

So, do you do both? Do you do one? Do you do the other? What’s the right answer here?

It depends on who you ask. The Centers for Disease Control and Prevention’s recommendation, which changes from time to time, is that you should do at least 150 minutes a week of moderate-intensity aerobic activity – anything that gets your heart beating faster counts. That’s 30 minutes, 5 days a week. Alternatively, you can do 75 minutes a week of vigorous-intensity aerobic activity – something that really gets your heart rate up and has you breaking a sweat. They also recommend at least 2 days a week of muscle-strengthening activity that makes your muscles work harder than usual, whether that’s push-ups, lifting weights, or something similar.

The World Health Organization is similar, but rather than targeting 150 minutes a week, it recommends at least 150 and up to 300 minutes of moderate-intensity physical activity, or 75-150 minutes of vigorous-intensity aerobic physical activity. In other words, the WHO sets a floor and then goes a bit higher, whereas the CDC sets a single target. The WHO also recommends 2 days of muscle strengthening per week for optimal health.
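As a rough sketch, the two agencies’ aerobic thresholds can be folded into a single check. The function names are my own, and the 1-minute-vigorous ≈ 2-minutes-moderate equivalence is a simplification of the “equivalent combination” language both bodies use, not a quote from either guideline:

```python
def meets_aerobic_guideline(moderate_min: float, vigorous_min: float) -> bool:
    """CDC/WHO aerobic floor: 150 min/week moderate, 75 min/week vigorous,
    or an equivalent mix (1 min vigorous counted as 2 min moderate)."""
    return moderate_min + 2 * vigorous_min >= 150


def meets_msa_guideline(msa_days_per_week: int) -> bool:
    """Both bodies also recommend muscle-strengthening activity
    on at least 2 days per week."""
    return msa_days_per_week >= 2


print(meets_aerobic_guideline(150, 0))  # True: 5 x 30 min of moderate activity
print(meets_aerobic_guideline(0, 75))   # True: the vigorous-only route
print(meets_aerobic_guideline(90, 30))  # True: a mix (90 + 2*30 = 150)
print(meets_msa_guideline(1))           # False: below the 2-day MSA floor
```

Note that this checks only the floor; the WHO’s “up to 300 minutes” upper range would need a separate band.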

But what do the data show? Why am I talking about this? It’s because of this new study in JAMA Internal Medicine by Rubén López-Bueno and colleagues. I’m going to focus on all-cause mortality for brevity, but the results for the other outcomes are broadly similar.

The data source is the U.S. National Health Interview Survey. A total of 500,705 people took part in the survey and answered a slew of questions (including self-reports on their exercise amounts), with a median follow-up of about 10 years looking for things like cardiovascular deaths, cancer deaths, and so on.

The survey classified people into different exercise categories – how much time they spent doing moderate physical activity (MPA), vigorous physical activity (VPA), or muscle-strengthening activity (MSA).

There are six categories based on duration of MPA (the WHO targets are highlighted in green), four categories based on length of time of VPA, and two categories of MSA (≥ or < two times per week). This gives a total of 48 possible combinations of exercise you could do in a typical week.
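The arithmetic behind those 48 cells is just a cross-product of the three category sets. A quick sketch (the band cut-points below are illustrative stand-ins, not necessarily the paper’s exact categories):

```python
from itertools import product

# Illustrative weekly-minute bands; the paper's exact cut-points may differ.
mpa_bands = ["0", "1-74", "75-149", "150-224", "225-299", ">=300"]  # 6 MPA bands
vpa_bands = ["0", "1-74", "75-149", ">=150"]                        # 4 VPA bands
msa_levels = ["<2 sessions/wk", ">=2 sessions/wk"]                  # 2 MSA levels

combos = list(product(mpa_bands, vpa_bands, msa_levels))
print(len(combos))  # 6 * 4 * 2 = 48 possible weekly exercise profiles
```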

Here are the percentages of people who fell into each of these 48 potential categories. The largest is the 35% of people who fell into the “nothing” category (no MPA, no VPA, and less than two sessions per week of MSA). These “nothing” people will be the reference category moving forward.

So who are these people? On the far left are the 361,000 people (the vast majority) who don’t hit 150 minutes a week of MPA or 75 minutes a week of VPA and don’t do 2 days a week of MSA. The other three categories represent increasing amounts of exercise. Younger people seem to be doing more exercise at the higher ends, and men are more likely to be at the higher end. There are also some interesting findings in the alcohol data: people who do more exercise are more likely to be current drinkers. I confirmed these data with the investigator. This might suggest one reason why some studies have shown that drinkers have better cardiovascular or cognitive outcomes over time. There’s a lot of conflicting data there, but in part it may simply be that healthier people drink more alcohol. It could be a socioeconomic phenomenon as well.

Now, what blew my mind were the smoker numbers, but don’t get too excited. From the table in JAMA Internal Medicine, it looks as though 20% of the people who don’t do much exercise smoke and something like 60% of the people who do more exercise smoke. That can’t be right. So I checked with the lead study author: the “never smoker” and “current smoker” columns were inadvertently flipped. In fact, just 15.2% of those who exercise a lot are current smokers, not 63.8%. This has been fixed online, but in case you saw it and were as confused as I was that these incredibly healthy smokers are out there exercising all the time – it was just a typo.

There is bias here. One of the big ones is reverse causation bias. Let’s say you’re already sick: you have cancer, serious cardiovascular disease, or heart failure. You can’t exercise much; you physically can’t do it. And if you then die, we wouldn’t be seeing that exercise is beneficial – we would simply be seeing that sicker people aren’t able to exercise. The investigators got around this a bit by excluding mortality events within 2 years of the initial survey: anyone who died within 2 years of reporting how often they exercised was not included in this analysis.
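A minimal sketch of that exclusion step, with invented field names (`survey_date`, `death_date`) since the actual dataset schema isn’t shown here:

```python
from datetime import date, timedelta

def exclude_early_deaths(records, years=2):
    """Drop participants who died within `years` of their survey date,
    a common guard against reverse causation bias (people already dying
    of their disease cannot exercise much)."""
    window = timedelta(days=365 * years)
    return [
        r for r in records
        if r["death_date"] is None                       # alive at follow-up
        or r["death_date"] - r["survey_date"] >= window  # survived the window
    ]

cohort = [
    {"survey_date": date(2000, 1, 1), "death_date": date(2001, 6, 1)},  # excluded
    {"survey_date": date(2000, 1, 1), "death_date": date(2009, 1, 1)},  # kept
    {"survey_date": date(2000, 1, 1), "death_date": None},              # kept
]
print(len(exclude_early_deaths(cohort)))  # 2
```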

A related problem is the healthy exerciser, or healthy user, effect: people who exercise a lot probably do other healthy things – they might eat better or get out in the sun more. Researchers try to get around this through multivariable adjustment, adjusting for age, sex, race, marital status, and so on. No adjustment is perfect; there’s always residual confounding. But this is probably the best you can do with a dataset like the one they had access to.

Let’s go to the results, which are nicely heat-mapped in the paper. They’re divided into people who report less than, or at least, 2 days of MSA. The reference group we want to pay attention to is the people who don’t do anything. The highest mortality rate, 9.8 deaths per 1,000 person-years, is seen in the group that reported no MPA, no VPA, and less than 2 days a week of MSA.

As you move up and to the right (more VPA and MPA), you see lower numbers. The lowest rate, 4.9, was among people who reported more than 150 minutes per week of VPA plus at least 2 days a week of MSA.
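Putting the two extremes side by side (the 9.8 and 4.9 figures come from the heat map; the ratio arithmetic is mine, and it is crude and unadjusted):

```python
# Crude all-cause mortality rates per 1,000 person-years from the heat map
rate_nothing = 9.8       # no MPA, no VPA, <2 days/week of MSA
rate_most_active = 4.9   # >150 min/week of VPA plus >=2 days/week of MSA

rate_ratio = rate_most_active / rate_nothing
print(f"Unadjusted rate ratio: {rate_ratio:.2f}")  # 0.50, i.e. half the rate
```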

Looking at these data, the benefit – the bang for your buck – is higher for VPA than for MPA. Getting at least 2 days of MSA also tends to go along with lower overall mortality. This is not necessarily causal, but the association is rather potent and consistent across all the different groups.

So, what are we supposed to do here? I think the clearest finding from the study is that anything is better than nothing. It also suggests that if you are going to be active, push on the vigorous activity if you’re physically able to do it. And of course, layering in the MSA seems to be associated with additional benefit.

Like everything in life, there’s no one simple solution; it’s a mix. Telling ourselves and our patients to get out there, break a sweat as often as we can during the week, and take a couple of days to build those muscles a little bigger may increase insulin sensitivity and basal metabolic rate. Is it guaranteed to extend life? No. This is an observational study; we don’t have causal data here. But it’s unlikely to cause much harm. I’m particularly happy that people are now doing a much better job of dissecting out the kinds of physical activity that are beneficial. It turns out that all of it is, and probably a mixture is best.

Dr. Wilson is associate professor, department of medicine, and interim director, program of applied translational research, Yale University, New Haven, Conn. He disclosed no relevant financial relationships.

A version of this article appeared on Medscape.com.
