New tool may provide point-of-care differentiation between bacterial, viral infections

The World Health Organization estimates that 14.9 million of 57 million annual deaths worldwide (25%) are related directly to diseases caused by bacterial and/or viral infections.

The first crucial step in building a successful surveillance system is to accurately identify and diagnose disease, Ivana Pennisi reminded the audience at the annual meeting of the European Society for Paediatric Infectious Diseases, held virtually this year. A particular problem in primary care is differentiating between patients with bacterial infections, who might benefit from antibiotics, and those with viral infections, for whom supportive treatment is generally all that is required. One solution might be a rapid point-of-care tool.

Ms. Pennisi described early experience with using microchip technology to detect RNA biomarkers in the blood rather than looking for the pathogen itself. Early results suggest high diagnostic accuracy at low cost.

When a bacterium or virus enters the body, it stimulates the immune system in a distinct way, leading to the expression of different genes in the host's blood. As part of the Personalized Management of Febrile Illnesses study, researchers have identified a number of highly correlated transcripts. Of current interest are two genes that are upregulated in childhood febrile illnesses.

Ms. Pennisi, a PhD student working as part of a multidisciplinary team at the department of infectious disease and the Centre for Bioinspired Technology at Imperial College London, developed loop-mediated isothermal amplification (LAMP) assays to detect, for the first time, host RNA signatures on a nucleic acid–based point-of-care handheld system to discriminate bacterial from viral infection. The amplification reaction is combined with microchip technology in the well of a portable point-of-care device named Lacewing, which translates the nucleic acid amplification signal into a quantitative electrochemical signal without the need for a thermal cycler.

The combination of genomic expertise in the section of paediatrics, led by Michael Levin, PhD, and microchip-based technologies in the department of electrical and electronic engineering, under the guidance of Pantelis Georgiou, PhD, enabled the team to overcome many clinical challenges.

Ms. Pennisi presented her team’s early experience with clinical samples from 455 febrile children. First, transcription isothermal amplification techniques were employed to confirm bacterial and viral infections. Results were then validated using standard fluorescence-based quantitative polymerase chain reaction (PCR) instruments. To define a decision boundary between bacterial and viral patients, cutoff levels were determined using multivariate logistic regression analysis. Results were then evaluated using microarrays, reverse transcriptase PCR (RT-PCR), and the eLAMP assay to confirm comparability with the preferred techniques.
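
At its core, the classifier described here is a logistic regression over two host transcript levels, thresholded to give a bacterial-versus-viral call. The sketch below illustrates that idea only; the gene names, expression values, and 0.5 cutoff are hypothetical and are not the study's data or code.

```python
# Illustrative two-transcript logistic-regression classifier of the kind
# described above. Gene names, values, and the 0.5 cutoff are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Rows = patients; columns = normalized expression of two host transcripts
# (placeholder names "gene_A" and "gene_B").
X = np.array([
    [2.1, 0.3],  # toy bacterial cases: higher gene_A
    [1.8, 0.5],
    [2.4, 0.2],
    [0.4, 1.9],  # toy viral cases: higher gene_B
    [0.6, 2.2],
    [0.3, 1.7],
])
y = np.array([1, 1, 1, 0, 0, 0])  # 1 = confirmed bacterial, 0 = confirmed viral

model = LogisticRegression().fit(X, y)

# The decision boundary is where the predicted probability crosses a chosen
# cutoff (0.5 here); in practice the cutoff would be tuned on validation data.
new_patient = np.array([[1.9, 0.4]])
p_bacterial = model.predict_proba(new_patient)[0, 1]
print(f"P(bacterial) = {p_bacterial:.2f} ->",
      "bacterial" if p_bacterial >= 0.5 else "viral")
```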

In conclusion, Ms. Pennisi reported that the two-gene signature, combined with eLAMP technology in a point-of-care tool, offered the potential for low-cost, accurate discrimination between bacterial and viral infection in febrile children. She outlined her vision for the future: “The patient sample and reagent are loaded into a disposable cartridge. This is then placed into a device to monitor in real time the reaction and share all the data via a Bluetooth to a dedicated app on a smart phone. All data and location of the outbreak are then stored in [the] cloud, making it easier for epidemiological studies and tracking of new outbreaks. We hope that by enhancing the capability of our platform, we contribute to better patient care.”

“Distinguishing between bacterial and viral infections remains one of the key questions in the daily pediatric acute care,” commented Lauri Ivaska, MD, from the department of pediatrics and adolescent medicine at Turku (Finland) University Hospital. “One of the most promising laboratory methods to do this is by measuring quantities of two specific host RNA transcripts from a blood sample. It would be of great importance if this could be done reliably by using a fast and cheap method as presented here by Ivana Pennisi.”

Ms. Pennisi had no relevant financial disclosures.


To D or not to D? Vitamin D doesn’t reduce falls in older adults

Higher doses of vitamin D supplementation not only show no benefit in the prevention of falls in older adults at increased risk of falling, compared with the lowest doses, but they appear to increase the risk, new research shows.

Based on the findings, supplemental vitamin D above the minimum dose of 200 IU/day likely has little benefit, lead author Lawrence J. Appel, MD, MPH, told this news organization.

“In the absence of any benefit of 1,000 IU/day versus 2,000 IU/day [of vitamin D supplementation] on falls, along with the potential for harm from doses above 1,000 IU/day, it is hard to recommend a dose above 200 IU/day in older-aged persons, unless there is a compelling reason,” asserted Dr. Appel, director of the Welch Center for Prevention, Epidemiology, and Clinical Research at Johns Hopkins Bloomberg School of Public Health in Baltimore.

“More is not always better – and it may even be worse,” when it comes to vitamin D’s role in the prevention of falls, he said.

The research, published in Annals of Internal Medicine, adds important evidence in the ongoing struggle to prevent falls, says Bruce R. Troen, MD, in an accompanying editorial.

“Falls and their deleterious consequences remain a substantial risk for older adults and a huge challenge for health care teams,” writes Dr. Troen, a physician-investigator with the Veterans Affairs Western New York Healthcare System.

However, commenting in an interview, Dr. Troen cautions: “There are many epidemiological studies that are correlative, not causative, that do show a likelihood for benefit [with vitamin D supplementation]. … Therefore, there’s no reason for clinicians to discontinue vitamin D in individuals because of this study.”

“If you’re monitoring an older adult who is frail and has multiple comorbidities, you want to know what their vitamin D level is [and] provide them an appropriate supplement if needed,” he emphasized.

Some guidelines already reflect the lack of evidence of any role of vitamin D supplementation in the prevention of falls, including those of the 2018 U.S. Preventive Services Task Force, which, in a reversal of its 2012 recommendation, now does not recommend vitamin D supplementation for fall prevention in older persons without osteoporosis or vitamin D deficiency, Dr. Appel and colleagues note.
 

No prevention of falls regardless of baseline vitamin D

As part of STURDY (Study To Understand Fall Reduction and Vitamin D in You), Dr. Appel and colleagues enrolled 688 community-dwelling participants at elevated risk of falling who had a serum 25-hydroxyvitamin D [25(OH)D] level of 25 to 72.5 nmol/L (10-29 ng/mL).
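
For readers used to one unit convention or the other, the eligibility window converts directly between nmol/L and ng/mL; the snippet below is just a quick arithmetic check using the standard conversion factor for 25(OH)D (approximately 2.5 nmol/L per ng/mL).

```python
# Quick unit check on the STURDY eligibility window: 25(OH)D in nmol/L -> ng/mL.
# Standard conversion: 1 ng/mL of 25-hydroxyvitamin D is roughly 2.5 nmol/L.
NMOL_PER_NG_ML = 2.5

for nmol in (25, 72.5):
    print(f"{nmol} nmol/L ~ {nmol / NMOL_PER_NG_ML:.0f} ng/mL")
# Prints 10 and 29 ng/mL, matching the range quoted above.
```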

Participants were a mean age of 77.2 years and had a mean total 25(OH)D level of 55.3 nmol/L at enrollment.

They were randomized to one of four doses of vitamin D3: 200 IU/day (the control group), 1,000 IU/day, 2,000 IU/day, or 4,000 IU/day.

The highest doses were found to be associated with worse – not better – outcomes including a shorter time to hospitalization or death, compared with the 1,000-IU/day group. The higher-dose groups were therefore switched to a dose of 1,000 IU/day or lower, and all participants were followed for up to 2 years.

Overall, 63% experienced falls over the course of the study, which, though high, was consistent with the study’s criteria of participants having an elevated fall risk.

Of the 667 participants who completed the trial, no benefit in prevention of falling was seen across any of the doses, compared with the control group dose of 200 IU/day, regardless of participants’ baseline vitamin D levels.

Safety analyses showed that even in the 1,000-IU/day group, a higher risk of first serious fall and first fall with hospitalization was seen compared with the 200-IU/day group.

A limitation is that the study did not have a placebo group; however, “200 IU/day is a very small dose, probably homeopathic,” Dr. Appel said. “It was likely close to a placebo.”

Caveats: comorbidities, subgroups

In his editorial, Dr. Troen notes that other studies, including VITAL (Vitamin D and Omega-3 Trial), also found no reduction in falls with higher vitamin D doses; however, that trial did not show any significant risks with the higher doses.

He adds that the current study lacks information on subsets of participants.

“We don’t have enough information about the existing comorbidities and medications that these people are on to be able to pull back the layers. Maybe there is a subgroup that should not be getting 4,000 IU, whereas another subgroup may not be harmed and you may decide that patient can benefit,” he said.

Furthermore, the trial doesn’t address groups such as nursing home residents.

“I have, for instance, 85-year-olds with vitamin D levels of maybe 20 nmol/L with multiple medical issues, but levels that low were not included in the study, so this is a tricky business, but the bottom line is first, do no harm,” he said.

“We really need trials that factor in the multiple different aspects so we can come up, hopefully, with a holistic and interdisciplinary approach, which is usually the best way to optimize care for frail older adults,” he concluded.

The study received funding from the National Institute on Aging.
 

A version of this article originally appeared on Medscape.com.


How should we evaluate the benefit of immunotherapy combinations?

Every medical oncologist who has described a combination chemotherapy regimen to a patient with advanced cancer has likely been asked whether the benefits of tumor shrinkage, disease-free survival (DFS), and overall survival are worth the risks of adverse events (AEs).

Single-agent immunotherapy and, more recently, combinations of immunotherapy drugs have been approved for a variety of metastatic tumors. In general, combination immunotherapy regimens have more AEs and a higher frequency of premature treatment discontinuation for toxicity.

Michael Postow, MD, of Memorial Sloan Kettering Cancer Center in New York, reflected on new ways to evaluate the benefits and risks of immunotherapy combinations during a plenary session on novel combinations at the American Association for Cancer Research’s Virtual Special Conference on Tumor Immunology and Immunotherapy.
 

Potential targets

As with chemotherapy drugs, immunotherapy combinations make the most sense when drugs targeting independent processes are employed.

As described in a paper published in Nature in 2011, the process for recruiting the immune system to combat cancer is as follows:

  • Dendritic cells must sample antigens derived from the tumor.
  • The dendritic cells must receive an activation signal so they promote immunity rather than tolerance.
  • The tumor antigen–loaded dendritic cells need to generate protective T-cell responses, instead of T-regulatory responses, in lymphoid tissues.
  • Cancer antigen–specific T cells must enter tumor tissues.
  • Tumor-derived mechanisms for promoting immunosuppression need to be circumvented.

Since each step in the cascade is a potential therapeutic target, there are large numbers of potential drug combinations.
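
To give a rough sense of scale (a back-of-the-envelope illustration, not a figure from the presentation), the number of possible two-drug pairings grows quadratically with the number of candidate agents targeting the cascade:

```python
# Back-of-the-envelope count of two-drug combinations (hypothetical drug
# counts, not figures from the talk): d candidates yield d*(d-1)/2 pairs.
from math import comb

for n_drugs in (5, 10, 20, 40):
    print(f"{n_drugs:>3} candidate drugs -> {comb(n_drugs, 2):>4} possible two-drug pairs")
# 5 -> 10, 10 -> 45, 20 -> 190, 40 -> 780
```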
 

Measuring impact

Conventional measurements of tumor response may not be adequately sensitive to the impact from immunotherapy drugs. A case in point is sipuleucel-T, which is approved to treat advanced prostate cancer.

In the pivotal phase 3 trial, only 1 of 341 patients receiving sipuleucel-T achieved a partial response by RECIST criteria. Only 2.6% of patients had a 50% reduction in prostate-specific antigen levels. Nonetheless, a 4.1-month improvement in median overall survival was achieved. These results were published in the New England Journal of Medicine.

The discrepancy between tumor shrinkage and survival benefit for immunotherapy is not unexpected. As many as 10% of patients treated with ipilimumab (ipi) for stage IV malignant melanoma have progressive disease by tumor size but experience prolongation of survival, according to guidelines published in Clinical Cancer Research.

Accurate assessment of the ultimate efficacy of immunotherapy over time would benefit patients and clinicians, since immune checkpoint inhibitors are often administered for several years, are financially costly, and can cause treatment-associated AEs that emerge unpredictably at any time.

Curtailing the duration of ineffective treatment could be valuable from many perspectives.
 

Immunotherapy combinations in metastatic melanoma

In the CheckMate 067 study, there was an improvement in response, progression-free survival (PFS), and overall survival for nivolumab (nivo) plus ipi or nivo alone, in comparison with ipi alone, in patients with advanced melanoma. Initial results from this trial were published in the New England Journal of Medicine in 2017.

At a minimum follow-up of 60 months, the 5-year overall survival was 52% for the nivo/ipi regimen, 44% for nivo alone, and 26% for ipi alone. These results were published in the New England Journal of Medicine in 2019.

The trial was not statistically powered to conclude whether the overall survival for the combination was superior to that of single-agent nivo alone, but both nivo regimens were superior to ipi alone.

Unfortunately, the combination also produced the highest treatment-related AE rates in the 2019 report – 59% with nivo/ipi, 23% with nivo alone, and 28% with ipi alone. In the 2017 report, the combination regimen had more than twice as many premature treatment discontinuations as the other two study arms.

Is there a better way to quantify the risk-benefit ratio and explain it to patients?
 

Alternative strategies for assessing benefit: Treatment-free survival

Researchers have proposed treatment-free survival (TFS) as a potential new metric to characterize not only antitumor activity but also toxicity experienced after the cessation of therapy and before initiation of subsequent systemic therapy or death.

TFS is defined as the area between Kaplan-Meier curves from immunotherapy cessation until the reinitiation of systemic therapy or death. All patients who began immunotherapy are included – not just those achieving response or concluding a predefined number of cycles of treatment.

The curves can be partitioned into states with and without toxicity to establish a unique endpoint: time to cessation of both immunotherapy and toxicity.

Researchers conducted a pooled analysis of 3-year follow-up data from the 1,077 patients who participated in CheckMate 069, testing nivo/ipi versus nivo alone, and CheckMate 067, comparing nivo/ipi, nivo alone, and ipi alone. The results were published in the Journal of Clinical Oncology.

The TFS without grade 3 or higher AEs was 28% for nivo/ipi, 11% for nivo alone, and 23% for ipi alone. The restricted mean time without either treatment or grade 3 or greater AEs was 10.1 months, 4.1 months, and 8.5 months, respectively.
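
Since TFS is the area between two Kaplan-Meier curves, the restricted mean values quoted above amount to integrating the gap between those curves up to the restriction time. The sketch below illustrates that calculation with invented exponential curves; it is not the CheckMate data or the published analysis code.

```python
# Minimal sketch of a restricted-mean TFS calculation: integrate the gap
# between a "still on protocol therapy" curve and a "not yet on subsequent
# therapy and alive" curve up to a restriction time (36 months here).
# The exponential curves are invented for illustration; they are NOT trial data.
import numpy as np

months = np.linspace(0.0, 36.0, 361)
prob_still_on_therapy = np.exp(-months / 10.0)   # time to protocol-therapy cessation
prob_no_next_therapy = np.exp(-months / 30.0)    # time to subsequent therapy or death

# Patients counted as treatment-free are off protocol therapy but have not yet
# started subsequent therapy or died, i.e. the gap between the two curves.
tfs_gap = prob_no_next_therapy - prob_still_on_therapy

# Trapezoidal integration of the gap gives the restricted mean TFS in months.
dt = np.diff(months)
restricted_mean_tfs = float(np.sum((tfs_gap[:-1] + tfs_gap[1:]) / 2.0 * dt))
print(f"Restricted mean TFS ~ {restricted_mean_tfs:.1f} months (toy curves)")
```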

TFS incentivizes the use of regimens that have:

  • A short duration of treatment
  • Prolonged time to subsequent therapy or death
  • Only mild AEs of brief duration.

A higher TFS corresponds with the goals that patients and their providers would have for a treatment regimen.
 

Adaptive models provide clues about benefit from extended therapy

In contrast to cytotoxic chemotherapy and molecularly targeted agents, benefit from immune-targeted therapy can deepen and persist after treatment discontinuation.

In advanced melanoma, researchers observed that overall survival was similar for patients who discontinued nivo/ipi because of AEs during the induction phase of treatment and those who did not. These results were published in the Journal of Clinical Oncology.

This observation has led to an individualized, adaptive approach to de-escalating combination immunotherapy, described in Clinical Cancer Research. The approach is dubbed “SMART,” which stands for sequential multiple assignment randomized trial designs.

With the SMART approach, each stage of a trial corresponds to an important treatment decision point. The goal is to define the population of patients who can safely discontinue treatment based on response, rather than doing so after the development of AEs.

In the Adapt-IT prospective study, 60 patients with advanced melanoma and poor prognostic features were given two doses of nivo/ipi, followed by a CT scan at week 6. They were then triaged either to stopping ipi and proceeding to maintenance therapy with nivo alone or to continuing the combination for an additional two cycles of treatment. Results from this trial were presented at ASCO 2020 (abstract 10003).
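
That triage step is essentially a single branching decision at the week-6 scan. The snippet below schematizes the rule as described above; the field names and helper function are illustrative, not study code.

```python
# Schematic of the Adapt-IT week-6 triage rule described above.
# Field names and the helper function are illustrative, not study code.
from dataclasses import dataclass

@dataclass
class Week6Assessment:
    baseline_tumor_burden_mm: float  # sum of target-lesion diameters at baseline
    week6_tumor_burden_mm: float     # same measurement on the week-6 CT scan

def assign_next_step(scan: Week6Assessment) -> str:
    """Return the treatment step after two doses of nivo/ipi."""
    if scan.week6_tumor_burden_mm <= scan.baseline_tumor_burden_mm:
        # No increase in tumor burden: stop ipilimumab and de-escalate
        # to nivolumab maintenance.
        return "stop ipi; continue nivo maintenance"
    # Tumor burden increased: continue nivo/ipi for two more cycles.
    return "continue nivo/ipi for two additional cycles"

print(assign_next_step(Week6Assessment(52.0, 48.0)))  # -> de-escalate
print(assign_next_step(Week6Assessment(52.0, 60.0)))  # -> continue combination
```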

The investigators found that 68% of patients had no increase in tumor burden at week 6 and could discontinue ipi. In those patients, the response rate of 57% approached the results expected from a full course of ipi.

At median follow-up of 22.3 months, median response duration, PFS, and overall survival had not been reached for the responders who received an abbreviated course of the combination regimen.

There were two observations that suggested the first two cycles of treatment drove not only toxicity but also tumor control:

  • The rate of grade 3-4 toxicity from only two cycles was high (57%).
  • Of the 19 patients (32% of the original 60 patients) who had progressive disease after two cycles of nivo/ipi, there were no responders with continued therapy.

Dr. Postow commented that, in correlative studies conducted as part of Adapt-IT, the Ki-67 of CD8-positive T cells increased after the initial dose of nivo/ipi. However, proliferation did not continue with subsequent cycles (that is, Ki-67 did not continue to rise).

When they examined markers of T-cell stimulation such as inducible costimulator of CD8-positive T cells, the researchers observed the same effect. The “immune boost” occurred with cycle one but not after subsequent doses of the nivo/ipi combination.

Although unproven in clinical trials at this time, these data suggest that response and risks of toxicity may not support giving patients more than one cycle of combination treatment.
 

More nuanced ways of assessing tumor growth

Dr. Postow noted that judgments about treatment effects over time are often made by displaying spider plots of changes from baseline tumor size from “time zero” – the time at which combination therapy is commenced.

He speculated that it might be worthwhile to give a dose or two of immune-targeted monotherapy (such as a PD-1 or PD-L1 inhibitor alone) before time zero, measure tumor growth prior to and after the single agent, and reserve using combination immunotherapy only for those patients who do not experience a dampening of the growth curve.

Patients whose tumor growth kinetics are improved with single-agent treatment could be spared the additional toxicity (and uncertain additive benefit) from the second agent.
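
One way to operationalize that idea (purely a sketch of the concept floated in the talk, with invented measurements and an arbitrary decision rule) is to compare the tumor growth rate before and after the monotherapy lead-in and escalate only when growth has not slowed:

```python
# Sketch of the growth-kinetics idea above: compare tumor growth rate before
# vs. during a single-agent lead-in and escalate to combination therapy only
# if growth has not slowed. Measurements and decision rule are invented.

def growth_rate_mm_per_week(size_start_mm: float, size_end_mm: float, weeks: float) -> float:
    """Average change in summed tumor diameter (mm/week) over an interval."""
    return (size_end_mm - size_start_mm) / weeks

rate_before_lead_in = growth_rate_mm_per_week(40.0, 52.0, weeks=6)  # +2.0 mm/week
rate_on_monotherapy = growth_rate_mm_per_week(52.0, 54.0, weeks=6)  # ~+0.3 mm/week

if rate_on_monotherapy < rate_before_lead_in:
    print("Growth curve dampened on monotherapy -> consider withholding the second agent")
else:
    print("No dampening of growth -> consider escalating to combination immunotherapy")
```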
 

Treatment optimization: More than ‘messaging’

Oncology practice has passed through a long era of “more is better,” an era that gave rise to intensive cytotoxic chemotherapy for hematologic and solid tumors in the metastatic and adjuvant settings. In some cases, that approach proved to be curative, but not in all.

More recently, because of better staging, improved outcomes with newer technology and treatments, and concern about immediate- and late-onset health risks, there has been an effort to deintensify therapy when it can be done safely.

Once a treatment regimen and treatment duration become established, however, patients and their physicians are reluctant to deintensify therapy.

Dr. Postow’s presentation demonstrated that, with regard to immunotherapy combinations – as in other realms of medical practice – science can lead the way to treatment optimization for individual patients.

We have the potential to reassure patients that treatment de-escalation is a rational and personalized component of treatment optimization through the combination of:

  • Identifying new endpoints to quantify treatment benefits and risks.
  • SMART trial designs.
  • Innovative ways to assess tumor response during each phase of a treatment course.

Precision assessment of immunotherapy effect in individual patients can be a key part of precision medicine.

Dr. Postow disclosed relationships with Aduro, Array BioPharma, Bristol Myers Squibb, Eisai, Incyte, Infinity, Merck, NewLink Genetics, Novartis, and RGenix.


Dr. Lyss was a community-based medical oncologist and clinical researcher for more than 35 years before his recent retirement. His clinical and research interests were focused on breast and lung cancers, as well as expanding clinical trial access to medically underserved populations. He is based in St. Louis. He has no conflicts of interest.

Publications
Topics
Sections

Every medical oncologist who has described a combination chemotherapy regimen to a patient with advanced cancer has likely been asked whether the benefits of tumor shrinkage, disease-free survival (DFS), and overall survival are worth the risks of adverse events (AEs).

Dr. Alan P. Lyss

Single-agent immunotherapy and, more recently, combinations of immunotherapy drugs have been approved for a variety of metastatic tumors. In general, combination immunotherapy regimens have more AEs and a higher frequency of premature treatment discontinuation for toxicity.

Michael Postow, MD, of Memorial Sloan Kettering Cancer Center in New York, reflected on new ways to evaluate the benefits and risks of immunotherapy combinations during a plenary session on novel combinations at the American Association for Cancer Research’s Virtual Special Conference on Tumor Immunology and Immunotherapy.
 

Potential targets

As with chemotherapy drugs, immunotherapy combinations make the most sense when drugs targeting independent processes are employed.

As described in a paper published in Nature in 2011, the process for recruiting the immune system to combat cancer is as follows:

  • Dendritic cells must sample antigens derived from the tumor.
  • The dendritic cells must receive an activation signal so they promote immunity rather than tolerance.
  • The tumor antigen–loaded dendritic cells need to generate protective T-cell responses, instead of T-regulatory responses, in lymphoid tissues.
  • Cancer antigen–specific T cells must enter tumor tissues.
  • Tumor-derived mechanisms for promoting immunosuppression need to be circumvented.

Since each step in the cascade is a potential therapeutic target, there are large numbers of potential drug combinations.
 

Measuring impact

Conventional measurements of tumor response may not be adequately sensitive to the impact from immunotherapy drugs. A case in point is sipuleucel-T, which is approved to treat advanced prostate cancer.

In the pivotal phase 3 trial, only 1 of 341 patients receiving sipuleucel-T achieved a partial response by RECIST criteria. Only 2.6% of patients had a 50% reduction in prostate-specific antigen levels. Nonetheless, a 4.1-month improvement in median overall survival was achieved. These results were published in the New England Journal of Medicine.

The discrepancy between tumor shrinkage and survival benefit for immunotherapy is not unexpected. As many as 10% of patients treated with ipilimumab (ipi) for stage IV malignant melanoma have progressive disease by tumor size but experience prolongation of survival, according to guidelines published in Clinical Cancer Research.

Accurate assessment of the ultimate efficacy of immunotherapy over time would benefit patients and clinicians since immune checkpoint inhibitors are often administered for several years, are financially costly, and treatment-associated AEs emerge unpredictably at any time.

Curtailing the duration of ineffective treatment could be valuable from many perspectives.
 

Immunotherapy combinations in metastatic melanoma

In the CheckMate 067 study, there was an improvement in response, progression-free survival (PFS), and overall survival for nivolumab (nivo) plus ipi or nivo alone, in comparison with ipi alone, in patients with advanced melanoma. Initial results from this trial were published in the New England Journal of Medicine in 2017.

At a minimum follow-up of 60 months, the 5-year overall survival was 52% for the nivo/ipi regimen, 44% for nivo alone, and 26% for ipi alone. These results were published in the New England Journal of Medicine in 2019.

The trial was not statistically powered to conclude whether the overall survival for the combination was superior to that of single-agent nivo alone, but both nivo regimens were superior to ipi alone.

Unfortunately, the combination also produced the highest treatment-related AE rates – 59% with nivo/ipi, 23% with nivo, and 28% with ipi in 2019. In the 2017 report, the combination regimen had more than twice as many premature treatment discontinuations as the other two study arms.

Is there a better way to quantify the risk-benefit ratio and explain it to patients?
 

Alternative strategies for assessing benefit: Treatment-free survival

Researchers have proposed treatment-free survival (TFS) as a potential new metric to characterize not only antitumor activity but also toxicity experienced after the cessation of therapy and before initiation of subsequent systemic therapy or death.

TFS is defined as the area between Kaplan-Meier curves from immunotherapy cessation until the reinitiation of systemic therapy or death. All patients who began immunotherapy are included – not just those achieving response or concluding a predefined number of cycles of treatment.

The curves can be partitioned into states with and without toxicity to establish a unique endpoint: time to cessation of both immunotherapy and toxicity.

Researchers conducted a pooled analysis of 3-year follow-up data from the 1,077 patients who participated in CheckMate 069, testing nivo/ipi versus nivo alone, and CheckMate 067, comparing nivo/ipi, nivo alone, and ipi alone. The results were published in the Journal of Clinical Oncology.

The TFS without grade 3 or higher AEs was 28% for nivo/ipi, 11% for nivo alone, and 23% for ipi alone. The restricted mean time without either treatment or grade 3 or greater AEs was 10.1 months, 4.1 months, and 8.5 months, respectively.

TFS incentivizes the use of regimens that have:

  • A short duration of treatment
  • Prolonged time to subsequent therapy or death
  • Only mild AEs of brief duration.

A higher TFS corresponds with the goals that patients and their providers would have for a treatment regimen.
 

Adaptive models provide clues about benefit from extended therapy

In contrast to cytotoxic chemotherapy and molecularly targeted agents, benefit from immune-targeted therapy can deepen and persist after treatment discontinuation.

In advanced melanoma, researchers observed that overall survival was similar for patients who discontinued nivo/ipi because of AEs during the induction phase of treatment and those who did not. These results were published in the Journal of Clinical Oncology.

This observation has led to an individualized, adaptive approach to de-escalating combination immunotherapy, described in Clinical Cancer Research. The approach is dubbed “SMART,” which stands for sequential multiple assignment randomized trial designs.

With the SMART approach, each stage of a trial corresponds to an important treatment decision point. The goal is to define the population of patients who can safely discontinue treatment based on response, rather than doing so after the development of AEs.

In the Adapt-IT prospective study, 60 patients with advanced melanoma with poor prognostic features were given two doses of nivo/ipi followed by a CT scan at week 6. They were triaged to stopping ipi and proceeding with maintenance therapy with nivo alone or continuing the combination for an additional two cycles of treatment. Results from this trial were presented at ASCO 2020 (abstract 10003).

The investigators found that 68% of patients had no tumor burden increase at week 6 and could discontinue ipi. For those patients, their response rate of 57% approached the expected results from a full course of ipi.

At median follow-up of 22.3 months, median response duration, PFS, and overall survival had not been reached for the responders who received an abbreviated course of the combination regimen.

There were two observations that suggested the first two cycles of treatment drove not only toxicity but also tumor control:

  • The rate of grade 3-4 toxicity from only two cycles was high (57%).
  • Of the 19 patients (32% of the original 60 patients) who had progressive disease after two cycles of nivo/ipi, there were no responders with continued therapy.

Dr. Postow commented that, in correlative studies conducted as part of Adapt-IT, the Ki-67 of CD8-positive T cells increased after the initial dose of nivo/ipi. However, proliferation did not continue with subsequent cycles (that is, Ki-67 did not continue to rise).

When they examined markers of T-cell stimulation such as inducible costimulator of CD8-positive T cells, the researchers observed the same effect. The “immune boost” occurred with cycle one but not after subsequent doses of the nivo/ipi combination.

Although unproven in clinical trials at this time, these data suggest that response and risks of toxicity may not support giving patients more than one cycle of combination treatment.
 

More nuanced ways of assessing tumor growth

Dr. Postow noted that judgment about treatment effects over time are often made by displaying spider plots of changes from baseline tumor size from “time zero” – the time at which combination therapy is commenced.

He speculated that it might be worthwhile to give a dose or two of immune-targeted monotherapy (such as a PD-1 or PD-L1 inhibitor alone) before time zero, measure tumor growth prior to and after the single agent, and reserve using combination immunotherapy only for those patients who do not experience a dampening of the growth curve.

Patients whose tumor growth kinetics are improved with single-agent treatment could be spared the additional toxicity (and uncertain additive benefit) from the second agent.
 

Treatment optimization: More than ‘messaging’

Oncology practice has passed through a long era of “more is better,” an era that gave rise to intensive cytotoxic chemotherapy for hematologic and solid tumors in the metastatic and adjuvant settings. In some cases, that approach proved to be curative, but not in all.

More recently, because of better staging, improved outcomes with newer technology and treatments, and concern about immediate- and late-onset health risks, there has been an effort to deintensify therapy when it can be done safely.

Once a treatment regimen and treatment duration become established, however, patients and their physicians are reluctant to deintensity therapy.

Dr. Postow’s presentation demonstrated that, with regard to immunotherapy combinations – as in other realms of medical practice – science can lead the way to treatment optimization for individual patients.

We have the potential to reassure patients that treatment de-escalation is a rational and personalized component of treatment optimization through the combination of:

  • Identifying new endpoints to quantify treatment benefits and risks.
  • SMART trial designs.
  • Innovative ways to assess tumor response during each phase of a treatment course.

Precision assessment of immunotherapy effect in individual patients can be a key part of precision medicine.

Dr. Postow disclosed relationships with Aduro, Array BioPharma, Bristol Myers Squibb, Eisai, Incyte, Infinity, Merck, NewLink Genetics, Novartis, and RGenix.


Dr. Lyss was a community-based medical oncologist and clinical researcher for more than 35 years before his recent retirement. His clinical and research interests were focused on breast and lung cancers, as well as expanding clinical trial access to medically underserved populations. He is based in St. Louis. He has no conflicts of interest.

Every medical oncologist who has described a combination chemotherapy regimen to a patient with advanced cancer has likely been asked whether the benefits of tumor shrinkage, disease-free survival (DFS), and overall survival are worth the risks of adverse events (AEs).

Dr. Alan P. Lyss

Single-agent immunotherapy and, more recently, combinations of immunotherapy drugs have been approved for a variety of metastatic tumors. In general, combination immunotherapy regimens have more AEs and a higher frequency of premature treatment discontinuation for toxicity.

Michael Postow, MD, of Memorial Sloan Kettering Cancer Center in New York, reflected on new ways to evaluate the benefits and risks of immunotherapy combinations during a plenary session on novel combinations at the American Association for Cancer Research’s Virtual Special Conference on Tumor Immunology and Immunotherapy.
 

Potential targets

As with chemotherapy drugs, immunotherapy combinations make the most sense when drugs targeting independent processes are employed.

As described in a paper published in Nature in 2011, the process for recruiting the immune system to combat cancer is as follows:

  • Dendritic cells must sample antigens derived from the tumor.
  • The dendritic cells must receive an activation signal so they promote immunity rather than tolerance.
  • The tumor antigen–loaded dendritic cells need to generate protective T-cell responses, instead of T-regulatory responses, in lymphoid tissues.
  • Cancer antigen–specific T cells must enter tumor tissues.
  • Tumor-derived mechanisms for promoting immunosuppression need to be circumvented.

Since each step in the cascade is a potential therapeutic target, there are large numbers of potential drug combinations.
 

Measuring impact

Conventional measurements of tumor response may not be adequately sensitive to the impact from immunotherapy drugs. A case in point is sipuleucel-T, which is approved to treat advanced prostate cancer.

In the pivotal phase 3 trial, only 1 of 341 patients receiving sipuleucel-T achieved a partial response by RECIST criteria. Only 2.6% of patients had a 50% reduction in prostate-specific antigen levels. Nonetheless, a 4.1-month improvement in median overall survival was achieved. These results were published in the New England Journal of Medicine.

The discrepancy between tumor shrinkage and survival benefit for immunotherapy is not unexpected. As many as 10% of patients treated with ipilimumab (ipi) for stage IV malignant melanoma have progressive disease by tumor size but experience prolongation of survival, according to guidelines published in Clinical Cancer Research.

Accurate assessment of the ultimate efficacy of immunotherapy over time would benefit patients and clinicians since immune checkpoint inhibitors are often administered for several years, are financially costly, and treatment-associated AEs emerge unpredictably at any time.

Curtailing the duration of ineffective treatment could be valuable from many perspectives.
 

Immunotherapy combinations in metastatic melanoma

In the CheckMate 067 study, there was an improvement in response, progression-free survival (PFS), and overall survival for nivolumab (nivo) plus ipi or nivo alone, in comparison with ipi alone, in patients with advanced melanoma. Initial results from this trial were published in the New England Journal of Medicine in 2017.

At a minimum follow-up of 60 months, the 5-year overall survival was 52% for the nivo/ipi regimen, 44% for nivo alone, and 26% for ipi alone. These results were published in the New England Journal of Medicine in 2019.

The trial was not statistically powered to conclude whether the overall survival for the combination was superior to that of single-agent nivo alone, but both nivo regimens were superior to ipi alone.

Unfortunately, the combination also produced the highest treatment-related AE rates – 59% with nivo/ipi, 23% with nivo, and 28% with ipi in 2019. In the 2017 report, the combination regimen had more than twice as many premature treatment discontinuations as the other two study arms.

Is there a better way to quantify the risk-benefit ratio and explain it to patients?
 

Alternative strategies for assessing benefit: Treatment-free survival

Researchers have proposed treatment-free survival (TFS) as a potential new metric to characterize not only antitumor activity but also toxicity experienced after the cessation of therapy and before initiation of subsequent systemic therapy or death.

TFS is defined as the area between Kaplan-Meier curves from immunotherapy cessation until the reinitiation of systemic therapy or death. All patients who began immunotherapy are included – not just those achieving response or concluding a predefined number of cycles of treatment.

The curves can be partitioned into states with and without toxicity to establish a unique endpoint: time to cessation of both immunotherapy and toxicity.
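As a rough illustration of the arithmetic, the sketch below computes TFS as the difference between the restricted mean times of two Kaplan-Meier–style curves – time to immunotherapy cessation and time to subsequent therapy or death – over a fixed follow-up horizon (36 months is used here purely as an example). The curves and all numbers are hypothetical and are not taken from the pooled analysis; partitioning by toxicity would simply add further curves.

```python
import numpy as np

def area_under_step_curve(times, surv_probs, horizon):
    """Area under a Kaplan-Meier-style step function from 0 to `horizon`.

    `times` are the (sorted) step locations and `surv_probs` the survival
    probability reached at each step; survival is 1.0 before the first step.
    """
    t = np.clip(np.asarray(times, dtype=float), 0.0, horizon)
    s = np.asarray(surv_probs, dtype=float)
    grid = np.concatenate(([0.0], t, [horizon]))
    probs = np.concatenate(([1.0], s))      # value held on each interval
    return float(np.sum(probs * np.diff(grid)))

# Hypothetical curves (months): time to immunotherapy cessation vs.
# time to subsequent systemic therapy or death.
cess_t, cess_s = [2, 4, 8, 14], [0.8, 0.5, 0.3, 0.1]
subs_t, subs_s = [6, 12, 20, 30], [0.9, 0.7, 0.5, 0.3]

horizon = 36  # restrict to 36 months of follow-up
rmst_cessation = area_under_step_curve(cess_t, cess_s, horizon)
rmst_subsequent = area_under_step_curve(subs_t, subs_s, horizon)
tfs = rmst_subsequent - rmst_cessation  # area *between* the two curves
print(round(tfs, 1), "months of treatment-free survival (restricted mean)")
```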

Researchers conducted a pooled analysis of 3-year follow-up data from the 1,077 patients who participated in CheckMate 069, testing nivo/ipi versus nivo alone, and CheckMate 067, comparing nivo/ipi, nivo alone, and ipi alone. The results were published in the Journal of Clinical Oncology.

The TFS without grade 3 or higher AEs was 28% for nivo/ipi, 11% for nivo alone, and 23% for ipi alone. The restricted mean time without either treatment or grade 3 or greater AEs was 10.1 months, 4.1 months, and 8.5 months, respectively.

TFS incentivizes the use of regimens that have:

  • A short duration of treatment
  • Prolonged time to subsequent therapy or death
  • Only mild AEs of brief duration.

A higher TFS corresponds with the goals that patients and their providers would have for a treatment regimen.
 

Adaptive models provide clues about benefit from extended therapy

In contrast to cytotoxic chemotherapy and molecularly targeted agents, benefit from immune-targeted therapy can deepen and persist after treatment discontinuation.

In advanced melanoma, researchers observed that overall survival was similar for patients who discontinued nivo/ipi because of AEs during the induction phase of treatment and those who did not. These results were published in the Journal of Clinical Oncology.

This observation has led to an individualized, adaptive approach to de-escalating combination immunotherapy, described in Clinical Cancer Research. The approach is dubbed “SMART,” which stands for sequential multiple assignment randomized trial designs.

With the SMART approach, each stage of a trial corresponds to an important treatment decision point. The goal is to define the population of patients who can safely discontinue treatment based on response, rather than doing so after the development of AEs.

In the Adapt-IT prospective study, 60 patients with advanced melanoma and poor prognostic features received two doses of nivo/ipi followed by a CT scan at week 6. Based on that scan, patients either stopped ipi and proceeded to maintenance therapy with nivo alone or continued the combination for an additional two cycles of treatment. Results from this trial were presented at ASCO 2020 (abstract 10003).

The investigators found that 68% of patients had no tumor burden increase at week 6 and could discontinue ipi. Their response rate of 57% approached the results expected from a full course of ipi.

At median follow-up of 22.3 months, median response duration, PFS, and overall survival had not been reached for the responders who received an abbreviated course of the combination regimen.

There were two observations that suggested the first two cycles of treatment drove not only toxicity but also tumor control:

  • The rate of grade 3-4 toxicity from only two cycles was high (57%).
  • Of the 19 patients (32% of the original 60) who had progressive disease after two cycles of nivo/ipi, none responded to continued therapy.

Dr. Postow commented that, in correlative studies conducted as part of Adapt-IT, the Ki-67 of CD8-positive T cells increased after the initial dose of nivo/ipi. However, proliferation did not continue with subsequent cycles (that is, Ki-67 did not continue to rise).

When they examined markers of T-cell stimulation such as inducible costimulator of CD8-positive T cells, the researchers observed the same effect. The “immune boost” occurred with cycle one but not after subsequent doses of the nivo/ipi combination.

Although unproven in clinical trials at this time, these data suggest that response and risks of toxicity may not support giving patients more than one cycle of combination treatment.
 

More nuanced ways of assessing tumor growth

Dr. Postow noted that judgments about treatment effects over time are often made by displaying spider plots of changes in tumor size from baseline, starting at “time zero” – the time at which combination therapy is commenced.

He speculated that it might be worthwhile to give a dose or two of immune-targeted monotherapy (such as a PD-1 or PD-L1 inhibitor alone) before time zero, measure tumor growth before and after the single agent, and reserve combination immunotherapy for those patients who do not experience a dampening of the growth curve.

Patients whose tumor growth kinetics are improved with single-agent treatment could be spared the additional toxicity (and uncertain additive benefit) from the second agent.
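The sketch below illustrates one way such a rule could work in principle: fit an exponential growth rate to lesion measurements taken before and after a hypothetical single-agent lead-in, and escalate to the combination only if the growth curve has not been dampened. The measurements, the 50% threshold, and the decision rule are all assumptions made for illustration, not a validated algorithm.

```python
import numpy as np

def growth_rate(days, tumor_sums_mm):
    """Exponential growth rate (per day) from a log-linear least-squares fit."""
    slope, _ = np.polyfit(np.asarray(days, float), np.log(np.asarray(tumor_sums_mm, float)), 1)
    return slope

# Hypothetical lesion measurements around a single-agent anti-PD-1 lead-in at day 0.
pre_days, pre_size = [-42, -21, 0], [50, 58, 68]
post_days, post_size = [0, 21, 42], [68, 70, 71]

pre_rate, post_rate = growth_rate(pre_days, pre_size), growth_rate(post_days, post_size)

# Illustrative rule: add the second agent only if monotherapy has not
# dampened the growth rate by at least, say, 50%.
escalate = post_rate > 0.5 * pre_rate
print(f"pre {pre_rate:.4f}/day, post {post_rate:.4f}/day -> escalate: {escalate}")
```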
 

Treatment optimization: More than ‘messaging’

Oncology practice has passed through a long era of “more is better,” an era that gave rise to intensive cytotoxic chemotherapy for hematologic and solid tumors in the metastatic and adjuvant settings. In some cases, that approach proved to be curative, but not in all.

More recently, because of better staging, improved outcomes with newer technology and treatments, and concern about immediate- and late-onset health risks, there has been an effort to deintensify therapy when it can be done safely.

Once a treatment regimen and treatment duration become established, however, patients and their physicians are reluctant to deintensify therapy.

Dr. Postow’s presentation demonstrated that, with regard to immunotherapy combinations – as in other realms of medical practice – science can lead the way to treatment optimization for individual patients.

We have the potential to reassure patients that treatment de-escalation is a rational and personalized component of treatment optimization through the combination of:

  • Identifying new endpoints to quantify treatment benefits and risks.
  • SMART trial designs.
  • Innovative ways to assess tumor response during each phase of a treatment course.

Precision assessment of immunotherapy effect in individual patients can be a key part of precision medicine.

Dr. Postow disclosed relationships with Aduro, Array BioPharma, Bristol Myers Squibb, Eisai, Incyte, Infinity, Merck, NewLink Genetics, Novartis, and RGenix.


Dr. Lyss was a community-based medical oncologist and clinical researcher for more than 35 years before his recent retirement. His clinical and research interests were focused on breast and lung cancers, as well as expanding clinical trial access to medically underserved populations. He is based in St. Louis. He has no conflicts of interest.


This month in the journal CHEST®


Editor’s picks

 



  • Power Outage: An Ignored Risk Factor for Chronic Obstructive Pulmonary Disease Exacerbations. By Dr. Wangjian Zhang, et al.
  • PROPHETIC: Prospective Identification of Pneumonia in Hospitalized Patients in the ICU. By Dr. Stephen P. Bergin, et al.
  • Chronic Beryllium Disease: Update on a Moving Target. By Dr. Maeve MacMurdo, et al.
  • Development of Learning Curves for Bronchoscopy: Results of a Multicenter Study of Pulmonary Trainees. By Dr. Nha Voduc, et al.
  • Bias and Racism Teaching Rounds at an Academic Medical Center. By Dr. Quinn Capers, IV, et al.


Three genes could predict congenital Zika infection susceptibility


Three genes that could predict susceptibility to congenital Zika virus (ZIKV) infection have been identified, Irene Rivero-Calle, MD, reported at the annual meeting of the European Society for Paediatric Infectious Diseases, held virtually this year.

ZIKV, an emerging flavivirus, is responsible for one of the most critical pandemic emergencies of the last decade and has been associated with severe neonatal brain disabilities, said Dr. Rivero-Calle, of the Hospital Clínico Universitario de Santiago de Compostela in Santiago de Compostela, Spain. “We think that understanding the genomic background could explain some of the most relevant symptoms of congenital Zika syndrome (CZS) and could be essential to better comprehend this disease.”

To achieve this understanding, Dr. Rivero-Calle and her colleagues conducted a study aiming to analyze any genetic factors that could explain the variation in phenotypes in newborns from mothers who had a Zika infection during their pregnancy. Additionally, they strove to “elucidate if the possible genetic association is specific to mothers or their newborns, and to check if this genomic background or any genomic ancestry pattern could be related with the phenotype,” she explained.

In their study, Dr. Rivero-Calle and her team analyzed 80 samples, comprising 40 samples from mothers who had been infected by ZIKV during their pregnancy and 40 from their newborns. Of those descendants, 20 were asymptomatic and 20 were symptomatic (13 had CZS, 3 had microcephaly, 2 had a pathologic MRI, 1 had hearing loss, and 1 was born preterm).

Population stratification, which Dr. Rivero-Calle explained “lets us know if the population is African, European, or Native American looking at the genes,” did not show any relation with the phenotype. “We had a mixture of population genomics among all samples,” she said.

Dr. Rivero-Calle and her team then performed three analyses: a genotype analysis, an allelic test, and a gene analysis. The allelic test and gene-collapsing method highlighted three genes (PANO1, PIDD1, and SLC25A22) as potential determinants of the varying phenotypes in the newborns of ZIKV-infected mothers. Overrepresentation analysis of gene ontology terms showed that PIDD1 and PANO1 are related to apoptosis and cell death, which is closely related to early infantile epilepsy. This could explain the most severe complications of CZS: seizures, brain damage, microcephaly, and impaired neurodevelopment. In the Reactome and KEGG analyses, PIDD1 was related to the p53 pathway, which is involved in cell death and apoptosis and has been linked with microcephaly, a typical phenotypic feature of CZS.
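For readers curious about how an overrepresentation analysis works mechanically, the sketch below applies the usual one-sided hypergeometric test: given a set of candidate genes, how surprising is it that so many of them carry a particular gene ontology annotation? The gene counts are invented for illustration and do not come from this study, and a real analysis would also correct for testing many terms.

```python
from scipy.stats import hypergeom

def overrepresentation_p(n_background, n_in_term, n_hits, n_hits_in_term):
    """One-sided hypergeometric P value that a gene list is enriched for a GO term:
    probability of seeing >= n_hits_in_term term members among n_hits candidate
    genes drawn from a background of n_background genes."""
    return hypergeom.sf(n_hits_in_term - 1, n_background, n_in_term, n_hits)

# Hypothetical numbers: 3 candidate genes, 2 of which annotate to an
# apoptosis-related GO term carried by 500 of 20,000 background genes.
p = overrepresentation_p(n_background=20_000, n_in_term=500, n_hits=3, n_hits_in_term=2)
print(f"P = {p:.4g}")
```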

“So, in conclusion, we found three genes which could predict susceptibility to congenital Zika infection; we saw that the functionality of these genes seems to be deeply related with mechanisms which could explain the different phenotypes; and we saw that these three genes only appear in the children’s cohort, so there is no candidate gene in the mother’s genomic background which can help predict the phenotype of the newborn,” Dr. Rivero-Calle declared. “Finally, there is no ancestry pattern associated with disabilities caused by Zika infection.”

Dr. Rivero-Calle reported that this project (ZikAction) has received funding from the European Union’s Horizon 2020 research and innovation program, under grant agreement 734857.


C. difficile control could require integrated approach


Clostridioides difficile (C. diff) is a pathogen of both humans and animals, and controlling C. diff infection (CDI) will require an integrated approach that encompasses human health care, veterinary health care, environmental regulation, and public policy. That is the conclusion of a group led by Su-Chen Lim, MD, and Tom Riley, MD, of Edith Cowan University in Australia, who published a review in Clinical Microbiology and Infection.

CDI was generally considered a nuisance infection until the early 21st century, when a hypervirulent fluoroquinolone-resistant strain emerged in North America. The strain is now documented in the United States, Canada, and most countries in Europe.

Another new feature of CDI is increased evidence of community transmission, which was previously rare. Community-associated cases are defined as those with symptom onset outside the hospital, or within 48 hours of hospital admission, in patients with no history of hospitalization in the previous 12 weeks. Community-associated CDI now accounts for 41% of U.S. cases, nearly 30% of Australian cases, and about 14% in Europe, according to recent studies.
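As a concrete reading of that surveillance definition, here is a minimal Python sketch that sorts a case into community-associated versus healthcare-associated from three dates. Real surveillance schemes use more granular categories, so the two-way split and the function name are simplifications for illustration.

```python
from datetime import datetime, timedelta
from typing import Optional

def classify_cdi(symptom_onset: datetime,
                 admission: Optional[datetime],
                 last_discharge: Optional[datetime]) -> str:
    """Rough sketch of the surveillance definition described above (illustrative only)."""
    hospitalized_in_prior_12_weeks = (
        last_discharge is not None and symptom_onset - last_discharge <= timedelta(weeks=12)
    )
    onset_in_community = admission is None or symptom_onset < admission + timedelta(hours=48)
    if onset_in_community and not hospitalized_in_prior_12_weeks:
        return "community-associated"
    return "healthcare-associated"

# Onset 10 hours after admission, no inpatient stay in the prior 12 weeks
print(classify_cdi(datetime(2020, 6, 1, 18), admission=datetime(2020, 6, 1, 8), last_discharge=None))
# -> community-associated
```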

Several features of CDI suggest a need for an integrated management plan. The preferred habitat of C. diff is the gastrointestinal tract of mammals, and the organism likely colonizes all mammalian neonates. Over time, colonization by other microbes likely crowds it out and prevents overgrowth. But widespread use of antimicrobials in animal production can create an environment resembling that of the neonate, allowing C. diff to expand. That has led to food animals becoming a major C. diff reservoir, and whole-genome studies have shown that strains found in humans, food, animals, and the environment are closely related and sometimes genetically indistinguishable, suggesting transmission between humans and animals that may be attributable to contaminated food and environments.

The authors suggest that C. diff infection control should be guided by the One Health initiative, which seeks cooperation between physicians, osteopathic physicians, veterinarians, dentists, nurses, and other scientific and environmental disciplines. The goal is to enhance surveillance and interdisciplinary communication, as well as integrated policies. The authors note that physicians often think of C. diff as primarily a hospital problem and may be unaware of the increased prevalence of community-acquired disease. It is also a significant problem in agriculture, since as many as 50% of piglets succumb to the disease. Other studies have recently shown that asymptomatic carriers of toxigenic strains are likely to transmit the bacteria to C. diff–negative patients, and asymptomatic carriers cluster with symptomatic patients. In one Cleveland hospital, more than 25% of hospital-associated CDI cases were found to have been colonized prior to admission, suggesting that these were not true hospital-associated cases.

C. diff has been isolated from a wide range of sources, including food animals, meat, seafood, vegetables, household environments, and natural environments like rivers, lakes, and soil. About 20% of calves and 70% of piglets are colonized with C. diff. It has a high prevalence in meat products in the United States, but a lower prevalence in Europe, possibly because of different slaughtering practices.

The authors suggest that zoonotic C. diff spread is unlikely to be confined to any geographic region or population, and that widespread C. diff contamination is occurring through food or the environment. This could be occurring because spores can withstand cooking temperatures and disseminate through the air, and even through manure from food animals made into compost or fertilizer.

Veterinary efforts mimicking hospital measures have reduced animal CDI, but there are no rapid diagnostic tests for CDI in animals, making it challenging to control its spread in this context.

The authors call for enhanced antimicrobial stewardship in both human and animal settings, including banning of antimicrobial agents as growth promoters. This has been done in the United States and Europe, but not in Brazil, China, Canada, India, and Australia. They also call for research on inactivation of C. diff spores during waste treatment.

Even better, the authors suggest, would be vaccines developed and employed in both animals and humans. No such vaccine exists for animals. Pfizer has a human vaccine in a phase 3 clinical trial, but it does not prevent colonization; others are in development.

The epidemiology of CDI is an ongoing challenge, with emerging new strains and changing social and environmental conditions. “However, it is with the collaborative efforts of industry partners, policymakers, veterinarians, clinicians, and researchers that CDI needs to be approached, a perfect example of One Health. Opening an interdisciplinary dialogue to address CDI and One Health issues has to be the focus of future studies,” the authors concluded.

SOURCE: SC Lim et al. Clinical Microbiology and Infection. 2020;26:85-863.


Upper GI bleeds in COVID-19 not related to increased mortality


A Spanish survey of COVID-19 patients suggests that upper gastrointestinal bleeding (UGB) does not affect in-hospital mortality. It also found that fewer COVID-19–positive patients underwent endoscopies, but there was no statistically significant difference in in-hospital mortality outcome as a result of delays.

“In-hospital mortality in COVID-19 patients with upper-GI bleeding seemed to be more influenced by COVID-19 than by upper-GI bleeding, and that’s something I think is important for us to know,” Gyanprakash Ketwaroo, MD, associate professor at Baylor College of Medicine, Houston, said in an interview. Dr. Ketwaroo was not involved in the study.

The results weren’t a surprise, but they do provide some reassurance. “It’s probably what I expected. Initially, we thought there might be some COVID-19 related (GI) lesions, but that didn’t seem to be borne out. So we thought the bleeding was related to (the patient) being in a hospital or the typical reasons for bleeding. It’s also what I expected that less endoscopies would be performed in these patients, and even though fewer endoscopies were performed, the outcomes were still similar. I think it’s what most people expected,” said Dr. Ketwaroo.

The study was published online Nov. 25 in the Journal of Clinical Gastroenterology, and led by Rebeca González González, MD, of Severo Ochoa University Hospital in Leganés, Madrid, and Pascual Piñera-Salmerón, MD, of Reina Sofia University General Hospital in Murcia, Spain. The researchers retrospectively analyzed data on 71,904 COVID-19 patients at 62 emergency departments in Spain, and compared 83 patients who had COVID-19 and UGB to two control groups: 249 randomly selected COVID-19 patients without UGB, and 249 randomly selected non-COVID-19 patients with UGB.

They found that 1.11% of COVID-19 patients presented with UGB, compared with 1.78% of non-COVID-19 patients at emergency departments. In patients with COVID-19, risk of UGB was associated with hemoglobin values < 10 g/L (odds ratio [OR], 34.255; 95% confidence interval [CI], 12.752-92.021), abdominal pain (OR, 11.4; 95% CI, 5.092-25.944), and systolic blood pressure < 90 mm Hg (OR, 11.096; 95% CI, 2.975-41.390).
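For readers who want to see how odds ratios of this kind map back to a regression model, the sketch below converts a logistic-regression coefficient and its standard error into an OR with a 95% confidence interval. The coefficient and standard error are hypothetical values chosen only to show the arithmetic; they are not the study's fitted parameters.

```python
import math

def odds_ratio_ci(beta: float, se: float, z: float = 1.96):
    """Odds ratio and 95% CI from a logistic-regression coefficient and its standard error."""
    return math.exp(beta), (math.exp(beta - z * se), math.exp(beta + z * se))

# Hypothetical inputs: beta = ln(OR). An OR of about 11.4 corresponds to beta ~ 2.43;
# a standard error of roughly 0.42 gives a CI of a similar width to the one reported.
or_, (lo, hi) = odds_ratio_ci(beta=math.log(11.4), se=0.42)
print(f"OR {or_:.1f} (95% CI, {lo:.1f}-{hi:.1f})")
```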

Compared with non-COVID-19 patients with UGB, those COVID-19 patients with UGB were more likely to have interstitial lung infiltrates (OR, 66.42; 95% CI, 15.364-287.223) and ground-glass opacities (OR, 21.27; 95% CI, 9.720-46.567) in chest radiograph, as well as fever (OR, 34.67; 95% CI, 11.719-102.572) and cough (OR, 26.4; 95% CI, 8.845-78.806).

Gastroscopy and endoscopic procedures were lower in patients with COVID-19 than in the general population (gastroscopy: OR, 0.269; 95% CI, 0.160-0.453; endoscopy: OR, 0.26; 95% CI, 0.165-0.623). There was no difference between the two groups with respect to endoscopic findings. After adjustment for age and sex, the only significant difference between COVID-19 patients with UGB and COVID-19 patients without UGB was a higher rate of intensive care unit admission (OR, 2.98; 95% CI, 1.16-7.65). Differences between COVID-19 patients with UGB and non–COVID-19 patients with UGB included higher rates of ICU admission (OR, 3.29; 95% CI, 1.28-8.47), prolonged hospitalizations (OR, 2.02; 95% CI, 1.15-3.55), and in-hospital mortality (OR, 2.05; 95% CI, 1.09-3.86).

UGB development was not associated with increased in-hospital mortality in COVID-19 patients (OR, 1.14; 95% CI, 0.59-2.19).

A limitation of the study was that it was performed in Spain, where endoscopies are performed in the emergency department and where thresholds for admission to the intensive care unit differ from those in the United States.

The authors did not report a funding source. Dr. Ketwaroo has no relevant financial disclosures.

SOURCE: González González R et al. J Clin Gastroenterol. 10.1097/MCG.0000000000001465.


Geography and behaviors linked to early-onset colorectal cancer survival in U.S. women


An analysis of nearly 29,000 U.S. women with early-onset colorectal cancer (CRC) showed that physical inactivity and fertility correlated modestly with living in “hot spots,” or counties with high early-onset CRC mortality rates among women.

Approximately one-third of the variation in early-onset CRC survival among women was accounted for by differences in individual- or community-level features.

Andreana N. Holowatyj, PhD, of Vanderbilt University Medical Center in Nashville, Tenn., and colleagues reported these findings in Clinical and Translational Gastroenterology.

Dr. Holowatyj and colleagues noted that prior studies have linked health behaviors with an increased risk of early-onset CRC among women. However, the impact of health behaviors on outcomes of early-onset CRC is unknown.

The researchers hypothesized that biological-, individual-, and community-level factors may be contributing to known sex-specific differences in CRC outcomes and geographic variations in survival by sex.
 

Hot spot counties with high mortality

The researchers identified geographic hot spots using three geospatial autocorrelation approaches with Centers for Disease Control and Prevention national mortality data. The team also analyzed data from the Surveillance, Epidemiology, and End Results program on 28,790 women (aged 15-49 years) diagnosed with CRC during 1999-2016.
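The specific autocorrelation statistics used are not named here, but the Getis-Ord Gi* statistic is a common choice for this kind of hot spot mapping. The sketch below implements it directly in NumPy on a toy set of county rates; it is a generic illustration under that assumption, not a reproduction of the authors' analysis.

```python
import numpy as np

def local_gi_star(rates: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Getis-Ord Gi* z-scores for county-level rates.

    `weights` is an n x n spatial-weights matrix (e.g., 1 for neighboring
    counties, 0 otherwise) that includes each county as its own neighbor.
    Large positive z-scores flag hot spots of high mortality.
    """
    x = np.asarray(rates, dtype=float)
    w = np.asarray(weights, dtype=float)
    n = x.size
    x_bar = x.mean()
    s = np.sqrt((x ** 2).mean() - x_bar ** 2)
    wx = w @ x                       # weighted sum of neighboring rates
    w_sum = w.sum(axis=1)
    w_sq_sum = (w ** 2).sum(axis=1)
    denom = s * np.sqrt((n * w_sq_sum - w_sum ** 2) / (n - 1))
    return (wx - x_bar * w_sum) / denom

# Toy example: 6 "counties" on a line, neighbors within one step (self included).
rates = np.array([1.0, 1.1, 0.9, 3.0, 3.2, 2.9])
coords = np.arange(6)
weights = (np.abs(coords[:, None] - coords[None, :]) <= 1).astype(float)
print(np.round(local_gi_star(rates, weights), 2))
```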

Of the 3,108 counties in the contiguous United States, 191 were identified as hot spots. Among these, 101 (52.9%) were located in the South.

Earlier research had shown a predominance of hot spots for early-onset CRC mortality among both men and women in the South.

However, the current study of women showed that almost half of these counties were located in the Midwest and the Northeast as well as the South.

Also in the current analysis, about one in every seven women (13.7%) with early-onset CRC resided in hot spot counties.

Race/ethnicity, stage at diagnosis, histopathology, and receipt of first-course therapies also differed significantly (P ≤ .0001) between women residing in hot spot versus non–hot spot counties.

Non-Hispanic Black patients, for example, accounted for 23.7% of early-onset CRC cases in hot spot counties, as compared with 14.3% in non–hot spot counties (P < .0001). The county-level proportion of non-Hispanic Black patients also modestly correlated with hot spot residence (rs = .26; P < .0001).

Race and ethnicity accounted for less than 0.5% of the variation in early-onset CRC survival among women in non–hot spot counties. In hot spot counties, however, this factor explained 1.4% of the variation in early-onset CRC-specific survival among women.
 

Inactivity correlates with hot spot residence

Dr. Holowatyj and colleagues also identified physical inactivity and lower fertility as county-level factors modestly correlated with hot spot residence (rs = .21 and rs = –.23, respectively; P < .01).

Nearly a quarter of adults living in hot spot counties reported no physical activity during their leisure time (24.1% vs. 21.7% in non–hot spot counties; P < .01).

The rate of live births in the last year among women aged 15-50 years was lower in hot spot counties than in non–hot spot counties (4.9% vs. 5.4%; P < .01).
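A county-level correlation of this kind can be computed with a Spearman rank test, as in the brief sketch below. The simulated data stand in for the real county measures, which are not reproduced here.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical county-level data: hot spot membership (1/0) against the share
# of adults reporting no leisure-time physical activity.
rng = np.random.default_rng(0)
hot_spot = rng.integers(0, 2, size=200)
inactivity = 0.20 + 0.03 * hot_spot + rng.normal(0, 0.04, size=200)

rho, p = spearmanr(hot_spot, inactivity)
print(f"r_s = {rho:.2f}, P = {p:.3g}")
```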

Individual- and community-level features overall accounted for different proportions of variance in early-onset CRC survival among women residing in hot spot counties (33.8%) versus non–hot spot counties (34.1%).

In addition to race and ethnicity, age at diagnosis, tumor histology, county-level proportions of the non-Hispanic Black population, women with a live birth in the last year, and annual household income of less than $20,000 all explained greater variance in CRC survival in young women in hot spot counties versus non–hot spot counties.
 

Keep CRC in differential diagnosis

“These individual- and community-level feature differences between hot spot and non–hot spot counties illustrate the importance of understanding how these factors may be contributing to early-onset CRC mortality among women – particularly in hot spot counties,” Dr. Holowatyj said in an interview. “They may provide us with key clues for developing effective strategies to reduce the burden of CRC in young women across the United States.

“Every primary care physician and gastroenterologist, particularly in hot spot counties, should keep CRC in their differential diagnosis, particularly if a patient is presenting with typical signs and symptoms, even if they are not yet of screening age. Early-stage diagnosis increases survival odds because the cancer may be easier to treat.”

Health professionals can also encourage physical activity and a healthy lifestyle, she added.

The authors declared no competing interests. Their research was funded by grants from the federal government and foundations.

SOURCE: Holowatyj AN et al. Clin and Transl Gastroenterol. 2020;11:e00266.


Medical societies waive fees, weigh other options during pandemic

Article Type
Changed
Thu, 08/26/2021 - 15:55

COVID-19’s toll on member facilities recently pushed the American Academy of Sleep Medicine (AASM) to take a sizable gamble.

AASM announced in September that it would waive facility fees at all 2,648 AASM-accredited sleep facilities for 2021.

At $1,800-$2,600 for each facility, that will mean lost revenue of between $4.8 million and $6.9 million, but it’s a risk the academy felt it had to take.

AASM President Kannan Ramar, MBBS, MD, said in an interview that the academy is betting on the future of the field.

An internal survey of members, he said, found that nearly half (46%) of the 551 respondents thought their facility might have to close by the end of the year.

In addition, 66% reported a lower patient volume in the past month, and 36% reported that their practice or facility had to apply for loans or other financial assistance because of COVID-19, AASM said in its press release.

“We are hoping that if we help our members through this, they will be there for our patients,” Dr. Ramar said.

Other medical societies also are weighing options, straddling the line between needing income to provide resources for members and being acutely aware of the financial toll the pandemic is taking, according to one sampling.

As previously reported, primary care practices are projected to lose more than $68,000 in revenue per full-time physician in 2020, after steep drops in office visits and the collection of fees from March to May, according to a study led by researchers in the Blavatnik Institute at Harvard Medical School, Boston.

Those losses were calculated without considering a potential second wave of COVID-19 this year, the authors noted.
 

‘We can survive this’

Although AASM waived fees for its member facilities, individual physician fees have not been reduced so far. But the group is looking for more ways to help lower the economic burden on members, Dr. Ramar said.

“I don’t think we’ve ever been in this situation in the 45 years of the academy. This is a once-in-a-lifetime event for challenges we’re going through,” he said. “The board and the leadership realized that, if we’re going to do something, this is the time to do it.”

In addition to waiving the fees, AASM and the AASM Foundation are offering relief funding to state and regional sleep societies and research award recipients through programs created in response to COVID-19.

Some societies said they are not making changes to their dues or fees, some are forgoing cost-of-living fee increases, and some are waiving registration fees for annual meetings.

The American College of Allergy, Asthma and Immunology (ACAAI) waived most members’ registration fees for its annual meeting in November. Typically, that fee would be $500-$800 per member, plus charges for some premium sessions, Michael Blaiss, MD, ACAAI executive medical director, said.

Dr. Blaiss said in an interview that the college thought offering its 6,000 members essentially 25 free hours of CME would benefit them more than waiving annual membership dues, which are about $425 for physicians in the United States.

If the pandemic stretches through 2021, Dr. Blaiss said, “We can survive this. I’m not worried about that at all.”

But he acknowledged the painful effect on medical societies.

“I don’t think any organization would tell you it’s not having an effect on their income,” he said. “I know it is for us and for virtually any medical organization. A high percentage of income comes from the annual meeting.”

Waiving dues has not come up as a high priority in members’ communications so far, Dr. Blaiss said.

American Academy of Dermatology President Bruce H. Thiers, MD, said in an interview that there will be no cost-of-living increase for 2021 dues, and AAD members can request a reduction in dues, which will be considered on a case-by-case basis.

“We understand that many members will have to make tough financial decisions,” he said.

In addition, AAD, which has more than 20,000 members, is exploring payment options to help members spread out the cost of membership.

ACP extends membership

The American College of Physicians, whose membership cycle starts in July, did not reduce dues but extended membership for its 163,000 members at no cost for 3 months, through September, Phil Masters, MD, ACP’s vice president of membership, said in an interview.

It also expanded its educational offerings related to the pandemic, including webinars on physician wellness and issues regarding telemedicine.

He said expanding educational resources rather than waiving dues was an intentional decision after much discussion because “we’re primarily a services resource organization.”

Membership data are still being calculated, but early indications are that membership is not increasing this year, after annual growth of about 2%-2.5% in prior years, Dr. Masters said. He noted that income is down “by several percent.” Annual membership dues average about $500 for physicians who have been practicing for 10 years.

“We’re well positioned to tolerate the ups and downs,” he said, but he acknowledged that “there’s no question the financial impact has been devastating on some practices.”

Like some other associations, ACP decided to cancel this year’s annual meeting, which had been planned for April. The 2021 annual meeting will be conducted online from April 29 to May 1.

Smaller organizations that rely heavily on income from the annual meeting will be severely challenged the longer the pandemic continues, Dr. Masters said.

The decision is not as simple as whether to reduce or eliminate dues, he noted. Organizations will have to reexamine their missions and structure their fees and offerings according to the needs of members.

“It’s a balance in doing things for the community at large and balancing the need to be sensitive to financial implications,” Dr. Masters said.

This article first appeared on Medscape.com.


Are pregnant women with COVID-19 at greater risk for severe illness?

Article Type
Changed
Thu, 08/26/2021 - 15:55

Issue
OBG Management - 32(12)