Synovial, skin gene expression differences may explain PsA treatment responses


Differences in gene expression between the skin and synovial tissues of individuals with psoriatic arthritis could explain why treatments targeting proinflammatory mechanisms don’t improve joint symptoms in some patients.


A paper published in Annals of the Rheumatic Diseases presents the results of an observational, open-label study involving 27 patients with active psoriatic arthritis, 18 of whom were treated with anti–tumor necrosis factor (anti-TNF) therapies and 9 with the monoclonal antibody ustekinumab (Stelara). Ustekinumab targets the p40 subunit shared by the proinflammatory cytokines interleukin (IL)-12 and IL-23, which are believed to play an important role in skin and nail psoriasis as well as in psoriatic arthritis.

However, while anti–IL-23 antibodies seem to work well to address skin manifestations of psoriasis, they tend to improve joint symptoms only in selected patients.

“The lack of a clear mechanism to explain such divergent responses prompted this study,” said Dr. Alessandra Nerviani, lead author of the study, from Barts and The London School of Medicine & Dentistry.

Participants also had biopsies taken from the synovium – in particular, from joints that were clinically and ultrasonographically active – and from lesional and nonlesional skin for gene expression analysis.

In terms of treatment response, the ustekinumab-treated group had significantly higher erythrocyte sedimentation rates, joint pain scores, and disease activity scores than the anti–TNF-treated group. Psoriasis Area and Severity Index scores were similar in both treatment arms, but significantly more patients in the anti-TNF group met EULAR Disease Activity Score response criteria (70.6% vs. 22.2%).

The gene expression analysis, which assessed the activity of 80 genes related to inflammation in 14 patient samples from synovial tissue, lesional skin, and nonlesional skin, found that patterns of expression in the synovium clustered away from those from skin.



This was particularly the case for genes related to drug targets. The targets for anti-TNF showed similar levels of expression in skin and synovial tissue. However, the targets for ustekinumab – namely IL-23A, IL-23R, and IL-12B – showed higher levels of expression in lesional skin than in nonlesional skin and synovial tissue.

“Interestingly, we observed that, while some patients did express IL-23 cytokines/receptor in both skin and joint, others had discordant expression, that is, active IL-23 pathway in the lesional skin but not in the synovium,” the authors wrote.

When researchers then stratified patients according to how much synovial inflammation they had, they found that patients who had higher scores also showed higher expression of genes for IL-12B and IL-23R, but not IL-23A, despite showing no other major clinical differences.

The authors also examined protein expression levels for IL-23p40, IL-23p19, and IL-23R, and found that the percentage of cells positive for these proteins was not only significantly higher in lesional than in nonlesional skin, but also higher in the synovium of patients with more inflammation.

“Except for the LIKERT patient score, we did not detect other significant correlations between IL-23 axis expression and clinical parameters at baseline, suggesting that patients with comparable disease severity may have, in fact, heterogeneous histopathological features and expression of drug targets within the diseased synovium,” they wrote.


More selective expression of IL-23 in synovium

Commenting on the findings, the authors highlighted that the expression of targets for anti-TNF was much more homogeneous across skin and synovial tissue, whereas the IL-23A/IL-12B/IL-23R genes generally showed higher levels of expression in lesional skin compared with either nonlesional skin or synovium. However, even within the synovium, expression of these genes varied widely, from levels similar to those seen in paired lesional skin to levels well below them.

“It is plausible to speculate that an overall higher presence of IL-23 in the psoriatic skin supports the concept of a generally better response in terms of skin manifestations, including almost complete clearance of psoriatic lesions,” Dr. Nerviani said in an interview. “While, on the other hand, the more selective expression of IL-23 in the synovium, namely in histologically more inflamed synovium characterized by immune cells infiltration, may explain the overall more modest success to meet stringent response criteria in the joints.”

Of particular significance was the observation that IL-12B and IL-23R transcription levels were higher in patients with higher levels of synovial tissue inflammation.

“We confirmed that IL-23 axis expression relates to the synovial histopathology not only in PsA at different stages of the disease, including early treatment-naive patients, but also in the early phase of RA, investigated as disease control,” they wrote.

Dr. Nerviani said the results could inform a more tailored “precision medicine” approach to treating patients with psoriatic arthritis.

“While randomized synovial biopsy–driven clinical trials are now a reality in rheumatoid arthritis, in psoriatic arthritis, these kinds of studies have not been performed yet but may become actual in the future,” she said. “An in-depth characterization of the synovial tissue represents the first essential step towards addressing current unmet clinical needs and, potentially, changing our practice.”

However, she stressed that the study was not powered to test the correlation between the expression level of these pathways in disease tissue and clinical response to treatment.

“Further dedicated clinical trials should be designed to look at the relationship between synovial pathology and molecular characteristics, and response to targeted treatment to address this question,” Dr. Nerviani said.

The study was supported by Queen Mary University of London and the Fondazione Ceschina, and in part by grants from Versus Arthritis. No conflicts of interest were declared.

SOURCE: Nerviani A et al. Ann Rheum Dis. 2020 Nov 26. doi: 10.1136/annrheumdis-2020-218186.



FROM ANNALS OF THE RHEUMATIC DISEASES


Synthetic lethality: Triple combination is a viable strategy for B-cell malignancies


For B-cell malignancies, synthetic lethality is a viable treatment approach, according to preliminary clinical trial data with once-daily oral DTRM-555. The triple combination therapy, DTRM-555, combines a Bruton’s tyrosine kinase (BTK) inhibitor, a mammalian target of rapamycin (mTOR) inhibitor and pomalidomide, an immunomodulatory imide drug (IMiD), according to Anthony R. Mato, MD, in a presentation at the annual meeting of the American Society of Hematology, which was held virtually.

Richter’s transformation, a rare event

Dr. Mato’s phase 1 clinical trial included 13 patients with Richter’s transformation (RT) and 11 with diffuse large B-cell lymphoma (DLBCL). Richter’s transformation, a rare event occurring in 5%-7% of chronic lymphocytic leukemia (CLL) cases, has no clear standard of care and universally poor outcomes (overall survival, 3-12 months) once it becomes refractory to anthracycline-based chemotherapy, according to Dr. Mato.

Despite great progress in treating DLBCL, cure rates with R-CHOP (rituximab, cyclophosphamide, doxorubicin, vincristine, prednisone), the standard of care, are in the 50%-60% range and much lower (30%-40%) with poor-risk features. Furthermore, most (60%-70%) patients receiving autologous stem cell transplant or CAR-T still require additional lines of therapy.

The “synthetic lethality” strategy, which has become a focus of cancer treatment in the past decade, identifies a disease's primary aberrant pathways and its compensatory pathways, then inhibits them together in a manner lethal to cell survival. Preclinical studies have shown that low doses of a BTK inhibitor, an mTOR inhibitor, and an IMiD synergistically kill malignant B cells. DTRM-555 is an optimized, oral, once-daily triplet combination of a novel and clinically differentiated irreversible BTK inhibitor (DTRM-12), everolimus, and pomalidomide, Dr. Mato explained.

Individuals (38% women) included in the trial had a median of 2 (1-10) prior lines of therapy, with a CD20 monoclonal antibody as one of them in all cases, and 83% with R-CHOP. All patients had life expectancy >12 weeks, with 0-1 performance status and adequate organ and hematologic function.

DTRM-12 plasma concentrations, Dr. Mato noted, were unaffected by coadministration with everolimus with or without pomalidomide.

Manageable adverse events

Among adverse events, neutropenia (grade 3-4, 33%/21%) and thrombocytopenia (grade 3-4, 29%/8%) were most common. One patient had grade 4 leukopenia (4%). No patients discontinued treatment because of adverse events, however, and nonhematologic adverse event rates were low, with no grade 4 events. Eight different grade 3 adverse events (atrial fibrillation [with prior history], diarrhea, hyponatremia, pneumonia, pulmonary opportunistic infection, maculopapular rash, acneiform rash, and skin ulceration) were reported, each in one patient. Pharmacokinetic data supported once-daily dosing for DTRM-12, with an estimated half-life of 5-9 hours that was comparable with that of once-daily ibrutinib and longer than that of other agents of the same class. The recommended phase 2 doses going forward were 200 mg for DTRM-12, 5 mg for everolimus, and 2 mg for pomalidomide.

Favorable responses

In the efficacy analysis of 22 evaluable patients (11 in the RT group, 11 in the DLBCL group), there was 1 complete response in the RT group and 2 in the DLBCL group, with partial responses in 4 and 3 patients, respectively, giving overall response rates of 46% in the RT group and 45% in the DLBCL group. Two and four patients, respectively, in the RT and DLBCL groups had stable disease, Dr. Mato said, and most patients (71%) had reductions in lymph node SPD (sum of the product of the diameters), with reductions of 50% or more in 43%.

“Encouraging clinical activity was observed in high-risk, heavily pretreated Richter’s transformation and diffuse large B-cell lymphoma patients,” Dr. Mato concluded. He also noted that the main safety findings were “expected and manageable.”

The session moderator, Chaitra S. Ujjani, MD, of the Seattle Cancer Care Alliance, asked whether the DTRM-555 regimen should be considered definitive therapy in patients who are responding, or whether moving on to cellular therapies or a consolidative approach should be considered.

“If they are responding, it is reasonable to consider consolidating with a cellular therapy at this point in time,” Dr. Mato replied. He did observe, however, that many of the included patients had tried experimental therapies, including cellular therapy. “Without [data from] a much larger patient population and longer-term follow-up, I think that, for responding patients with a durable remission who have a [chimeric antigen receptor] T or transplant option, these, at the least, have to be discussed with them.”

To an additional question as to whether any of the subjects had prior exposure to BTK inhibitors, Dr. Mato responded, “There is a high exposure to BTK inhibitors, and almost universally these patients were progressors. So again, this is supportive of the hypothesis that hitting multiple pathways simultaneously is somewhat different from hitting just BTK by itself, even in the setting of progression.”

A DTRM-555 triple fixed-dose combination tablet is under development, and a double fixed-dose tablet (DTRM-505) is ready for the ongoing phase 2 U.S. study (NCT04030544) among patients with relapsed/refractory CLL or non-Hodgkin lymphoma (RT, DLBCL or transformed follicular lymphoma) with prior exposure to a novel agent.

Dr. Mato disclosed consultancy and research funding relationships with multiple pharmaceutical and biotechnology companies.

SOURCE: Mato AR et al. ASH 2020, Abstract 126.

Meeting/Event
Publications
Topics
Sections
Meeting/Event
Meeting/Event

For B-cell malignancies, synthetic lethality is a viable treatment approach, according to preliminary clinical trial data with once-daily oral DTRM-555. The triple combination therapy, DTRM-555, combines a Bruton’s tyrosine kinase (BTK) inhibitor, a mammalian target of rapamycin (mTOR) inhibitor and pomalidomide, an immunomodulatory imide drug (IMiD), according to Anthony R. Mato, MD, in a presentation at the annual meeting of the American Society of Hematology, which was held virtually.
 

Richter’s transformation, a rare event

Dr. Mato’s phase 1 clinical trial included 13 patients with Richter’s transformation (RT) and 11 with diffuse large B-cell lymphoma (DLBCL). Richter’s transformation, a rare event occurring in 5%-7% of chronic lymphocytic leukemia (CLL) cases, has no clear standard of care and universally poor outcomes (overall survival, 3-12 months) once it becomes refractory to anthracycline-based chemotherapy, according to Dr. Mato.

Despite great progress in treating DLBCL, cure rates with R-CHOP (rituximab, cyclophosphamide, doxorubicin, vincristine, prednisone), the standard of care, are in the 50%-60% range and much lower (30%-40%) with poor-risk features. Furthermore, most (60%-70%) patients receiving autologous stem cell transplant or CAR-T still require additional lines of therapy.

The “synthetic lethality” (SL) strategy, which has become a focus of cancer treatment in the last decade, identifies multiple disease primary aberrant and compensatory pathways and then inhibits them together in a manner lethal to cell survival. Preclinical studies have shown low doses of a BTK inhibitor/mTOR inhibitor/IMiD to synergistically kill malignant B cells. DTRM-555 is an optimized, oral, once-daily triplet combination of a novel and clinically differentiated irreversible BTK inhibitor (DTRM-12), everolimus and pomalidomide, Dr. Mato explained.

Individuals (38% women) included in the trial had a median of 2 (1-10) prior lines of therapy, with a CD20 monoclonal antibody as one of them in all cases, and 83% with R-CHOP. All patients had life expectancy >12 weeks, with 0-1 performance status and adequate organ and hematologic function.

DTRM-12 plasma concentrations, Dr. Mato noted, were unaffected by coadministration with everolimus with or without pomalidomide.
 

Manageable adverse events

Among adverse events, neutropenia (grade 3-4, 33%/21%) and thrombocytopenia (grade 3-4, 29%/8%) were most common. One patient had grade 4 leukopenia (4%). No patients discontinued treatment on account of adverse events, however, and nonhematologic adverse event rates were low, without grade 4 events. Eight different grade 3 adverse events (atrial fibrillation [with prior history], diarrhea, hyponatremia pneumonia, pulmonary opportunistic infection, rash maculopapular, rash acneiform, skin ulceration) were reported, each in one patient. Pharmacokinetic data supported once-daily dosing for DTRM-12, with an estimated half-life of 5-9 hours that was comparable with that of once-daily ibrutinib, and longer than that of other agents of the same class. The recommended phase 2 dose going forward was 200 mg for DTRM-12, 5 mg for everolimus and 2 mg for pomalidomide.
 

Favorable responses

In efficacy analysis for 22 evaluable patients (11 in the RT group, 11 in the DLBCL ), there was 1 complete response in the RT group and 2 in the DLBCL group, with partial responses in 4 and 3, respectively, giving overall response rates of 46% in the RT group and 45% in the DLBCL group. Two and four patients, respectively, in the RT and DLBCL groups, had stable disease, Dr. Mato said, and most patients (71%) had SPD (sum of the product of the diameters) lymph node reductions, with lymph node reductions of 50% or more in 43%.

“Encouraging clinical activity was observed in high-risk, heavily pretreated Richter’s transformation and diffuse large B-cell lymphoma patients,” Dr. Mato concluded. He also noted that the main safety findings were “expected and manageable.”

The session moderator, Chaitra S. Ujjani, MD, of the Seattle Health Care Alliance, asked if the DTRM-555 regimen should be considered definitive therapy in patients who are responding, or if moving on to cellular therapies or a consolidative approach should be considered.

“If they are responding, it is reasonable to consider consolidating with a cellular therapy at this point in time,” Dr. Mato replied. He did observe, however, that many of the included patients had tried experimental therapies, including cellular therapy. “Without [data from] a much larger patient population and longer-term follow-up, I think that, for responding patients with a durable remission who have a [chimeric antigen receptor] T or transplant option, these, at the least, have to be discussed with them.”

To an additional question as to whether any of the subjects had prior exposure to BTK inhibitors, Dr. Mato responded, “There is a high exposure to BTK inhibitors, and almost universally these patients were progressors. So again, this is supportive of the hypothesis that hitting multiple pathways simultaneously is somewhat different from hitting just BTK by itself, even in the setting of progression.”

A DTRM-555 triple fixed-dose combination tablet is under development, and a double fixed-dose tablet (DTRM-505) is ready for the ongoing phase 2 U.S. study (NCT04030544) among patients with relapsed/refractory CLL or non-Hodgkin lymphoma (RT, DLBCL or transformed follicular lymphoma) with prior exposure to a novel agent.

Dr. Mato, disclosed consultancy and research funding relationships with multiple pharmaceutical and biotechnology companies.

SOURCE: Mato AR et al. ASH 2020, Abstract 126.

For B-cell malignancies, synthetic lethality is a viable treatment approach, according to preliminary clinical trial data with once-daily oral DTRM-555. The triple combination therapy, DTRM-555, combines a Bruton’s tyrosine kinase (BTK) inhibitor, a mammalian target of rapamycin (mTOR) inhibitor and pomalidomide, an immunomodulatory imide drug (IMiD), according to Anthony R. Mato, MD, in a presentation at the annual meeting of the American Society of Hematology, which was held virtually.
 

Richter’s transformation, a rare event

Dr. Mato’s phase 1 clinical trial included 13 patients with Richter’s transformation (RT) and 11 with diffuse large B-cell lymphoma (DLBCL). Richter’s transformation, a rare event occurring in 5%-7% of chronic lymphocytic leukemia (CLL) cases, has no clear standard of care and universally poor outcomes (overall survival, 3-12 months) once it becomes refractory to anthracycline-based chemotherapy, according to Dr. Mato.

Despite great progress in treating DLBCL, cure rates with R-CHOP (rituximab, cyclophosphamide, doxorubicin, vincristine, prednisone), the standard of care, are in the 50%-60% range and fall to 30%-40% in patients with poor-risk features. Furthermore, most (60%-70%) patients receiving autologous stem cell transplant or CAR T-cell therapy still require additional lines of therapy.

The “synthetic lethality” (SL) strategy, which has become a focus of cancer treatment in the last decade, identifies a disease’s primary aberrant pathways together with its compensatory pathways and then inhibits them all in a manner lethal to cell survival. Preclinical studies have shown that low doses of a BTK inhibitor, an mTOR inhibitor, and an IMiD synergistically kill malignant B cells. DTRM-555 is an optimized, oral, once-daily triplet combination of a novel, clinically differentiated, irreversible BTK inhibitor (DTRM-12), everolimus, and pomalidomide, Dr. Mato explained.

Individuals included in the trial (38% women) had a median of 2 (range, 1-10) prior lines of therapy; all had received a CD20 monoclonal antibody and 83% had received R-CHOP. All patients had a life expectancy of >12 weeks, a performance status of 0-1, and adequate organ and hematologic function.

DTRM-12 plasma concentrations, Dr. Mato noted, were unaffected by coadministration with everolimus with or without pomalidomide.
 

Manageable adverse events

Among adverse events, neutropenia (grade 3-4, 33%/21%) and thrombocytopenia (grade 3-4, 29%/8%) were most common. One patient had grade 4 leukopenia (4%). No patients discontinued treatment because of adverse events, however, and nonhematologic adverse event rates were low, with no grade 4 events. Eight different grade 3 adverse events (atrial fibrillation [with prior history], diarrhea, hyponatremia, pneumonia, pulmonary opportunistic infection, maculopapular rash, acneiform rash, skin ulceration) were reported, each in one patient.

Pharmacokinetic data supported once-daily dosing for DTRM-12, with an estimated half-life of 5-9 hours, comparable with that of once-daily ibrutinib and longer than that of other agents of the same class. The recommended phase 2 dose going forward was 200 mg for DTRM-12, 5 mg for everolimus, and 2 mg for pomalidomide.
 

Favorable responses

In the efficacy analysis of 22 evaluable patients (11 in the RT group, 11 in the DLBCL group), there was 1 complete response in the RT group and 2 in the DLBCL group, with partial responses in 4 and 3 patients, respectively, giving overall response rates of 46% in the RT group and 45% in the DLBCL group. Two and four patients, respectively, in the RT and DLBCL groups had stable disease, Dr. Mato said, and most patients (71%) had reductions in SPD (sum of the product of the diameters) of lymph nodes, with reductions of 50% or more in 43%.

“Encouraging clinical activity was observed in high-risk, heavily pretreated Richter’s transformation and diffuse large B-cell lymphoma patients,” Dr. Mato concluded. He also noted that the main safety findings were “expected and manageable.”

The session moderator, Chaitra S. Ujjani, MD, of the Seattle Cancer Care Alliance, asked if the DTRM-555 regimen should be considered definitive therapy in patients who are responding, or if moving on to cellular therapies or a consolidative approach should be considered.

“If they are responding, it is reasonable to consider consolidating with a cellular therapy at this point in time,” Dr. Mato replied. He did observe, however, that many of the included patients had tried experimental therapies, including cellular therapy. “Without [data from] a much larger patient population and longer-term follow-up, I think that, for responding patients with a durable remission who have a [chimeric antigen receptor] T or transplant option, these, at the least, have to be discussed with them.”

To an additional question as to whether any of the subjects had prior exposure to BTK inhibitors, Dr. Mato responded, “There is a high exposure to BTK inhibitors, and almost universally these patients were progressors. So again, this is supportive of the hypothesis that hitting multiple pathways simultaneously is somewhat different from hitting just BTK by itself, even in the setting of progression.”

A DTRM-555 triple fixed-dose combination tablet is under development, and a double fixed-dose tablet (DTRM-505) is ready for the ongoing phase 2 U.S. study (NCT04030544) among patients with relapsed/refractory CLL or non-Hodgkin lymphoma (RT, DLBCL or transformed follicular lymphoma) with prior exposure to a novel agent.

Dr. Mato disclosed consultancy and research funding relationships with multiple pharmaceutical and biotechnology companies.

SOURCE: Mato AR et al. ASH 2020, Abstract 126.

FROM ASH 2020

Wearable device clears a first ‘milestone’ in seizure detection


A wrist-worn device that uses machine learning accurately detects different seizure types. The new findings have the potential to revolutionize the management of patients with epilepsy, according to the researchers. “We have set a first benchmark for automatic detection of a variety of epileptic seizures using wearable sensors and deep-learning algorithms. In other words, we have shown for the first time that it’s possible to do this,” said study investigator Jianbin Tang, MA, data science project lead, IBM Research Australia, Victoria.

The findings were presented at the American Epilepsy Society’s annual meeting, held online this year because of the COVID-19 pandemic.

Accurate seizure monitoring is important for risk assessment, injury prevention, and evaluation of treatment response. Currently, video EEG is the gold standard for seizure detection, but it requires a hospital stay, is often costly, and can be stigmatizing, said Mr. Tang.
 

An advance in detecting seizure types

Recent advances in non-EEG wearable devices show promise in detecting generalized onset tonic-clonic and focal to bilateral tonic-clonic seizures, but it’s not clear if they have the ability to detect other seizure types. “We hope to fill this gap by expanding wearable seizure detection to additional seizure types,” said Mr. Tang.

Seizure tracking outside the hospital setting largely “relies on manually annotated family and patient reports, which often can be unreliable due to missed seizures and problems recalling seizures,” he said.

The study included 75 children (44% were female; mean age was 11.1 years) admitted to a long-term EEG monitoring unit at a single center for a 24-hour stay. Patients wore the detector on the ankle or wrist. The device continuously collected data on functions such as sweating, heart rate, movement, and temperature.

With part of the dataset, researchers trained deep-learning algorithms to automatically detect seizure segments. They then validated the performance of the detection algorithms on the remainder of the dataset.
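The report does not say how the dataset was partitioned; a standard precaution in wearable-seizure work is to split by patient, so that no individual’s recordings appear in both the training and validation sets (otherwise the model can memorize a patient’s signal signature rather than learn seizures). A minimal sketch in Python, with hypothetical toy data; `subject_wise_split` and `records` are illustrative names, not from the study:

```python
import random

def subject_wise_split(records, val_fraction=0.3, seed=42):
    """Split (patient_id, segment) records so that no patient
    appears in both the training and validation sets."""
    patients = sorted({pid for pid, _ in records})
    rng = random.Random(seed)           # fixed seed for reproducibility
    rng.shuffle(patients)
    n_val = max(1, int(len(patients) * val_fraction))
    val_ids = set(patients[:n_val])
    train = [r for r in records if r[0] not in val_ids]
    val = [r for r in records if r[0] in val_ids]
    return train, val

# Hypothetical toy data: 10 patients, 3 signal segments each
records = [(pid, f"seg{i}") for pid in range(10) for i in range(3)]
train, val = subject_wise_split(records)
print(len(train), len(val))  # → 21 9
```

Splitting by patient rather than by segment is what makes the validation score an honest estimate of performance on unseen individuals.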

The analysis was based on data from 722 epileptic seizures of all types including focal and generalized, motor and nonmotor. Seizures occurred throughout the day and during the night while patients were awake or asleep.

When a seizure is detected, the system triggers a real-time alert and will store the information about the detected seizure in a repository, said Mr. Tang.

The signals were initially stored in the wristband and then securely uploaded to the cloud. From there, the signal files were downloaded by the investigators for analysis and interpretation. All data were entirely anonymized and deidentified. Researchers used the area under the receiver operating characteristic curve (AUC-ROC) to assess performance.
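AUC-ROC can be computed directly from its probabilistic definition: it is the chance that a randomly chosen positive (seizure) segment receives a higher score than a randomly chosen negative one, with ties counting half, so 0.5 corresponds to chance and 1.0 to perfect ranking. A minimal reference computation (not the authors’ implementation; the toy labels and scores below are invented):

```python
def auc_roc(labels, scores):
    """AUC-ROC via the Mann-Whitney U statistic: the probability that
    a randomly chosen positive outranks a randomly chosen negative,
    with ties counted as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative")
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Toy example: higher scores should mark seizure segments (label 1)
labels = [1, 1, 0, 0, 1, 0]
scores = [0.9, 0.6, 0.4, 0.7, 0.8, 0.1]
print(round(auc_roc(labels, scores), 3))  # → 0.889
```

Under this reading, the study’s reported 67.59% AUC-ROC means a randomly chosen seizure segment outscored a randomly chosen nonseizure segment about two-thirds of the time.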

“Our best performing detection models reach an AUC-ROC of 67.59%, which represents a decent performance level,” said Mr. Tang. “There certainly is room for performance improvement and we are already working on this,” he added.  

The device performed “better than chance,” which is a “standard technical term” in the field of machine learning and is “the first hurdle any machine-learning model needs to take to be considered useful.” The investigators noted that such automatic seizure detection “is feasible across a broad spectrum of epileptic seizure types,” said Mr. Tang. “This is a first and has not been shown before.”

The study suggests that the noninvasive wearable device could be used at home, at school, and in other everyday settings outside the clinic. “This could one day provide patients, caregivers, and clinicians with reliable seizure reports,” said Mr. Tang.

He said he believes the device might be especially useful in detecting frequent or subtle seizures, which are easy to miss. Patients requiring medication evaluation and rescue medication and those at risk of status epilepticus may be good candidates.

The researchers don’t expect wearable technology to totally replace EEG but see it as “a useful complementary tool to track seizures continuously at times or in settings where EEG monitoring is not available,” said Mr. Tang.

‘Important milestone’

Commenting on the research, Benjamin H. Brinkmann, PhD, associate professor of neurology at the Mayo Clinic in Rochester, Minn., said the investigators “have done very good work applying state of the art machine learning techniques” to the “important problem” of accurately detecting seizures.

Dr. Brinkmann is part of the Epilepsy Foundation–sponsored “My Seizure Gauge” project that’s evaluating various wearable devices, including the Empatica E4 wristband and the Fitbit Charge 3, to determine what measurements are needed for reliable seizure forecasting.

“Previously, no one knew whether seizure prediction was possible with these devices, and the fact that this group was able to achieve ‘better-than-chance’ prediction accuracy is an important milestone.”

However, he emphasized that there is still a great deal of work to be done to determine, for example, if seizure prediction with these devices can be accurate enough to be clinically useful. “For example, if the system generates too many false-positive predictions, patients won’t use it.”

In addition, the findings need to be replicated and recordings extended to 6 months or more to determine whether they are helpful to patients long term and in the home environment, said Dr. Brinkmann.

The investigators and Dr. Brinkmann have disclosed having no relevant financial relationships.

A version of this article originally appeared on Medscape.com.

Issue
Neurology Reviews 29(1)

FROM AES 2020

Publish date: December 11, 2020

RxPONDER: Even more women may forgo chemo for breast cancer


More women with early-stage breast cancer may safely forgo chemotherapy, suggests an interim analysis of the large-scale phase 3 RxPONDER trial, presented at the San Antonio Breast Cancer Symposium 2020.

The investigators reported that adding chemotherapy to endocrine therapy did not improve outcomes for postmenopausal women with low-risk, node-positive, hormone receptor–positive (HR+), HER2-negative (HER2–) breast cancer in comparison with endocrine therapy alone.

These results are akin to those from the TAILORx trial. The results of that trial were first presented in 2018 and have changed practice for women with early-stage disease who have no lymph node involvement.

Clinicians celebrated the new results for women with lymph node–positive disease.

“RxPonder: practice changing!!!” tweeted meeting attendee Sarah Sammons, MD, Duke Cancer Center, Durham, N.C.

“Data from RxPonder are the most clinically important this year at @SABCSSanAntonio,” tweeted Hal Burstein, MD, Dana Farber Cancer Institute, Boston, who was not involved in the study.

“This will save tens of thousands of women the time, expense, and potentially harmful side effects that can be associated with chemotherapy infusions,” asserted study lead author Kevin Kalinsky, MD, Winship Cancer Institute of Emory University, Atlanta, during a meeting press conference.

But the trial, run by the SWOG Cancer Research Network, was not without controversy.

That’s because the trial also included premenopausal women whose disease characteristics were the same and who were found to have benefited from chemotherapy.

It was not clear whether the benefit was from chemotherapy’s cytotoxicity or its endocrine effects/ovarian suppression (which limits the production of estrogen, a breast cell stimulant) in these young women. But multiple experts asserted that the effect was very likely from ovarian suppression.

“There are less toxic ways than chemo to suppress ovarian function,” tweeted Tatiana Prowell, MD, Johns Hopkins University, Baltimore, who is not a study investigator.

Some experts strongly doubted the findings in premenopausal women.

“I hate to come away with the message that all [low-risk, node-positive] premenopausal patients should get chemotherapy,” summarized C. Kent Osborne, MD, Baylor College of Medicine, Houston, who is codirector of SABCS and was not involved in the study.

RxPONDER will follow patients for 15 years, so additional data and insights will follow, observed SWOG in a press statement.
 

Women had limited positive nodes

RxPONDER, or SWOG S1007, involved more than 5,000 women who had HR+, HER2– breast cancer with involvement of one to three lymph nodes. The patients’ recurrence score was ≤25 on the 21-gene expression assay (Oncotype DX), which is characterized as low risk.

Approximately 20% of U.S. women with nonmetastatic HR+, HER2– breast cancer present with involvement of one to three lymph nodes, Dr. Kalinsky noted.

Study participants were randomly assigned to receive either standard chemotherapy plus endocrine therapy or endocrine therapy alone. Follow-up was for a median of 5 years before the current preplanned analysis.

Over a median follow-up of 5.1 years, there were 447 observed invasive disease-free survival (IDFS) events, the primary endpoint, which is 54% of the expected number at final analysis.
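As a back-of-envelope check (this implied target is an inference from the reported figures, not a number stated in the trial report), 447 observed events at 54% of the expected final count implies a final-analysis target of roughly 447 / 0.54 events:

```python
# Hypothetical interim-analysis arithmetic inferred from the reported figures:
# 447 observed IDFS events said to be 54% of the number expected at final
# analysis implies a final target of about 447 / 0.54 events.
observed_events = 447
information_fraction = 0.54
implied_final_events = observed_events / information_fraction
print(round(implied_final_events))  # → 828
```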

Across the whole cohort, adding chemotherapy to endocrine therapy was associated with a significant improvement in IDFS, with a 5-year rate of 92.4% vs. 91.0% for endocrine therapy alone (P = .026).

Among the postmenopausal women, no such improvement was seen. The 5-year IDFS rate was 91.6% with chemotherapy plus endocrine therapy and 91.9% with endocrine therapy alone (P = .82).

Among premenopausal women, there was improvement in IDFS. The 5-year rate was 94.2% with chemotherapy plus endocrine therapy and 89.0% for endocrine therapy alone (P = .0004).

These differences were reflected in the results for overall survival. For postmenopausal women, there was a nonsignificant difference in 5-year overall survival rates (96.2% vs. 96.1%).

On the other hand, for premenopausal women, there was a significant difference in 5-year overall survival rates (98.6% vs. 97.3%; P = .032).

Stratifying patients by recurrence score, 0-13 versus 14-25, and by involvement of one versus two to three nodes did not have a major impact on the results, said Dr. Kalinsky, who also noted that future analyses will include quality of life and other outcomes.

More about endocrine therapy in RxPONDER

Dr. Osborne said that premenopausal women in RxPONDER were “nearly always” prescribed tamoxifen.

However, he observed that the current standard approach to treatment in this age group would be ovarian suppression plus either an aromatase inhibitor or tamoxifen, “both of which have been shown to be superior to tamoxifen alone in this subgroup.

“Since the adjuvant chemotherapy causes ovarian suppression in many premenopausal patients,” he said, “these patients then, in fact, received ovarian suppression plus tamoxifen,” rather than tamoxifen alone for the group that did not receive chemotherapy.

Dr. Osborne asked a question that came up again and again during the postpresentation discussion: “Is the difference in outcome in this subset due to the endocrine effects of chemotherapy? Unfortunately, we may never know the answer to this question,” he said.

Dr. Kalinsky replied that whether the difference in benefit of chemotherapy in premenopausal women “was a direct benefit, meaning that there’s something about the biology difference” between tumors in premenopausal versus postmenopausal women, “or whether this was an indirect effect, meaning impacting rates of amenorrhea... is not specifically how this study was designed.”

However, an exploratory landmark analysis at 6 months suggested that the use of ovarian suppression with endocrine therapy did not have an effect on outcomes.

Dr. Osborne said he is nevertheless “still skeptical that chemotherapy works differently in premenopausal women. Until we show that it’s not an endocrine effect ... I just can’t imagine why that group of patients, even the ones with very low Oncotype [score], would have a different response to chemotherapy.”

He added: “If I can think of a rationale ... I would believe it, but right now, I’m a little bit skeptical.”

Virginia Kaklamani, MD, of the University of Texas Health San Antonio Cancer Center, San Antonio, who is a meeting codirector, said she wanted to “second that.

“I honestly think that this is an OFS [ovarian function suppression effect] that we are seeing. We have several clinical trials that have been done looking at ovarian function suppression versus not ... showing that [it] can help as much as chemotherapy.”

Dr. Kaklamani continued: “Unfortunately, the arms to those trials were not perfect for now, and this is going to be an unanswered question until we have a large trial comparing OFS to chemotherapy.”

The study was sponsored by the National Cancer Institute, the Susan G. Komen for the Cure Research Program, the Hope Foundation for Cancer Research, the Breast Cancer Research Foundation, and Exact Sciences. Dr. Kalinsky, Dr. Osborne, and Dr. Kaklamani report financial ties to multiple pharmaceutical companies.

This article first appeared on Medscape.com.

Meeting/Event
Publications
Topics
Sections
Meeting/Event
Meeting/Event

More women with early-stage breast cancer may safely forgo chemotherapy, suggests an interim analysis of the large-scale phase 3 RxPONDER trial, presented at the San Antonio Breast Cancer Symposium 2020.

The investigators reported that adding chemotherapy to endocrine therapy did not improve outcomes for postmenopausal women with low-risk, node-positive, hormone receptor–positive (HR+), HER2-negative (HER2–) breast cancer in comparison with endocrine therapy alone.

These results are akin to those from the TAILORx trial. The results of that trial were first presented in 2018 and have changed practice for women with early-stage disease who have no lymph node involvement.

Clinicians celebrated the new results for women with lymph node–positive disease.

“RxPonder: practice changing!!!” tweeted meeting attendee Sarah Sammons, MD, Duke Cancer Center, Durham, N.C.

“Data from RxPonder are the most clinically important this year at @SABCSSanAntonio,” tweeted Hal Burstein, MD, Dana Farber Cancer Institute, Boston, who was not involved in the study.

“This will save tens of thousands of women the time, expense, and potentially harmful side effects that can be associated with chemotherapy infusions,” asserted study lead author Kevin Kalinsky, MD, Winship Cancer Institute of Emory University, Atlanta, during a meeting press conference.

But the trial, run by the SWOG Cancer Research Network, was not without controversy.

That’s because the trial also included premenopausal women whose disease characteristics were the same and who were found to have benefited from chemotherapy.

It was not clear whether the benefit was from chemotherapy’s cytotoxicity or its endocrine effects/ovarian suppression (which limits the production of estrogen, a breast cell stimulant) in these young women. But multiple experts asserted that the effect was very likely from ovarian suppression.

“There are less toxic ways than chemo to suppress ovarian function,” tweeted Tatiana Prowell, MD, Johns Hopkins University, Baltimore, who is not a study investigator.

Some experts strongly doubted the findings in premenopausal women.

“I hate to come away with the message that all [low-risk, node-positive] premenopausal patients should get chemotherapy,” summarized C. Kent Osborne, MD, Baylor College of Medicine, Houston, who is codirector of SABCS and was not involved in the study.

RxPONDER will follow patients for 15 years, so additional data and insights will follow, SWOG observed in a press statement.
 

Women had limited positive nodes

RxPONDER, or SWOG S1007, involved more than 5,000 women who had HR+, HER2– breast cancer with involvement of one to three lymph nodes. The patients’ recurrence score was ≤25 on the 21-gene expression assay (Oncotype DX), which is characterized as low risk.

Approximately 20% of U.S. women with nonmetastatic HR+, HER2– breast cancer present with involvement of one to three lymph nodes, Dr. Kalinsky noted.

Study participants were randomly assigned to receive either standard chemotherapy plus endocrine therapy or endocrine therapy alone. Follow-up was for a median of 5 years before the current preplanned analysis.

Over a median follow-up of 5.1 years, there were 447 observed invasive disease-free survival (IDFS) events, the primary endpoint, which is 54% of the expected number at final analysis.

Across the whole cohort, adding chemotherapy to endocrine therapy was associated with a significant improvement in IDFS, with a 5-year rate of 92.4% vs. 91.0% for endocrine therapy alone (P = .026).

Among the postmenopausal women, no such improvement was seen. The 5-year IDFS rate was 91.6% with chemotherapy plus endocrine therapy and 91.9% with endocrine therapy alone (P = .82).

Among premenopausal women, there was improvement in IDFS. The 5-year rate was 94.2% with chemotherapy plus endocrine therapy and 89.0% for endocrine therapy alone (P = .0004).

These differences were reflected in the results for overall survival. For postmenopausal women, there was a nonsignificant difference in 5-year overall survival rates (96.2% vs. 96.1%).

On the other hand, for premenopausal women, there was a significant difference in 5-year overall survival rates (98.6% vs. 97.3%; P = .032).

Stratifying patients by recurrence score (0-13 vs. 14-25) and by number of involved nodes (one vs. two to three) did not have a major impact on the results, said Dr. Kalinsky, who also noted that future analyses will include quality of life and other outcomes.

More about endocrine therapy in RxPONDER

Dr. Osborne said that premenopausal women in RxPONDER were “nearly always” prescribed tamoxifen.

However, he observed that the current standard approach to treatment in this age group would be ovarian suppression plus either an aromatase inhibitor or tamoxifen, “both of which have been shown to be superior to tamoxifen alone in this subgroup.

“Since the adjuvant chemotherapy causes ovarian suppression in many premenopausal patients,” he said, “these patients then, in fact, received ovarian suppression plus tamoxifen,” rather than tamoxifen alone for the group that did not receive chemotherapy.

Dr. Osborne asked a question that came up again and again during the postpresentation discussion: “Is the difference in outcome in this subset due to the endocrine effects of chemotherapy? Unfortunately, we may never know the answer to this question,” he said.

Dr. Kalinsky replied that whether the difference in benefit of chemotherapy in premenopausal women “was a direct benefit, meaning that there’s something about the biology difference” between tumors in premenopausal versus postmenopausal women, “or whether this was an indirect effect, meaning impacting rates of amenorrhea... is not specifically how this study was designed.”

However, an exploratory landmark analysis at 6 months suggested that the use of ovarian suppression with endocrine therapy did not have an effect on outcomes.

Dr. Osborne said he is nevertheless “still skeptical that chemotherapy works differently in premenopausal women. Until we show that it’s not an endocrine effect ... I just can’t imagine why that group of patients, even the ones with very low Oncotype [score], would have a different response to chemotherapy.”

He added: “If I can think of a rationale ... I would believe it, but right now, I’m a little bit skeptical.”

Virginia Kaklamani, MD, of the University of Texas Health San Antonio Cancer Center, San Antonio, who is a meeting codirector, said she wanted to “second that.

“I honestly think that this is an OFS [ovarian function suppression effect] that we are seeing. We have several clinical trials that have been done looking at ovarian function suppression versus not ... showing that [it] can help as much as chemotherapy.”

Dr. Kaklamani continued: “Unfortunately, the arms to those trials were not perfect for now, and this is going to be an unanswered question until we have a large trial comparing OFS to chemotherapy.”

The study was sponsored by the National Cancer Institute, the Susan G. Komen for the Cure Research Program, the Hope Foundation for Cancer Research, the Breast Cancer Research Foundation, and Exact Sciences. Dr. Kalinsky, Dr. Osborne, and Dr. Kaklamani report financial ties to multiple pharmaceutical companies.

This article first appeared on Medscape.com.

Article Source

FROM SABCS 2020

COVID-19 vaccines: Preparing for patient questions


With U.S. approval of one coronavirus vaccine likely imminent and approval of a second one expected soon after, physicians will likely be deluged with questions. Public attitudes about the vaccines vary by demographics, with a recent poll showing that men and older adults are more likely to choose vaccination, and women and people of color evincing more wariness.

Although the reasons for reluctance may vary, questions from patients will likely be similar. Some are related to the “warp speed” language about the vaccines. Other concerns arise from the fact that the platform – mRNA – has not been used in human vaccines before. And as with any vaccine, there are rumors and false claims making the rounds on social media.

In anticipation of the most common questions physicians may encounter, two experts, Krutika Kuppalli, MD, assistant professor of medicine in the division of infectious diseases at the Medical University of South Carolina, Charleston, and Angela Rasmussen, PhD, virologist and nonresident affiliate at Georgetown University’s Center for Global Health Science and Security, Washington, talked in an interview about what clinicians can expect and what evidence-based – as well as compassionate – answers might look like.
 

Q: Will this vaccine give me COVID-19?

“There is not an intact virus in there,” Dr. Rasmussen said. The mRNA-based vaccines cannot cause COVID-19 because they don’t use any part of the coronavirus itself. Instead, the Moderna and Pfizer vaccines contain manufactured mRNA molecules that carry the instructions for building the virus’ spike protein. After vaccine administration, the recipient’s own cells take up this mRNA, use it to build this bit of protein, and display it on their surfaces. The foreign protein flag triggers the immune system response.

The mRNA does not enter the cell nucleus or interact with the recipient’s DNA. And because it’s so fragile, it degrades quite quickly. To keep that from happening before cell entry, the mRNAs are cushioned in protective fats.

Q: Was this vaccine made too quickly?

“People have been working on this platform for 30 years, so it’s not that this is brand new,” Dr. Kuppalli said.

Researchers began working on mRNA vaccines in the 1990s. Technological developments in the last decade have made their use feasible, and the vaccines have been tested in animals against many viral diseases. The mRNA vaccines are attractive because they’re expected to be safe and easily manufactured from common materials, and that is what has been seen in the COVID-19 pandemic, the Centers for Disease Control and Prevention says on its website. Design of the spike protein mRNA component began as soon as the viral genome became available in January.

Usually, rolling out a vaccine takes years, so less than a year under a program called Operation Warp Speed can seem like moving too fast, Dr. Rasmussen acknowledged. “The name has given people the impression that by going at warp speed, we’re cutting all the corners. [But] the reality is that Operation Warp Speed is mostly for manufacturing and distribution.”

What underlies the speed is a restructuring of the normal vaccine development process, Dr. Kuppalli said. The same phases of development – animal testing, a small initial human phase, a second for safety testing, a third large phase for efficacy – were all conducted as for any vaccine. But in this case, some phases were completed in parallel, rather than sequentially. This approach has proved so successful that there is already talk about making it the model for developing future vaccines.

Two other factors contributed to the speed, said Dr. Kuppalli and Dr. Rasmussen. First, gearing up production can slow a rollout, but with these vaccines, companies ramped up production even before anyone knew if the vaccines would work – the “warp speed” part. The second factor has been the large number of cases, making exposures more likely and thus accelerating the results of the efficacy trials. “There is so much COVID being transmitted everywhere in the United States that it did not take long to hit the threshold of events to read out phase 3,” Dr. Rasmussen said.

Q: This vaccine has never been used in humans. How do we know it’s safe?

The Pfizer phase 3 trial included more than 43,000 people, and Moderna’s had more than 30,000. The first humans received mRNA-based COVID-19 vaccines in March. The most common adverse events emerge right after a vaccination, Dr. Kuppalli said.

As with any vaccine that gains approval, monitoring will continue.

UK health officials have reported that two health care workers vaccinated in the initial rollout of the Pfizer vaccine had what seems to have been a severe allergic response. Both recipients had a history of anaphylactic allergic responses and carried EpiPens, and both recovered. During the trial, allergic reaction rates were 0.63% in the vaccine group and 0.51% in the placebo group.

As a result of the two reactions, UK regulators are now recommending that patients with a history of severe allergies not receive the vaccine at the current time.

Q: What are the likely side effects?

So far, the most common side effects are pain at the injection site and an achy, flu-like feeling, Dr. Kuppalli said. More severe reactions have been reported, but were not common in the trials.

Dr. Rasmussen noted that the common side effects are a good sign, and signal that the recipient is generating “a robust immune response.”

“Everybody I’ve talked to who’s had the response has said they would go through it again,” Dr. Kuppalli said. “I definitely plan on lining up and being one of the first people to get the vaccine.”

Q: I already had COVID-19 or had a positive antibody test. Do I still need to get the vaccine?

Dr. Rasmussen said that there are “too many unknowns” to say if a history of COVID-19 would make a difference. “We don’t know how long neutralizing antibodies last” after infection, she said. “What we know is that the vaccine tends to produce antibody titers towards the higher end of the spectrum,” suggesting better immunity with vaccination than after natural infection.

Q: Can patients of color feel safe getting the vaccine?

“People of color might be understandably reluctant to take a vaccine that was developed in a way that appears to be faster [than past development],” said Dr. Rasmussen. She said physicians should acknowledge and understand the history that has led them to feel that way, “everything from Tuskegee to Henrietta Lacks to today.”

Empathy is key, and “providers should meet patients where they are and not condescend to them.”

Dr. Kuppalli agreed. “Clinicians really need to work on trying to strip away their biases.”

Thus far there are no safety signals that differ by race or ethnicity, according to the companies. The Pfizer phase 3 trial enrolled just over 9% Black participants, 0.5% Native American/Alaska Native, 0.2% Native Hawaiian/Pacific Islander, 2.3% multiracial participants, and 28% Hispanic/Latinx. For its part, Moderna says that approximately 37% of participants in its phase 3 trial come from communities of color.

Q: What about children and pregnant women?

Although the trials included participants from many different age groups and backgrounds, children and pregnant or lactating women were not among them. Pfizer gained approval in October to include participants as young as age 12 years, and a Moderna spokesperson said in an interview that the company planned pediatric inclusion at the end of 2020, pending approval.

“Unfortunately, we don’t have data on pregnant and lactating women,” Dr. Kuppalli said. She said she hopes that public health organizations such as the CDC will address that in the coming weeks. Dr. Rasmussen called the lack of data in pregnant women and children “a big oversight.”

Dr. Rasmussen has disclosed no relevant financial relationships. Dr. Kuppalli is a consultant with GlaxoSmithKline.

A version of this article originally appeared on Medscape.com.

Issue
Neurology Reviews 29(2)

With U.S. approval of one coronavirus vaccine likely imminent and approval of a second one expected soon after, physicians will likely be deluged with questions. Public attitudes about the vaccines vary by demographics, with a recent poll showing that men and older adults are more likely to choose vaccination, and women and people of color evincing more wariness.

Although the reasons for reluctance may vary, questions from patient will likely be similar. Some are related to the “warp speed” language about the vaccines. Other concerns arise from the fact that the platform – mRNA – has not been used in human vaccines before. And as with any vaccine, there are rumors and false claims making the rounds on social media.

In anticipation of the most common questions physicians may encounter, two experts, Krutika Kuppalli, MD, assistant professor of medicine in the division of infectious diseases at the Medical University of South Carolina, Charleston, and Angela Rasmussen, PhD, virologist and nonresident affiliate at Georgetown University’s Center for Global Health Science and Security, Washington, talked in an interview about what clinicians can expect and what evidence-based – as well as compassionate – answers might look like.
 

Q: Will this vaccine give me COVID-19?

“There is not an intact virus in there,” Dr. Rasmussen said. The mRNA-based vaccines cannot cause COVID-19 because they don’t use any part of the coronavirus itself. Instead, the Moderna and Pfizer vaccines contain manufactured mRNA molecules that carry the instructions for building the virus’ spike protein. After vaccine administration, the recipient’s own cells take up this mRNA, use it to build this bit of protein, and display it on their surfaces. The foreign protein flag triggers the immune system response.

The mRNA does not enter the cell nucleus or interact with the recipient’s DNA. And because it’s so fragile, it degrades quite quickly. To keep that from happening before cell entry, the mRNAs are cushioned in protective fats.

Q: Was this vaccine made too quickly?

“People have been working on this platform for 30 years, so it’s not that this is brand new,” Dr. Kuppalli said.

Researchers began working on mRNA vaccines in the 1990s. Technological developments in the last decade have meant that their use has become feasible, and they have been tested in animals against many viral diseases. The mRNA vaccines are attractive because they’re expected to be safe and easily manufactured from common materials. That’s what we’ve seen in the COVID-19 pandemic, the  Centers for Disease Control and Prevention says on its website. Design of the spike protein mRNA component began as soon as the viral genome became available in January.

Usually, rolling out a vaccine takes years, so less than a year under a program called Operation Warp Speed can seem like moving too fast, Dr. Rasmussen acknowledged. “The name has given people the impression that by going at warp speed, we’re cutting all the corners. [But] the reality is that Operation Warp Speed is mostly for manufacturing and distribution.”

What underlies the speed is a restructuring of the normal vaccine development process, Dr. Kuppalli said. The same phases of development – animal testing, a small initial human phase, a second for safety testing, a third large phase for efficacy – were all conducted as for any vaccine. But in this case, some phases were completed in parallel, rather than sequentially. This approach has proved so successful that there is already talk about making it the model for developing future vaccines.

Two other factors contributed to the speed, said Dr. Kuppalli and Dr. Rasmussen. First, gearing up production can slow a rollout, but with these vaccines, companies ramped up production even before anyone knew if the vaccines would work – the “warp speed” part. The second factor has been the large number of cases, making exposures more likely and thus accelerating the results of the efficacy trials. “There is so much COVID being transmitted everywhere in the United States that it did not take long to hit the threshold of events to read out phase 3,” Dr. Rasmussen said.

 

 

Q: This vaccine has never been used in humans. How do we know it’s safe?

The Pfizer phase 3 trial included more than 43,000 people, and Moderna’s had more than 30,000. The first humans received mRNA-based COVID-19 vaccines in March. The most common adverse events emerge right after a vaccination, Dr. Kuppalli said.

As with any vaccine that gains approval, monitoring will continue.

UK health officials have reported that two health care workers vaccinated in the initial rollout of the Pfizer vaccine had what seems to have been a severe allergic response. Both recipients had a history of anaphylactic allergic responses and carried EpiPens, and both recovered. During the trial, allergic reaction rates were 0.63% in the vaccine group and 0.51% in the placebo group.

As a result of the two reactions, UK regulators are now recommending that patients with a history of severe allergies not receive the vaccine at the current time.

Q: What are the likely side effects?

So far, the most common side effects are pain at the injection site and an achy, flu-like feeling, Dr. Kuppalli said. More severe reactions have been reported, but were not common in the trials.

Dr. Rasmussen noted that the common side effects are a good sign, and signal that the recipient is generating “a robust immune response.”

“Everybody I’ve talked to who’s had the response has said they would go through it again,” Dr. Kruppalli said. “I definitely plan on lining up and being one of the first people to get the vaccine.”

Q: I already had COVID-19 or had a positive antibody test. Do I still need to get the vaccine?

Dr. Rasmussen said that there are “too many unknowns” to say if a history of COVID-19 would make a difference. “We don’t know how long neutralizing antibodies last” after infection, she said. “What we know is that the vaccine tends to produce antibody titers towards the higher end of the spectrum,” suggesting better immunity with vaccination than after natural infection.

Q: Can patients of color feel safe getting the vaccine?

“People of color might be understandably reluctant to take a vaccine that was developed in a way that appears to be faster [than past development],” said Dr. Rasmussen. She said physicians should acknowledge and understand the history that has led them to feel that way, “everything from Tuskegee to Henrietta Lacks to today.”

Empathy is key, and “providers should meet patients where they are and not condescend to them.”

Dr. Kuppalli agreed. “Clinicians really need to work on trying to strip away their biases.”

Thus far there are no safety signals that differ by race or ethnicity, according to the companies. The Pfizer phase 3 trial enrolled just over 9% Black participants, 0.5% Native American/Alaska Native, 0.2% Native Hawaiian/Pacific Islander, 2.3% multiracial participants, and 28% Hispanic/Latinx. For its part, Moderna says that approximately 37% of participants in its phase 3 trial come from communities of color.

Q: What about children and pregnant women?

Although the trials included participants from many different age groups and backgrounds, children and pregnant or lactating women were not among them. Pfizer gained approval in October to include participants as young as age 12 years, and a Moderna spokesperson said in an interview that the company planned pediatric inclusion at the end of 2020, pending approval.

“Unfortunately, we don’t have data on pregnant and lactating women,” Dr. Kuppalli said. She said she hopes that public health organizations such as the CDC will address that in the coming weeks. Dr. Rasmussen called the lack of data in pregnant women and children “a big oversight.”

Dr. Rasmussen has disclosed no relevant financial relationships. Dr. Kuppalli is a consultant with GlaxoSmithKline.

A version of this article originally appeared on Medscape.com.

With U.S. approval of one coronavirus vaccine likely imminent and approval of a second one expected soon after, physicians will likely be deluged with questions. Public attitudes about the vaccines vary by demographics, with a recent poll showing that men and older adults are more likely to choose vaccination, and women and people of color evincing more wariness.

Although the reasons for reluctance may vary, questions from patient will likely be similar. Some are related to the “warp speed” language about the vaccines. Other concerns arise from the fact that the platform – mRNA – has not been used in human vaccines before. And as with any vaccine, there are rumors and false claims making the rounds on social media.

In anticipation of the most common questions physicians may encounter, two experts, Krutika Kuppalli, MD, assistant professor of medicine in the division of infectious diseases at the Medical University of South Carolina, Charleston, and Angela Rasmussen, PhD, virologist and nonresident affiliate at Georgetown University’s Center for Global Health Science and Security, Washington, talked in an interview about what clinicians can expect and what evidence-based – as well as compassionate – answers might look like.
 

Q: Will this vaccine give me COVID-19?

“There is not an intact virus in there,” Dr. Rasmussen said. The mRNA-based vaccines cannot cause COVID-19 because they don’t use any part of the coronavirus itself. Instead, the Moderna and Pfizer vaccines contain manufactured mRNA molecules that carry the instructions for building the virus’ spike protein. After vaccine administration, the recipient’s own cells take up this mRNA, use it to build this bit of protein, and display it on their surfaces. The foreign protein flag triggers the immune system response.

The mRNA does not enter the cell nucleus or interact with the recipient’s DNA. And because it’s so fragile, it degrades quite quickly. To keep that from happening before cell entry, the mRNAs are cushioned in protective fats.

Q: Was this vaccine made too quickly?

“People have been working on this platform for 30 years, so it’s not that this is brand new,” Dr. Kuppalli said.

Researchers began working on mRNA vaccines in the 1990s. Technological developments in the last decade have meant that their use has become feasible, and they have been tested in animals against many viral diseases. The mRNA vaccines are attractive because they’re expected to be safe and easily manufactured from common materials. That’s what we’ve seen in the COVID-19 pandemic, the  Centers for Disease Control and Prevention says on its website. Design of the spike protein mRNA component began as soon as the viral genome became available in January.

Usually, rolling out a vaccine takes years, so less than a year under a program called Operation Warp Speed can seem like moving too fast, Dr. Rasmussen acknowledged. “The name has given people the impression that by going at warp speed, we’re cutting all the corners. [But] the reality is that Operation Warp Speed is mostly for manufacturing and distribution.”

What underlies the speed is a restructuring of the normal vaccine development process, Dr. Kuppalli said. The same phases of development – animal testing, a small initial human phase, a second for safety testing, a third large phase for efficacy – were all conducted as for any vaccine. But in this case, some phases were completed in parallel, rather than sequentially. This approach has proved so successful that there is already talk about making it the model for developing future vaccines.

Two other factors contributed to the speed, said Dr. Kuppalli and Dr. Rasmussen. First, gearing up production can slow a rollout, but with these vaccines, companies ramped up production even before anyone knew if the vaccines would work – the “warp speed” part. The second factor has been the large number of cases, making exposures more likely and thus accelerating the results of the efficacy trials. “There is so much COVID being transmitted everywhere in the United States that it did not take long to hit the threshold of events to read out phase 3,” Dr. Rasmussen said.


Q: This vaccine has never been used in humans. How do we know it’s safe?

The first humans received mRNA-based COVID-19 vaccines in March. Since then, the Pfizer phase 3 trial has enrolled more than 43,000 people, and Moderna’s more than 30,000. The most common adverse events emerge shortly after vaccination, Dr. Kuppalli said.

As with any vaccine that gains approval, monitoring will continue.

UK health officials have reported that two health care workers vaccinated in the initial rollout of the Pfizer vaccine had what seems to have been a severe allergic response. Both recipients had a history of anaphylactic allergic responses and carried EpiPens, and both recovered. During the trial, allergic reaction rates were 0.63% in the vaccine group and 0.51% in the placebo group.

As a result of the two reactions, UK regulators are now recommending that patients with a history of severe allergies not receive the vaccine at the current time.

Q: What are the likely side effects?

So far, the most common side effects are pain at the injection site and an achy, flu-like feeling, Dr. Kuppalli said. More severe reactions have been reported, but were not common in the trials.

Dr. Rasmussen noted that the common side effects are a good sign, and signal that the recipient is generating “a robust immune response.”

“Everybody I’ve talked to who’s had the response has said they would go through it again,” Dr. Kuppalli said. “I definitely plan on lining up and being one of the first people to get the vaccine.”

Q: I already had COVID-19 or had a positive antibody test. Do I still need to get the vaccine?

Dr. Rasmussen said that there are “too many unknowns” to say if a history of COVID-19 would make a difference. “We don’t know how long neutralizing antibodies last” after infection, she said. “What we know is that the vaccine tends to produce antibody titers towards the higher end of the spectrum,” suggesting better immunity with vaccination than after natural infection.

Q: Can patients of color feel safe getting the vaccine?

“People of color might be understandably reluctant to take a vaccine that was developed in a way that appears to be faster [than past development],” said Dr. Rasmussen. She said physicians should acknowledge and understand the history that has led them to feel that way, “everything from Tuskegee to Henrietta Lacks to today.”

Empathy is key, and “providers should meet patients where they are and not condescend to them.”

Dr. Kuppalli agreed. “Clinicians really need to work on trying to strip away their biases.”

Thus far there are no safety signals that differ by race or ethnicity, according to the companies. The Pfizer phase 3 trial enrolled just over 9% Black participants, 0.5% Native American/Alaska Native, 0.2% Native Hawaiian/Pacific Islander, 2.3% multiracial participants, and 28% Hispanic/Latinx. For its part, Moderna says that approximately 37% of participants in its phase 3 trial come from communities of color.

Q: What about children and pregnant women?

Although the trials included participants from many different age groups and backgrounds, children and pregnant or lactating women were not among them. Pfizer gained approval in October to include participants as young as age 12 years, and a Moderna spokesperson said in an interview that the company planned pediatric inclusion at the end of 2020, pending approval.

“Unfortunately, we don’t have data on pregnant and lactating women,” Dr. Kuppalli said. She said she hopes that public health organizations such as the CDC will address that in the coming weeks. Dr. Rasmussen called the lack of data in pregnant women and children “a big oversight.”

Dr. Rasmussen has disclosed no relevant financial relationships. Dr. Kuppalli is a consultant with GlaxoSmithKline.

A version of this article originally appeared on Medscape.com.

Issue
Neurology Reviews- 29(2)
Publish date: December 11, 2020

Circadian rhythms: Does the time of day you use a skin care product matter?


The majority of human cells, including skin and hair cells, keep their own time; that is, they manifest autonomous clocks and the genes that regulate their functioning.1 During the day, one primary function of the skin is protection; at night, repairing any damage (particularly DNA impairment) incurred during the day prevails.2-4 These activities are driven through circadian rhythms using clock genes that exist in all cutaneous cells.2 Important cutaneous functions such as blood flow, transepidermal water loss, and capacitance are affected by circadian rhythms.5 Hydration and inflammation are also among the several functions pertaining to epidermal homeostasis affected by circadian rhythms.6 In addition, some collagens and extracellular matrix proteases are diurnally regulated, and approximately 10% of the transcriptome, including the extracellular matrix, is thought to be controlled by circadian rhythms.7

Dr. Leslie S. Baumann

Emerging research on the circadian rhythms displayed in the skin yields implications for skin care. Cutaneous cell migration and proliferation, wound healing, and tissue vulnerability to harm from UV exposure, oxidative stress, and protease activity, for example, are affected by circadian rhythms, Sherratt et al. noted in suggesting that chronotherapy presents promise for enhancing skin therapy.7 Indeed, recent research has led to the understanding that cutaneous aging, cellular repair, optimal timing for drug delivery to the skin, and skin cancer development are all affected by the chronobiological functioning of the skin.8

We have known for several years that certain types of products should be used at different times of the day. For instance, antioxidants should be used in the morning to protect skin from sun exposure, and retinols should be used in the evening because they induce light sensitivity. The remainder of this column focuses on research in the last 2 decades that reinforces the notion of circadian rhythms working in the skin, and may alter how we view the timing of skin care. Next month’s column, part two on the circadian rhythms of the skin, will address recent clinical trials and the implications for timing treatments for certain cutaneous conditions.
 

Emerging data on the circadian rhythms of the skin

In 2001, Le Fur et al. studied the cutaneous circadian rhythms in the facial and forearm skin of eight healthy White women during a 48-hour period. Using cosinor or analysis of variance methods, they were able to detect such rhythms in facial sebum excretion, transepidermal water loss (TEWL) in the face and forearm, facial pH, forearm skin temperature, and forearm capacitance. The investigators also observed 8- and 12-hour rhythms in TEWL in both areas, and a 12-hour rhythm in forearm skin temperature. They verified that such rhythms could be measured and that they vary between skin sites. In addition, they were the first to show that ultradian and/or component rhythms can also be found in TEWL, sebum excretion, and skin temperature.9

A year later, Kawara et al. showed that mRNA of the circadian clock genes Per1, Clock, and bmal1/mop3 is expressed in normal human-cultured keratinocytes and that low-dose UVB down-regulates these genes and alters their expression in keratinocyte cell cultures. They concluded that UV targeting of keratinocytes could alter circadian rhythms.10

In 2011, Spörl and colleagues characterized an in vitro functional, cell-autonomous circadian clock in human adult low-calcium, elevated-temperature (HaCaT) keratinocytes, demonstrating that the molecular composition of the keratinocyte clock was comparable with peripheral tissue clocks. Notably, they observed that temperature acts as a robust time cue for epidermal traits, such as cholesterol homeostasis and differentiation.11

The next year, Sandu et al. investigated the kinetics of clock gene expression in epidermal and dermal cells collected from the same donor and compared their characteristics. They revealed functional circadian machinery in primary cultures of fibroblasts, keratinocytes, and melanocytes: oscillators were identified in all skin cell types, exhibited distinct periods and phase relationships between clock genes, and are thought to help drive cutaneous rhythmic functions.12

Three years later, Sandu et al. characterized the circadian clocks in rat skin and dermal fibroblasts. They found that skin has a self-sustaining circadian clock that experiences age-dependent alterations, and that dermal fibroblasts manifest circadian rhythms that can be modulated by endogenous (e.g., melatonin) and exogenous (e.g., temperature) influences.13

In 2019, Park et al. demonstrated that the diurnal expression of the gene TIMP3, which is thought to evince a circadian rhythm in synchronized human keratinocytes, is disrupted by UVB exposure. The resulting inflammation can be blocked, they argued, by restoring the circadian expression of TIMP3 using synthetic TIMP3 peptides or bioactive natural ingredients, such as green tea extracts.6

Conclusion

Circadian rhythms and the biological clocks by which most cells, including skin and hair cells, regulate themselves represent a ripe and fascinating area of research. Applying evidence in this realm to skin care has been occurring over time and is likely to enhance our practice even more as we continue to elucidate the behavior of cutaneous cells based on the solar day. Based on this information, my recommendations are to use antioxidants and protective products in the morning, and use DNA repair enzymes, retinoids, and other repair products at night.

Dr. Baumann is a private practice dermatologist, researcher, author, and entrepreneur who practices in Miami. She founded the Cosmetic Dermatology Center at the University of Miami in 1997. Dr. Baumann has written two textbooks and a New York Times Best Sellers book for consumers. Dr. Baumann has received funding for advisory boards and/or clinical research trials from Allergan, Galderma, Revance, Evolus, and Burt’s Bees. She is the CEO of Skin Type Solutions, a company that independently tests skin care products and makes recommendations to physicians on which skin care technologies are best. Write to her at [email protected].

References

1. Dong K et al. Int J Mol Sci. 2020 Jan 3. doi: 10.3390/ijms21010326.

2. Dong K et al. Int J Cosmet Sci. 2019 Dec;41(6):558-62.

3. Lyons AB et al. J Clin Aesthet Dermatol. 2019 Sep;12(9):42-5.

4. Wu G et al. Proc Natl Acad Sci U S A. 2018 Nov 27;115(48):12313-8.

5. Vaughn AR et al. Pediatr Dermatol. 2018 Jan;35(1):152-7.

6. Park S et al. Int J Mol Sci. 2019 Feb 16. doi: 10.3390/ijms20040862.

7. Sherratt MJ et al. Matrix Biol. 2019 Nov;84:97-110.

8. Luber AJ et al. J Drugs Dermatol. 2014 Feb;13(2):130-4.

9. Le Fur I et al. J Invest Dermatol. 2001 Sep;117(3):718-24.

10. Kawara S et al. J Invest Dermatol. 2002 Dec;119(6):1220-3.

11. Spörl F et al. J Invest Dermatol. 2011 Feb;131(2):338-48.

12. Sandu C et al. Cell Mol Life Sci. 2012 Oct;69(19):3329-39.

13. Sandu C et al. Cell Mol Life Sci. 2015 Jun;72(11):2237-48.


Vitamin D deficiency in COVID-19 quadrupled death rate


Vitamin D deficiency on admission to hospital was associated with a 3.7-fold increase in the odds of dying from COVID-19, according to an observational study looking back at data from the first wave of the pandemic.

Nearly 60% of patients with COVID-19 were vitamin D deficient upon hospitalization, with men in the advanced stages of COVID-19 pneumonia showing the greatest deficit.

Importantly, the results were independent of comorbidities known to be affected by vitamin D deficiency, wrote the authors, led by Dieter De Smet, MD, from AZ Delta General Hospital, Roeselare, Belgium.

“[The findings] highlight the need for randomized, controlled trials specifically targeting vitamin D–deficient patients at intake, and make a call for general avoidance of vitamin D deficiency as a safe and inexpensive possible mitigation of the SARS-CoV-2 pandemic,” Dr. De Smet and colleagues wrote in their article, published online Nov. 25 in the American Journal of Clinical Pathology.

A search of ClinicalTrials.gov reveals close to 40 ongoing vitamin D intervention trials in COVID-19 around the world, with aims ranging from prevention to various forms of treatment.
 

Consider vitamin D to prevent COVID-19 infection

With regard to the potential role in prevention, “Numerous observational studies have shown that low vitamin D levels are a major predictor for poor COVID outcomes,” noted Jacob Teitelbaum, MD, an internist who specializes in treating chronic fatigue syndrome and fibromyalgia who also has an interest in COVID-19.

“This study shows how severe a problem this is,” Dr. Teitelbaum said in an interview. “A 3.7-fold increase in death rate if someone’s vitamin D level was below 20 [ng/mL] is staggering. It is arguably one of the most important risk factors to consider.”

“What is not clear is whether vitamin D levels are acting as an acute-phase reactant, dropping because of the infection, with larger drops indicating more severe disease, or whether vitamin D deficiency is causing worse outcomes,” added Dr. Teitelbaum, who is director of the Center for Effective CFIDS/Fibromyalgia Therapies, Kailua-Kona, Hawaii.

Also asked to comment, Andrea Giustina, MD, president of the European Society of Endocrinology, said: “The paper by De Smet et al confirms what we already hypothesized in BMJ last March: that patients with low vitamin D levels are at high risk of hospitalization for COVID-19 and developing severe and lethal disease. This is likely due to the loss in the protective action of vitamin D on the immune system and against the SARS-CoV-2–induced cytokine storm.”

He said it is particularly interesting that the authors of the new study had reported more prevalent vitamin D deficiency among men than women, most likely because women are more often treated with vitamin D for osteoporosis.

The new study should prompt all clinicians and health authorities to seriously consider vitamin D supplementation as an additional tool in the fight against COVID-19, particularly for the prevention of infection in those at high risk of both COVID-19 and hypovitaminosis D, such as the elderly, urged Dr. Giustina, of San Raffaele Vita-Salute University, Milan.
 

Results adjusted for multiple confounders

Dr. De Smet and colleagues looked at serum 25-hydroxyvitamin D (25[OH]D) levels in 186 patients hospitalized for severe COVID-19 infection as a function of radiologic stage of COVID-19 pneumonia as well as the association between vitamin D status on admission and COVID-19 mortality.

Cognizant of the potential for confounding by multiple factors, they adjusted for age, sex, and known vitamin D–affected comorbidities such as diabetes, chronic lung disease, and coronary artery disease.

Patients were hospitalized from March 1 to April 7, 2020 (the peak of the first wave of the pandemic) at their institution, AZ Delta General Hospital, a tertiary network hospital.

The mean age of patients was 69 years, 41% were women, and 59% had coronary artery disease. Upon admission to hospital, median vitamin D level was 18 ng/mL (women, 20.7 ng/mL; men, 17.6 ng/mL).

A remarkably high percentage (59%, 109/186) of patients with COVID-19 were vitamin D deficient (25[OH]D <20 ng/mL) when admitted (47% of women and 67% of men), wrote the authors.

“What surprises me,” said Dr. Teitelbaum, is that almost 60% “of these patients had 25(OH)D under 20 ng/mL but most clinicians consider under 50 to be low.”

All patients had a chest CT scan to determine the radiologic stage of COVID-19 pneumonia and serum vitamin D measurement on admission. Radiologic stage of pneumonia was used as a proxy for immunologic phase of COVID-19.
 

Vitamin D deficiency correlated with worsening pneumonia

Among men, rates of vitamin D deficiency increased with advancing disease, with rates of 55% in stage 1, 67% in stage 2, and up to 74% in stage 3 pneumonia.

There is therefore “a clear correlation between 25(OH)D level and temporal stages of viral pneumonia, particularly in male patients,” the authors wrote.

“Vitamin D dampens excessive inflammation,” said Dr. Teitelbaum. “In these patients with acute respiratory distress syndrome, the immune system has gone wild.”

“The study was carried out in Belgium, so there’s less sunlight there than some other places, but even here in Hawaii, with plenty of sunshine, we have vitamin D deficiency,” he added.

“More studies are needed, but I think there are enough data to suggest a multivitamin should be used to aid prophylaxis, and this is reflected in [some] infectious disease recommendations,” he noted.

A version of this article originally appeared on Medscape.com.

Publications
Topics
Sections

Vitamin D deficiency on admission to hospital was associated with a 3.7-fold increase in the odds of dying from COVID-19, according to an observational study looking back at data from the first wave of the pandemic.

Nearly 60% of patients with COVID-19 were vitamin D deficient upon hospitalization, with men in the advanced stages of COVID-19 pneumonia showing the greatest deficit.

Importantly, the results were independent of comorbidities known to be affected by vitamin D deficiency, wrote the authors, led by Dieter De Smet, MD, from AZ Delta General Hospital, Roeselare, Belgium.

“[The findings] highlight the need for randomized, controlled trials specifically targeting vitamin D–deficient patients at intake, and make a call for general avoidance of vitamin D deficiency as a safe and inexpensive possible mitigation of the SARS-CoV-2 pandemic,” Dr. De Smet and colleagues wrote in their article, published online Nov. 25 in the American Journal of Clinical Pathology.

A search of ClinicalTrials.gov reveals close to 40 ongoing intervention trials of vitamin D in COVID-19 around the world, for purposes ranging from prevention to various forms of treatment.
 

Consider vitamin D to prevent COVID-19 infection

With regard to the potential role in prevention, "Numerous observational studies have shown that low vitamin D levels are a major predictor for poor COVID outcomes," noted Jacob Teitelbaum, MD, an internist who specializes in treating chronic fatigue syndrome and fibromyalgia and who also has an interest in COVID-19.

“This study shows how severe a problem this is,” Dr. Teitelbaum said in an interview. “A 3.7-fold increase in death rate if someone’s vitamin D level was below 20 [ng/mL] is staggering. It is arguably one of the most important risk factors to consider.”

“What is not clear is whether vitamin D levels are acting as an acute-phase reactant, dropping because of the infection, with larger drops indicating more severe disease, or whether vitamin D deficiency is causing worse outcomes,” added Dr. Teitelbaum, who is director of the Center for Effective CFIDS/Fibromyalgia Therapies, Kailua-Kona, Hawaii.

Also asked to comment, Andrea Giustina, MD, president of the European Society of Endocrinology, said: “The paper by De Smet et al confirms what we already hypothesized in BMJ last March: that patients with low vitamin D levels are at high risk of hospitalization for COVID-19 and developing severe and lethal disease. This is likely due to the loss in the protective action of vitamin D on the immune system and against the SARS-CoV-2–induced cytokine storm.”

He said it is particularly interesting that the authors of the new study had reported more prevalent vitamin D deficiency among men than women, most likely because women are more often treated with vitamin D for osteoporosis.

The new study should prompt all clinicians and health authorities to seriously consider vitamin D supplementation as an additional tool in the fight against COVID-19, particularly for the prevention of infection in those at high risk of both COVID-19 and hypovitaminosis D, such as the elderly, urged Dr. Giustina, of San Raffaele Vita-Salute University, Milan.
 

Results adjusted for multiple confounders

Dr. De Smet and colleagues examined serum 25-hydroxyvitamin D (25[OH]D) levels in 186 patients hospitalized for severe COVID-19, both as a function of the radiologic stage of COVID-19 pneumonia and in relation to COVID-19 mortality, with vitamin D status assessed on admission.

Cognizant of the potential for confounding by multiple factors, they adjusted for age, sex, and known vitamin D–affected comorbidities such as diabetes, chronic lung disease, and coronary artery disease.

Patients were hospitalized from March 1 to April 7, 2020 (the peak of the first wave of the pandemic) at their institution, AZ Delta General Hospital, a tertiary network hospital.

The mean age of patients was 69 years, 41% were women, and 59% had coronary artery disease. Upon admission to hospital, median vitamin D level was 18 ng/mL (women, 20.7 ng/mL; men, 17.6 ng/mL).

A remarkably high percentage (59%, 109/186) of patients with COVID-19 were vitamin D deficient (25[OH]D <20 ng/mL) when admitted (47% of women and 67% of men), wrote the authors.

“What surprises me,” said Dr. Teitelbaum, is that almost 60% “of these patients had 25(OH)D under 20 ng/mL but most clinicians consider under 50 to be low.”

All patients had a chest CT scan to determine the radiologic stage of COVID-19 pneumonia and serum vitamin D measurement on admission. Radiologic stage of pneumonia was used as a proxy for immunologic phase of COVID-19.
 

Vitamin D deficiency correlated with worsening pneumonia

Among men, rates of vitamin D deficiency increased with advancing disease, with rates of 55% in stage 1, 67% in stage 2, and up to 74% in stage 3 pneumonia.

There is therefore “a clear correlation between 25(OH)D level and temporal stages of viral pneumonia, particularly in male patients,” the authors wrote.

“Vitamin D dampens excessive inflammation,” said Dr. Teitelbaum. “In these patients with acute respiratory distress syndrome, the immune system has gone wild.”

“The study was carried out in Belgium, so there’s less sunlight there than some other places, but even here in Hawaii, with plenty of sunshine, we have vitamin D deficiency,” he added.

“More studies are needed, but I think there are enough data to suggest a multivitamin should be used to aid prophylaxis, and this is reflected in [some] infectious disease recommendations,” he noted.

A version of this article originally appeared on Medscape.com.


ZUMA-12 study shows frontline axi-cel has substantial activity in high-risk large B-cell lymphoma

Article Type
Changed
Wed, 01/11/2023 - 15:10

Axicabtagene ciloleucel (axi-cel) can be safely administered and has substantial clinical benefit as part of first-line therapy in patients with high-risk large B-cell lymphoma, according to an investigator in a phase 2 study.

The chimeric antigen receptor (CAR) T-cell therapy had a “very high” overall response rate (ORR) of 85% and a complete response (CR) rate of 74% in the ZUMA-12 study, said investigator Sattva S. Neelapu, MD, of The University of Texas MD Anderson Cancer Center in Houston.

Nearly three-quarters of responses were ongoing at a median follow-up of about 9 months, Dr. Neelapu said in an interim analysis of ZUMA-12 presented at the annual meeting of the American Society of Hematology, which was held virtually.

While axi-cel is approved for treatment of certain relapsed/refractory large B-cell lymphomas (LBCLs), Dr. Neelapu said this is the first-ever study evaluating a CAR T-cell therapy as a first-line treatment for patients with LBCL that is high risk as defined by histology or International Prognostic Index (IPI) scoring.

Treatment with axi-cel was guided by dynamic risk assessment, Dr. Neelapu explained, meaning that patients received the CAR T-cell treatment if they had a positive interim positron emission tomography (PET) scan after two cycles of an anti-CD20 monoclonal antibody and anthracycline-containing regimen.
 

Longer follow-up needed

The interim efficacy analysis is based on 27 evaluable patients of the 40 planned for enrollment; the final analysis and longer follow-up are needed to confirm that responses are durable, Dr. Neelapu said in a question-and-answer session following his presentation.

Nevertheless, the 74% complete response rate in the frontline setting is “quite encouraging” compared to historical data in high-risk LBCL, where CR rates have generally been less than 50%, Dr. Neelapu added.

“Assuming that long-term data in the final analysis confirms this encouraging activity, I think we likely would need a randomized phase 3 trial to compare (axi-cel) head-to-head with frontline therapy,” he said.

Without mature data available, it’s hard to say in this single-arm study how much axi-cel is improving outcomes at the cost of significant toxicity, said Catherine M. Diefenbach, MD, director of the clinical lymphoma program at NYU Langone’s Perlmutter Cancer Center in New York.

Adverse events reported by Dr. Neelapu included grade 3 cytokine release syndrome (CRS) in 9% of patients and grade 3 or greater neurologic events in 25%.

“It appears as though it may be salvaging some patients, as the response rate is higher than that expected for chemotherapy alone in this setting,” Dr. Diefenbach said in an interview, “but toxicity is not trivial, so the long-term data will provide better clarity as to the degree of benefit.”
 

Ongoing responses at 9 months

The phase 2 ZUMA-12 study includes patients classified as high risk on the basis of MYC and BCL2 and/or BCL6 translocations, or an International Prognostic Index score of 3 or greater.

Patients initially received two cycles of anti-CD20 monoclonal antibody therapy plus an anthracycline-containing regimen. Those with a positive interim PET (score of 4 or 5 on the 5-point Deauville scale) received fludarabine/cyclophosphamide conditioning plus axi-cel as a single intravenous infusion of 2 × 10⁶ CAR T cells per kg of body weight.

As of the report at the ASH meeting, 32 patients had received axi-cel, all 32 of whom were evaluable for safety and 27 of whom were evaluable for efficacy.

The ORR was 85% (23 of 27 patients), and the CR rate was 74% (20 of 27 patients), Dr. Neelapu reported, noting that with a median follow-up of 9.3 months, 70% of responders (19 of 27) were in ongoing response.

Median duration of response, progression-free survival, and overall survival have not been reached, he added.

Encephalopathy was the most common grade 3 or greater adverse event related to axi-cel, occurring in 16% of patients, while increased alanine aminotransferase and decreased neutrophil count were each seen in 9% of patients, Dr. Neelapu said.

All 32 patients experienced CRS, including grade 3 CRS in 3 patients (9%), according to the reported data. Neurologic events were seen in 22 patients (69%) including grade 3 or greater in 8 (25%). There were 2 grade 4 neurologic events – both encephalopathies that resolved, according to Dr. Neelapu – and no grade 5 neurologic events.

ZUMA-12 is sponsored by Kite, a Gilead Company. Dr. Neelapu reported disclosures related to Acerta, Adicet Bio, Bristol-Myers Squibb, Kite, and various other pharmaceutical and biotechnology companies.
 

SOURCE: Neelapu SS et al. ASH 2020, Abstract 405.


Highly effective in Ph-negative B-cell ALL: Hyper-CVAD with sequential blinatumomab

Article Type
Changed
Mon, 12/14/2020 - 09:04

Hyper-CVAD (fractionated cyclophosphamide, vincristine, doxorubicin, and dexamethasone) with sequential blinatumomab is highly effective as frontline therapy for Philadelphia chromosome (Ph)–negative B-cell acute lymphoblastic leukemia (ALL), according to results of a phase 2 study reported at the annual meeting of the American Society of Hematology.

Favorable minimal residual disease (MRD) negativity and overall survival, with low rates of higher-grade toxicities, suggest that reductions in chemotherapy in this setting are feasible, said Nicholas J. Short, MD, of the University of Texas MD Anderson Cancer Center, Houston.

While complete response rates with current ALL therapy are 80%-90%, long-term overall survival is only 40%-50%. Blinatumomab, a bispecific T-cell–engaging CD3-CD19 antibody, has been shown to be superior to chemotherapy in relapsed/refractory B-cell ALL, and to produce high rates of MRD eradication, the most important prognostic factor in ALL, Dr. Short said at the meeting, which was held virtually.

The hypothesis of the current study was that early incorporation of blinatumomab with hyper-CVAD in patients with newly diagnosed Ph-negative B-cell ALL would decrease the need for intensive chemotherapy and lead to higher efficacy and cure rates with less myelosuppression. Patients were required to have a performance status of 3 or less, total bilirubin of 2 mg/dL or less, and creatinine of 2 mg/dL or less. Investigators enrolled 38 patients (mean age, 37 years; range, 17-59), most of whom (79%) were in performance status 0-1. The primary endpoint was relapse-free survival (RFS).
 

Study details

Patients received hyper-CVAD alternating with high-dose methotrexate and cytarabine for up to four cycles, followed by four cycles of blinatumomab at standard doses. Those with CD20-positive disease (expression on 1% or more of cells) received eight doses of ofatumumab or rituximab, and prophylactic intrathecal chemotherapy was given eight times during the first four cycles. Maintenance consisted of alternating blocks of POMP (6-mercaptopurine, vincristine, methotrexate, prednisone) and blinatumomab.

After two patients with high-risk features experienced early relapse, investigators amended the protocol to allow blinatumomab after only two cycles of hyper-CVAD in those with high-risk features (e.g., CRLF2 positive by flow cytometry, complex karyotype, KMT2A rearranged, low hypodiploidy/near triploidy, TP53 mutation, or persistent MRD). Nineteen patients (56%) had at least one high-risk feature, and 82% received ofatumumab or rituximab. Six patients were in complete remission at the start of the study (four of them MRD negative).

Complete responses

After induction, complete responses were achieved in 81% of patients (26 of 32), and all patients achieved a complete response at some point, according to Dr. Short. The MRD negativity rate was 71% (24 of 34) after induction and 97% (33 of 34) at any time. All 38 patients achieved a complete response; at a median follow-up of 24 months (range, 2-45 months), relapses had occurred only in patients with high-risk features. Twelve patients underwent transplant in first remission, of whom two relapsed, both with high-risk features. The other 21 patients had ongoing complete responses.

RFS at 1 and 2 years was 80% and 71%, respectively. Five of the seven relapses occurred in patients who had not undergone hematopoietic stem cell transplantation (HSCT), and two occurred post HSCT. Two deaths occurred in patients with complete responses (one from pulmonary embolism and one from post-HSCT complications). Overall survival at 1 and 2 years was 85% and 80%, respectively, with the 2-year rate comparable with prior reports for hyper-CVAD plus ofatumumab, Dr. Short said.

The most common nonhematologic grade 3-4 adverse events with hyper-CVAD plus blinatumomab were ALT/AST elevation (24%) and hyperglycemia (21%). The overall cytokine release syndrome rate was 13%, with 3% for higher-grade reactions. The rate for blinatumomab-related neurologic events was 45% overall and 13% for higher grades, with 1 discontinuation attributed to grade 2 encephalopathy and dysphasia.

“Overall, this study shows the potential benefit of incorporating frontline blinatumomab into the treatment of younger adults with newly diagnosed Philadelphia chromosome–negative B-cell lymphoma, and shows, as well, that reduction of chemotherapy in this context is feasible,” Dr. Short stated.

“Ultimately, often for any patients with acute leukemias and ALL, our only chance to cure them is in the frontline setting, so our approach is to include all of the most effective agents we have. So that means including blinatumomab in all of our frontline regimens in clinical trials – and now we’ve amended that to add inotuzumab ozogamicin with the goal of deepening responses and increasing cure rates,” he added.

Dr. Short reported consulting with Takeda Oncology and Astrazeneca, and receiving research funding and honoraria from Amgen, Astella, and Takeda Oncology.

SOURCE: Short NG et al. ASH 2020, Abstract 464.

Meeting/Event
Publications
Topics
Sections
Meeting/Event
Meeting/Event

Hyper-CVAD (fractionated cyclophosphamide, vincristine, doxorubicin, and dexamethasone) with sequential blinatumomab is highly effective as frontline therapy for Philadelphia chromosome (Ph)–negative B-cell acute lymphoblastic leukemia (ALL), according to results of a phase 2 study reported at the annual meeting of the American Society of Hematology.

Favorable rates of minimal residual disease (MRD) negativity and overall survival, with few higher-grade toxicities, suggest that reductions in chemotherapy in this setting are feasible, said Nicholas J. Short, MD, of the University of Texas MD Anderson Cancer Center, Houston.

While complete response rates with current ALL therapy are 80%-90%, long-term overall survival is only 40%-50%. Blinatumomab, a bispecific T-cell–engaging CD3-CD19 antibody, has been shown to be superior to chemotherapy in relapsed/refractory B-cell ALL, and to produce high rates of MRD eradication, the most important prognostic factor in ALL, Dr. Short said at the meeting, which was held virtually.

The hypothesis of the current study was that early incorporation of blinatumomab with hyper-CVAD in patients with newly diagnosed Ph-negative B-cell ALL would decrease the need for intensive chemotherapy and lead to higher efficacy and cure rates with less myelosuppression. Patients were required to have a performance status of 3 or less, total bilirubin of 2 mg/dL or less, and creatinine of 2 mg/dL or less. Investigators enrolled 38 patients (mean age, 37 years; range, 17-59 years), most of whom (79%) had a performance status of 0-1. The primary endpoint was relapse-free survival (RFS).
 

Study details

Patients received hyper-CVAD alternating with high-dose methotrexate and cytarabine for up to four cycles, followed by four cycles of blinatumomab at standard doses. Those with CD20-positive disease (expression in 1% or more of cells) received eight doses of ofatumumab or rituximab, and prophylactic intrathecal chemotherapy was given eight times in the first four cycles. Maintenance consisted of alternating blocks of POMP (6-mercaptopurine, vincristine, methotrexate, prednisone) and blinatumomab. When two patients with high-risk features experienced early relapse, investigators amended the protocol to allow blinatumomab after only two cycles of hyper-CVAD in those with high-risk features (e.g., CRLF2 positive by flow cytometry, complex karyotype, KMT2A rearranged, low hypodiploidy/near triploidy, TP53 mutation, or persistent MRD). Nineteen patients (56%) had at least one high-risk feature, and 82% received ofatumumab or rituximab. Six patients were in complete remission at the start of the study (four of them MRD negative).

Complete responses

After induction, complete responses were achieved in 81% of patients (26/32), and all patients achieved a complete response at some point, according to Dr. Short. The MRD negativity rate was 71% (24/34) after induction and 97% (33/34) at any time. Among the 38 patients, all of whom achieved a complete response, relapses at a median follow-up of 24 months (range, 2-45 months) occurred only in the 5 patients with high-risk features. Twelve patients underwent transplant in first remission; two relapsed, both with high-risk features. The other 21 patients had ongoing complete responses.
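As a quick check that the reported percentages match their underlying fractions (an illustrative calculation only, not part of the study analysis):

```python
# Recompute the reported response rates from their raw fractions.
rates = {
    "CR after induction": (26, 32, 81),
    "MRD negativity after induction": (24, 34, 71),
    "MRD negativity at any time": (33, 34, 97),
}

for label, (numerator, denominator, reported_pct) in rates.items():
    computed = round(100 * numerator / denominator)
    # Each computed rate should agree with the percentage quoted in the text.
    assert computed == reported_pct, f"{label}: {computed}% != {reported_pct}%"
    print(f"{label}: {numerator}/{denominator} = {computed}%")
```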

RFS at 1 and 2 years was 80% and 71%, respectively. Five of the seven relapses occurred in patients who had not undergone hematopoietic stem cell transplantation (HSCT), and two occurred after HSCT. Two deaths occurred in patients with complete responses (one from pulmonary embolism and one from post-HSCT complications). Overall survival at 1 and 2 years was 85% and 80%, respectively, with the 2-year rate comparable with prior reports for hyper-CVAD plus ofatumumab, Dr. Short said.

The most common nonhematologic grade 3-4 adverse events with hyper-CVAD plus blinatumomab were ALT/AST elevation (24%) and hyperglycemia (21%). The overall rate of cytokine release syndrome was 13%, with higher-grade reactions in 3%. Blinatumomab-related neurologic events occurred in 45% of patients overall and were higher grade in 13%, with one discontinuation attributed to grade 2 encephalopathy and dysphasia.

“Overall, this study shows the potential benefit of incorporating frontline blinatumomab into the treatment of younger adults with newly diagnosed Philadelphia chromosome–negative B-cell ALL, and shows, as well, that reduction of chemotherapy in this context is feasible,” Dr. Short stated.

“Ultimately, often for any patients with acute leukemias and ALL, our only chance to cure them is in the frontline setting, so our approach is to include all of the most effective agents we have. So that means including blinatumomab in all of our frontline regimens in clinical trials – and now we’ve amended that to add inotuzumab ozogamicin with the goal of deepening responses and increasing cure rates,” he added.

Dr. Short reported consulting with Takeda Oncology and AstraZeneca, and receiving research funding and honoraria from Amgen, Astellas, and Takeda Oncology.

SOURCE: Short NJ et al. ASH 2020, Abstract 464.


Article Source

FROM ASH 2020


IBD: Fecal calprotectin’s role in guiding treatment debated

Article Type
Changed
Fri, 12/11/2020 - 11:20

Questions about fecal calprotectin’s usefulness as a measure of intestinal inflammation in inflammatory bowel disease (IBD) dominated the viewer chat after the opening session of the Advances in Inflammatory Bowel Diseases 2020 annual meeting.

The measure is often used to differentiate irritable bowel syndrome (IBS) from IBD.

Panelists differed on how predictive fecal calprotectin is for disease status and what information the stool concentration of calprotectin imparts. Several experts discussed calprotectin cutoffs for when disease would be considered in remission or when a colonoscopy is needed for evaluation.

Bruce E. Sands, MD, of the Icahn School of Medicine at Mount Sinai, New York, said about the noninvasive test: “It can be very tricky to use.”
 

Variation by time of day, by person

He explained that there can be individual differences, and that the concentration may be different in the first stool of the day compared with the last.

“There’s a lot of variation, which makes the cutoffs good on average for populations but a little bit more difficult to apply to individuals,” he said.

Dr. Sands said the marker has more merit for people with large-bowel inflammation but is not quite as accurate a marker for patients with exclusively small-bowel inflammation.

Moderator Steven Hanauer, MD, professor of medicine, gastroenterology, and hepatology at Northwestern University, Chicago, asked Dr. Sands what his next move would be if a patient had a concentration of 160 mcg/g.

Dr. Sands called concentrations between 150 and 250 mcg/g “a gray zone.”

“That usually indicates for me a need to evaluate with a colonoscopy,” he said.

“If we’re talking about using fecal calprotectin to rule out IBS, the cutoff there is more like 50, 55. But that isn’t how we’re generally using it as IBD practitioners.”
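The cutoffs discussed by the panel can be summarized as a simple triage sketch. This is illustrative only, not a validated clinical algorithm: the thresholds below are the rough figures from the discussion (in the conventional units of mcg of calprotectin per gram of stool), and the interpretation of the 50-150 band is an assumption extrapolated from the panelists' comments about individual baselines.

```python
def triage_fecal_calprotectin(conc_ug_per_g: float) -> str:
    """Rough triage of a fecal calprotectin result (mcg/g stool).

    Illustrative thresholds based on the panel discussion; given diurnal
    and between-person variation, trends against a patient's own baseline
    matter more than any single value.
    """
    if conc_ug_per_g < 50:
        # ~50 mcg/g is the rule-out-IBS range, roughly equated with
        # endoscopic remission in ulcerative colitis
        return "consistent with remission / IBS more likely"
    if conc_ug_per_g < 150:
        # interpretation here is least clear (assumption)
        return "indeterminate; compare with the patient's own baseline"
    if conc_ug_per_g <= 250:
        # the 150-250 "gray zone" described by Dr. Sands
        return "gray zone; colonoscopy usually warranted"
    return "suggests active inflammation"
```

For example, the 160 mcg/g value posed by the moderator falls in the gray zone, matching the answer that such a result usually prompts colonoscopic evaluation.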

Sunanda V. Kane, MD, MSPH, a gastroenterologist with the Mayo Clinic in Rochester, Minn., said in an interview that 160 mcg/g in a patient with IBD “means to me likely some minimal disease but not enough for me to make drastic changes to a medical regimen.”

She said about the measure, “We need to understand its limitations as well as strengths. Right now, insurance companies consider it ‘experimental’ and a lot of companies will not cover it. Ironically, they will cover the cost of a colonoscopy but not a stool test.”
 

Use as a benchmark

Dr. Sands said if he’s doing a colonoscopy to establish that the patient is in remission and knows what the fecal calprotectin level is at the time, he uses it as a benchmark for the future to judge whether the patient is deviating from remission.

He added that the negative predictive value of fecal calprotectin with a cutoff of 100 mcg/g is “actually pretty good so you can avoid a number of unnecessary colonoscopies to look for recurrence.”

William J. Sandborn, MD, of the University of California, San Diego, said about the marker, “We use it some, but a cutoff of 50 is very specific. You can think of that as equivalent to a Mayo endoscopy score of 0 in ulcerative colitis and probably histologic remission.”

Cutoffs above 50 mcg/g are “not very clear,” he said.

He said given the lack of consensus on the panel, “others might take some pause about that discomfort.”

Dr. Sandborn pointed out that little is known about elevated calprotectin in ulcerative proctitis and whether it is elevated in Crohn’s ileitis.

Dr. Kane said other factors will affect fecal calprotectin levels.

“We have some data to say that if you are on a proton pump inhibitor, that changes fecal calprotectin levels. Patients who have inflamed pseudopolyps may have quiescent disease around the pseudopolyps that may elevate the fecal calprotectin.”

But it can have particular benefit in some patient populations, she said.

She pointed to a study that concluded calprotectin levels can be used in pregnant ulcerative colitis patients to gauge disease activity noninvasively.

Dr. Sands, Dr. Sandborn, Dr. Kane, and Dr. Hanauer have disclosed having no relevant financial relationships.

A version of this article originally appeared on Medscape.com.



