The Journal of Clinical Outcomes Management® is an independent, peer-reviewed journal offering evidence-based, practical information for improving the quality, safety, and value of health care.

Hospitalists and PCPs crave greater communication

Decades after hospitalists took over inpatient care in the 1990s, hospitalists and primary care physicians (PCPs) still struggle with a communication divide, researchers at one teaching hospital found.

Hospitalists and PCPs want more dialogue while patients are in the hospital in order to coordinate and personalize care, according to data collected at Beth Israel Deaconess Medical Center, Boston. The results were presented at the annual meeting of the Society of General Internal Medicine.

“I think a major takeaway is that both hospitalists and primary care doctors agree that it’s important for primary care doctors to be involved in a patient’s hospitalization. They both identified a value that PCPs can bring to the table,” coresearcher Kristen Flint, MD, a primary care resident, told this news organization.

A majority in both camps reported that communication with the other party occurred in less than 25% of cases, whereas ideally it would happen half of the time. Dr. Flint noted that communication tools differ among hospitals, limiting the applicability of the findings.

The research team surveyed 39 hospitalists and 28 PCPs employed by the medical center during the first half of 2021. They also interviewed six hospitalists as they admitted and discharged patients.

The hospitalist movement, which took hold in response to the cost and efficiency demands of managed care, gave rise to dedicated inpatient specialists, reducing the need for PCPs to shuttle between their offices and the hospital to care for patients in both settings. 
 

Primary care involvement is important during hospitalization

In the Beth Israel Deaconess survey, four out of five hospitalists and three-quarters of PCPs agreed that primary care involvement is still important during hospitalization, most critically during discharge and admission. Hospitalists reported that PCPs provide valuable data about a patient’s medical status, social supports, mental health, and goals for care. They also said having such data helps to boost patient trust and improve the quality of inpatient care.

“Most projects around communication between inpatient and outpatient doctors have really focused on the time of discharge,” when clinicians identify what care a patient will need after they leave the hospital, Dr. Flint said. “But we found that both sides felt increased communication at time of admission would also be beneficial.”

The biggest barrier for PCPs, cited by 82% of respondents, was lack of time. Hospitalists’ top impediment was being unable to find contact information for the other party, which was cited by 79% of these survey participants.
 

Hospitalists operate ‘in a very stressful environment’

The Beth Israel Deaconess research “documents what has largely been suspected,” said primary care general internist Allan Goroll, MD.

Dr. Goroll, a professor of medicine at Harvard Medical School, Boston, said in an interview that hospitalists operate “in a very stressful environment.”

“They [hospitalists] appreciate accurate information about a patient’s recent medical history, test results, and responses to treatment as well as a briefing on patient values and preferences, family dynamics, and priorities for the admission. It makes for a safer, more personalized, and more efficient hospital admission,” said Dr. Goroll, who was not involved in the research.

In a 2015 article in the New England Journal of Medicine, Dr. Goroll and Daniel Hunt, MD, director of hospital medicine at Emory University, Atlanta, proposed a collaborative model in which PCPs visit hospitalized patients and serve as consultants to inpatient staff. Dr. Goroll said Massachusetts General Hospital in Boston, where he practices, initiated a study of that approach, but it was interrupted by the pandemic.

“As limited time is the most often cited barrier to communication, future interventions such as asynchronous forms of communication between the two groups should be considered,” the researchers wrote.

To narrow the gap, Beth Israel Deaconess will study converting an admission notification letter sent to PCPs into a two-way communication tool in which PCPs can insert patient information, Dr. Flint said.

Dr. Flint and Dr. Goroll have disclosed no relevant financial relationships.

A version of this article first appeared on Medscape.com.

FROM SGIM 2022

DIAMOND: Adding patiromer helps optimize HF meds, foils hyperkalemia

Several of the core medications for patients with heart failure with reduced ejection fraction (HFrEF) come with a well-known risk of causing hyperkalemia, to which many clinicians respond by pulling back on dosing or withdrawing the culprit drug.

But pairing renin-angiotensin system–inhibiting (RASI) agents with the potassium sequestrant patiromer (Veltassa, Vifor Pharma) appears to shield patients against hyperkalemia well enough that they can take more RASI medications at higher doses, a randomized, controlled study suggests.

Catherine Hackett/MDedge News
Dr. Javed Butler

The DIAMOND trial’s HFrEF patients, who had current or a history of RASI-related hyperkalemia, added either patiromer or placebo to their guideline-directed medical therapy (GDMT), which includes, and indeed emphasizes, the culprit medications: ACE inhibitors, angiotensin-receptor blockers (ARBs), angiotensin-receptor/neprilysin inhibitors (ARNIs), and mineralocorticoid receptor antagonists (MRAs).

Those taking patiromer tolerated more intense RASI therapy – including MRAs, which are especially prone to causing hyperkalemia – than the patients assigned to placebo. They also maintained lower potassium concentrations and experienced fewer clinically important hyperkalemia episodes, reported Javed Butler, MD, MPH, MBA, Baylor Scott and White Research Institute, Dallas, at the annual scientific sessions of the American College of Cardiology.

The apparent benefit from patiromer came in part from an advantage for a composite hyperkalemia-event endpoint that included mortality, Dr. Butler noted. That advantage seemed to hold regardless of age, sex, body mass index, HFrEF symptom severity, or initial natriuretic peptide levels.

Patients who took patiromer, compared with those who took placebo, showed a 37% reduction in risk for hyperkalemia (P = .006), defined as potassium levels exceeding 5.5 mEq/L, over a median follow-up of 27 weeks. They were 38% less likely to have their MRA dosage reduced to below target level (P = .006).

More patients in the patiromer group than in the control group attained at least 50% of target dosage for MRAs and ACE inhibitors, ARBs, or ARNIs (92% vs. 87%; P = .015).

Patients with HFrEF are unlikely to achieve best possible outcomes without GDMT optimization, but failure to optimize is often attributed to hyperkalemia concerns. DIAMOND, Dr. Butler said, suggests that, by adding the potassium sequestrant to GDMT, “you can simultaneously control potassium and optimize RASI therapy.” Many clinicians seem to believe they can achieve only one or the other.

DIAMOND was underpowered to show whether preventing hyperkalemia with patiromer could improve clinical outcomes. But failure to optimize RASI medication in HFrEF can worsen risk for heart failure events and death. So “it stands to reason that optimization of RASI therapy without a concomitant risk of hyperkalemia may, in the long run, lead to better outcomes for these patients,” Dr. Butler said in an interview.

Given the drug’s ability to keep potassium levels in check during RASI therapy, Dr. Butler said, “hyperkalemia should not be a reason for suboptimal therapy.”

Patiromer and other potassium sequestrants have been available in the United States and Europe for 4-6 years, but their value as adjuncts to RASI medication in HFrEF or other heart failure has been unclear.

Courtesy Massachusetts General Hospital
Dr. James L. Januzzi

“There’s a good opportunity to expand the use of the drug. The question is, in whom and when?” James L. Januzzi, MD, Massachusetts General Hospital, Boston, said in an interview.

Some HFrEF patients on GDMT “should be treated with patiromer. The bigger question is, should we give someone who has a history of hyperkalemia another chance at GDMT before we treat them with patiromer? Because they may not necessarily develop hyperkalemia a second time,” said Dr. Januzzi, who was on the DIAMOND endpoint-adjudication committee.

Among the most notable findings of the trial, he said, is that the number of people who developed hyperkalemia on RASI medication, although significantly elevated, “wasn’t as high as they expected it would be,” he said. “The data from DIAMOND argue that if a really significant majority does not become hyperkalemic on rechallenge, jumping straight to a potassium-binding drug may be premature.”

Physicians across specialties can differ in how they interpret potassium-level elevation and can use various cut points to flag when to stop RASI medication or at least hold back on up-titration, Dr. Butler observed. “Cardiologists have a different threshold of potassium that they tolerate than say, for instance, a nephrologist.”

Useful, then, might be a way to tell which patients are most likely to develop hyperkalemia with RASI up-titration and so might benefit from a potassium-binding agent right away. But DIAMOND, Dr. Butler said, “does not necessarily define any patient phenotype or any potassium level where we would say that you should use a potassium binder.”

The trial entered 1,642 patients with HFrEF and current or past RASI-related hyperkalemia into a 12-week run-in phase for optimization of GDMT with patiromer. The trial was conducted at nearly 400 centers in 21 countries.

RASI medication could be optimized in 85% of the cohort, from which 878 patients were randomly assigned either to continue optimized GDMT with patiromer or to have the potassium-sequestrant replaced with a placebo.

The patients on patiromer showed a 0.03-mEq/L mean rise in serum potassium levels from randomization to the end of the study, the primary endpoint, compared with a 0.13 mEq/L mean increase for those in the control group (P < .001), Dr. Butler reported.

The win ratio for a RASI-use score hierarchically featuring cardiovascular death and CV hospitalization for hyperkalemia at several levels of severity was 1.25 (95% confidence interval, 1.003-1.564; P = .048), favoring the patiromer group. The win ratio solely for hyperkalemia-related events also favored patients on patiromer, at 1.53 (95% CI, 1.23-1.91; P < .001).
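The win-ratio statistic reported above is computed by comparing every patient in the treatment arm with every patient in the control arm on the hierarchy of outcomes, most severe first; the first outcome level on which a pair differs decides a "win" or "loss," and the win ratio is total wins divided by total losses. The sketch below is illustrative only, not the DIAMOND analysis code, and the toy data are invented:

```python
def win_ratio(treatment, control):
    """Each patient is a tuple of hierarchical outcomes, worst outcome first.
    Lower values are better (e.g., 0 = event did not occur, 1 = event
    occurred). Pairs tied on every level contribute neither wins nor losses."""
    wins = losses = 0
    for t in treatment:
        for c in control:
            for t_level, c_level in zip(t, c):
                if t_level < c_level:   # treatment patient fared better
                    wins += 1
                    break
                if t_level > c_level:   # control patient fared better
                    losses += 1
                    break
    return wins / losses

# Toy data: (CV death, hyperkalemia hospitalization) per patient
treated = [(0, 0), (0, 1), (0, 0)]
placebo = [(0, 1), (1, 0), (0, 0)]
print(win_ratio(treated, placebo))  # → 5.0
```

A ratio above 1 favors the treatment arm, which is why both win ratios reported for DIAMOND (1.25 and 1.53) favor patiromer; the published confidence intervals and P values come from the formal pairwise-comparison inference, which this toy sketch omits.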

Patiromer also seemed well tolerated, Dr. Butler said.

Hyperkalemia is “one of the most common excuses” from clinicians for failing to up-titrate RASI medicine in patients with heart failure, Dr. Januzzi said. DIAMOND was less about patiromer itself than about ways “to facilitate better GDMT, where we’re really falling short of the mark. During the run-in phase they were able to get the vast majority of individuals to target, which to me is a critically important point, and emblematic of the need for things that facilitate this kind of excellent care.”

DIAMOND was funded by Vifor Pharma. Dr. Butler disclosed receiving consulting fees from Abbott, Adrenomed, Amgen, Applied Therapeutics, Array, AstraZeneca, Bayer, Boehringer Ingelheim, CVRx, G3 Pharma, Impulse Dynamics, Innolife, Janssen, LivaNova, Luitpold, Medtronic, Merck, Novartis, Novo Nordisk, Relypsa, Sequana Medical, and Vifor Pharma. Dr. Januzzi disclosed receiving consultant fees or honoraria from Abbott Laboratories, Imbria, Jana Care, Novartis, Prevencio, and Roche Diagnostics; serving on a data safety monitoring board for AbbVie, Amgen, Bayer Healthcare Pharmaceuticals, Beyer, CVRx, and Takeda Pharmaceuticals North America; and receiving research grants from Abbott Laboratories, Janssen, and Vifor Pharma.

A version of this article first appeared on Medscape.com.

Meeting/Event
Publications
Topics
Sections
Meeting/Event
Meeting/Event

Several of the core medications for patients with heart failure with reduced ejection fraction (HFrEF) come with a well-known risk of causing hyperkalemia, to which many clinicians respond by pulling back on dosing or withdrawing the culprit drug.

But accompanying renin-angiotensin system–inhibiting agents with the potassium-sequestrant patiromer (Veltassa, Vifor Pharma) appears to shield patients against hyperkalemia enough that they can take more RASI medications at higher doses, suggests a randomized, a controlled study.

Catherine Hackett/MDedge News
Dr. Javed Butler

The DIAMOND trial’s HFrEF patients, who had current or a history of RASI-related hyperkalemia, added either patiromer or placebo to their guideline-directed medical therapy (GDMT), which includes, even emphasizes, the culprit medication. They include ACE inhibitors, angiotensin-receptor blockers (ARBs), angiotensin-receptor/neprilysin inhibitors (ARNIs), and mineralocorticoid receptor antagonists (MRAs).

Those taking patiromer tolerated more intense RASI therapy – including MRAs, which are especially prone to causing hyperkalemia – than the patients assigned to placebo. They also maintained lower potassium concentrations and experienced fewer clinically important hyperkalemia episodes, reported Javed Butler, MD, MPH, MBA, Baylor Scott and White Research Institute, Dallas, at the annual scientific sessions of the American College of Cardiology.

The apparent benefit from patiromer came in part from an advantage for a composite hyperkalemia-event endpoint that included mortality, Dr. Butler noted. That advantage seemed to hold regardless of age, sex, body mass index, HFrEF symptom severity, or initial natriuretic peptide levels.

Patients who took patiromer, compared with those who took placebo, showed a 37% reduction in risk for hyperkalemia (P = .006), defined as potassium levels exceeding 5.5 mEq/L, over a median follow-up of 27 weeks. They were 38% less likely to have their MRA dosage reduced to below target level (P = .006).

More patients in the patiromer group than in the control group attained at least 50% of target dosage for MRAs and ACE inhibitors, ARBs, or ARNIs (92% vs. 87%; P = .015).

Patients with HFrEF are unlikely to achieve best possible outcomes without GDMT optimization, but failure to optimize is often attributed to hyperkalemia concerns. DIAMOND, Dr. Butler said, suggests that, by adding the potassium sequestrant to GDMT, “you can simultaneously control potassium and optimize RASI therapy.” Many clinicians seem to believe they can achieve only one or the other.

DIAMOND was too underpowered to show whether preventing hyperkalemia with patiromer could improve clinical outcomes. But failure to optimize RASI medication in HFrEF can worsen risk for heart failure events and death. So “it stands to reason that optimization of RASI therapy without a concomitant risk of hyperkalemia may, in the long run, lead to better outcomes for these patients,” Dr. Butler said in an interview.

Given the drug’s ability to keep potassium levels in check during RASI therapy, Dr. Butler said, “hypokalemia should not be a reason for suboptimal therapy.”

Patiromer and other potassium sequestrants have been available in the United States and Europe for 4-6 years, but their value as adjuncts to RASI medication in HFrEF or other heart failure has been unclear.

Courtesy Massachusetts General Hospital
Dr. James L. Januzzi

“There’s a good opportunity to expand the use of the drug. The question is, in whom and when?” James L. Januzzi, MD, Massachusetts General Hospital, Boston, said in an interview.

Some HFrEF patients on GDMT “should be treated with patiromer. The bigger question is, should we give someone who has a history of hyperkalemia another chance at GDMT before we treat them with patiromer? Because they may not necessarily develop hyperkalemia a second time,” said Dr. Januzzi, who was on the DIAMOND endpoint-adjudication committee.

Among the most notable findings of the trial, he said, is that the number of people who developed hyperkalemia on RASI medication, although significantly elevated, “wasn’t as high as they expected it would be,” he said. “The data from DIAMOND argue that if a really significant majority does not become hyperkalemic on rechallenge, jumping straight to a potassium-binding drug may be premature.”

Physicians across specialties can differ in how they interpret potassium-level elevation and can use various cut points to flag when to stop RASI medication or at least hold back on up-titration, Dr. Butler observed. “Cardiologists have a different threshold of potassium that they tolerate than say, for instance, a nephrologist.”

Useful, then, might be a way to tell which patients are most likely to develop hyperkalemia with RASI up-titration and so might benefit from a potassium-binding agent right away. But DIAMOND, Dr. Butler said, “does not necessarily define any patient phenotype or any potassium level where we would say that you should use a potassium binder.”

The trial entered 1,642 patients with HFrEF and current or past RASI-related hyperkalemia to a 12-week run-in phase for optimization of GDMT with patiromer. The trial was conducted at nearly 400 centers in 21 countries.

RASI medication could be optimized in 85% of the cohort, from which 878 patients were randomly assigned either to continue optimized GDMT with patiromer or to have the potassium-sequestrant replaced with a placebo.

The patients on patiromer showed a 0.03-mEq/L mean rise in serum potassium levels from randomization to the end of the study, the primary endpoint, compared with a 0.13 mEq/L mean increase for those in the control group (P < .001), Dr. Butler reported.

The win ratio for a RASI-use score hierarchically featuring cardiovascular death and CV hospitalization for hyperkalemia at several levels of severity was 1.25 (95% confidence interval, 1.003-1.564; P = .048), favoring the patiromer group. The win ratio solely for hyperkalemia-related events also favored patients on patiromer, at 1.53 (95% CI, 1.23-1.91; P < .001).

Patiromer also seemed well tolerated, Dr. Butler said.

Hyperkalemia is “one of the most common excuses” from clinicians for failing to up-titrate RASI medicine in patients with heart failure, Dr. Januzzi said. DIAMOND was less about patiromer itself than about ways “to facilitate better GDMT, where we’re really falling short of the mark. During the run-in phase they were able to get the vast majority of individuals to target, which to me is a critically important point, and emblematic of the need for things that facilitate this kind of excellent care.”

DIAMOND was funded by Vifor Pharma. Dr. Butler disclosed receiving consulting fees from Abbott, Adrenomed, Amgen, Applied Therapeutics, Array, AstraZeneca, Bayer, Boehringer Ingelheim, CVRx, G3 Pharma, Impulse Dynamics, Innolife, Janssen, LivaNova, Luitpold, Medtronic, Merck, Novartis, Novo Nordisk, Relypsa, Sequana Medical, and Vifor Pharma. Dr. Januzzi disclosed receiving consultant fees or honoraria from Abbott Laboratories, Imbria, Jana Care, Novartis, Prevencio, and Roche Diagnostics; serving on a data safety monitoring board for AbbVie, Amgen, Bayer Healthcare Pharmaceuticals, Beyer, CVRx, and Takeda Pharmaceuticals North America; and receiving research grants from Abbott Laboratories, Janssen, and Vifor Pharma.

A version of this article first appeared on Medscape.com.

Several of the core medications for patients with heart failure with reduced ejection fraction (HFrEF) come with a well-known risk of causing hyperkalemia, to which many clinicians respond by pulling back on dosing or withdrawing the culprit drug.

But accompanying renin-angiotensin system–inhibiting agents with the potassium-sequestrant patiromer (Veltassa, Vifor Pharma) appears to shield patients against hyperkalemia enough that they can take more RASI medications at higher doses, suggests a randomized, a controlled study.

Catherine Hackett/MDedge News
Dr. Javed Butler

The DIAMOND trial’s HFrEF patients, who had current or a history of RASI-related hyperkalemia, added either patiromer or placebo to their guideline-directed medical therapy (GDMT), which includes, even emphasizes, the culprit medication. They include ACE inhibitors, angiotensin-receptor blockers (ARBs), angiotensin-receptor/neprilysin inhibitors (ARNIs), and mineralocorticoid receptor antagonists (MRAs).

Those taking patiromer tolerated more intense RASI therapy – including MRAs, which are especially prone to causing hyperkalemia – than the patients assigned to placebo. They also maintained lower potassium concentrations and experienced fewer clinically important hyperkalemia episodes, reported Javed Butler, MD, MPH, MBA, Baylor Scott and White Research Institute, Dallas, at the annual scientific sessions of the American College of Cardiology.

The apparent benefit from patiromer came in part from an advantage for a composite hyperkalemia-event endpoint that included mortality, Dr. Butler noted. That advantage seemed to hold regardless of age, sex, body mass index, HFrEF symptom severity, or initial natriuretic peptide levels.

Patients who took patiromer, compared with those who took placebo, showed a 37% reduction in risk for hyperkalemia (P = .006), defined as potassium levels exceeding 5.5 mEq/L, over a median follow-up of 27 weeks. They were 38% less likely to have their MRA dosage reduced to below target level (P = .006).

More patients in the patiromer group than in the control group attained at least 50% of target dosage for MRAs and ACE inhibitors, ARBs, or ARNIs (92% vs. 87%; P = .015).

Patients with HFrEF are unlikely to achieve best possible outcomes without GDMT optimization, but failure to optimize is often attributed to hyperkalemia concerns. DIAMOND, Dr. Butler said, suggests that, by adding the potassium sequestrant to GDMT, “you can simultaneously control potassium and optimize RASI therapy.” Many clinicians seem to believe they can achieve only one or the other.

DIAMOND was underpowered to show whether preventing hyperkalemia with patiromer could improve clinical outcomes. But failure to optimize RASI medication in HFrEF can worsen the risk for heart failure events and death, so “it stands to reason that optimization of RASI therapy without a concomitant risk of hyperkalemia may, in the long run, lead to better outcomes for these patients,” Dr. Butler said in an interview.

Given the drug’s ability to keep potassium levels in check during RASI therapy, Dr. Butler said, “hyperkalemia should not be a reason for suboptimal therapy.”

Patiromer and other potassium sequestrants have been available in the United States and Europe for 4-6 years, but their value as adjuncts to RASI medication in HFrEF or other forms of heart failure has been unclear.

“There’s a good opportunity to expand the use of the drug. The question is, in whom and when?” James L. Januzzi, MD, Massachusetts General Hospital, Boston, said in an interview.

Some HFrEF patients on GDMT “should be treated with patiromer. The bigger question is, should we give someone who has a history of hyperkalemia another chance at GDMT before we treat them with patiromer? Because they may not necessarily develop hyperkalemia a second time,” said Dr. Januzzi, who was on the DIAMOND endpoint-adjudication committee.

Among the trial’s most notable findings, he said, is that the number of people who developed hyperkalemia on RASI medication, although significant, “wasn’t as high as they expected it would be.” “The data from DIAMOND argue that if a really significant majority does not become hyperkalemic on rechallenge, jumping straight to a potassium-binding drug may be premature.”

Physicians across specialties can differ in how they interpret potassium-level elevation and can use various cut points to flag when to stop RASI medication or at least hold back on up-titration, Dr. Butler observed. “Cardiologists have a different threshold of potassium that they tolerate than say, for instance, a nephrologist.”

Useful, then, might be a way to tell which patients are most likely to develop hyperkalemia with RASI up-titration and so might benefit from a potassium-binding agent right away. But DIAMOND, Dr. Butler said, “does not necessarily define any patient phenotype or any potassium level where we would say that you should use a potassium binder.”

The trial entered 1,642 patients with HFrEF and current or past RASI-related hyperkalemia into a 12-week run-in phase for optimization of GDMT with patiromer. The trial was conducted at nearly 400 centers in 21 countries.

RASI medication could be optimized in 85% of the cohort, from which 878 patients were randomly assigned either to continue optimized GDMT with patiromer or to have the potassium sequestrant replaced with placebo.

The patients on patiromer showed a 0.03-mEq/L mean rise in serum potassium levels from randomization to the end of the study, the primary endpoint, compared with a 0.13 mEq/L mean increase for those in the control group (P < .001), Dr. Butler reported.

The win ratio for a RASI-use score hierarchically featuring cardiovascular death and CV hospitalization for hyperkalemia at several levels of severity was 1.25 (95% confidence interval, 1.003-1.564; P = .048), favoring the patiromer group. The win ratio solely for hyperkalemia-related events also favored patients on patiromer, at 1.53 (95% CI, 1.23-1.91; P < .001).
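A hierarchical win-ratio analysis of the kind reported above works by comparing every treated patient against every control patient, tier by tier, down the outcome hierarchy. The sketch below uses toy data only (not the DIAMOND dataset; the tuple encoding of outcomes is an assumption) to show the mechanics:

```python
# Illustrative sketch of a hierarchical (Pocock-style) win ratio.
# Toy data only -- NOT the DIAMOND dataset. Each patient is a tuple of
# outcomes ordered by clinical priority (most serious first); at each
# tier, a larger value means a better outcome.

def compare(a, b):
    """+1 if patient a 'wins' over patient b, -1 if a loses, 0 if tied,
    walking down the hierarchy and stopping at the first tier where
    the two patients differ."""
    for xa, xb in zip(a, b):
        if xa != xb:
            return 1 if xa > xb else -1
    return 0

def win_ratio(treated, control):
    """Total wins divided by total losses over all treated-vs-control pairs."""
    wins = losses = 0
    for t in treated:
        for c in control:
            result = compare(t, c)
            if result > 0:
                wins += 1
            elif result < 0:
                losses += 1
    return wins / losses

# Hypothetical tiers: (survived, avoided a hyperkalemia-related event)
treated = [(1, 1), (1, 0), (1, 1)]
control = [(1, 0), (0, 0), (1, 1)]
print(win_ratio(treated, control))  # 5 wins / 1 loss -> 5.0
```

A ratio above 1 favors the treated group; the reported win ratios of 1.25 and 1.53 are read the same way.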

Patiromer also seemed well tolerated, Dr. Butler said.

Hyperkalemia is “one of the most common excuses” from clinicians for failing to up-titrate RASI medicine in patients with heart failure, Dr. Januzzi said. DIAMOND was less about patiromer itself than about ways “to facilitate better GDMT, where we’re really falling short of the mark. During the run-in phase they were able to get the vast majority of individuals to target, which to me is a critically important point, and emblematic of the need for things that facilitate this kind of excellent care.”

DIAMOND was funded by Vifor Pharma. Dr. Butler disclosed receiving consulting fees from Abbott, Adrenomed, Amgen, Applied Therapeutics, Array, AstraZeneca, Bayer, Boehringer Ingelheim, CVRx, G3 Pharma, Impulse Dynamics, Innolife, Janssen, LivaNova, Luitpold, Medtronic, Merck, Novartis, Novo Nordisk, Relypsa, Sequana Medical, and Vifor Pharma. Dr. Januzzi disclosed receiving consultant fees or honoraria from Abbott Laboratories, Imbria, Jana Care, Novartis, Prevencio, and Roche Diagnostics; serving on a data safety monitoring board for AbbVie, Amgen, Bayer Healthcare Pharmaceuticals, Beyer, CVRx, and Takeda Pharmaceuticals North America; and receiving research grants from Abbott Laboratories, Janssen, and Vifor Pharma.

A version of this article first appeared on Medscape.com.

FROM ACC 2022

AI model predicts ovarian cancer responses

An artificial intelligence (AI) model successfully predicted which patients with high-grade serous ovarian cancer would have excellent responses to therapy. The model, which used still-frame images from pretreatment laparoscopic surgical videos, had an overall accuracy rate of 93%, according to the pilot study’s first author, Deanna Glassman, MD, an oncology fellow at the University of Texas MD Anderson Cancer Center, Houston.

Dr. Glassman described her research in a presentation given at the annual meeting of the Society of Gynecologic Oncology.

While the AI model identified all of the excellent-response patients, it misclassified about a third of the poor-response patients as excellent responders. The smaller number of images in the poor-response category, Dr. Glassman speculated, may explain the misclassification.

Researchers took 435 representative still-frame images from pretreatment laparoscopic surgical videos of 113 patients with pathologically proven high-grade serous ovarian cancer. They used 70% of the images to train the model, 10% for validation, and 20% for testing. The model drew on images from four anatomical locations (diaphragm, omentum, peritoneum, and pelvis) and was trained with deep learning and neural networks to extract morphological disease patterns and correlate them with one of two outcomes: excellent response or poor response. An excellent response was defined as progression-free survival (PFS) of 12 months or more, a poor response as PFS of 6 months or less. In the retrospective study of images, after exclusion of 32 gray-zone patients, 75 patients (66%) had durable responses to therapy and 6 (5%) had poor responses.
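The 70/10/20 split described above can be sketched as follows (a minimal illustration; the filenames and seed are hypothetical, not drawn from the MD Anderson dataset):

```python
# Hedged sketch of a 70/10/20 train/validation/test split over still frames.
# Illustrative only; the filenames and seed are assumptions.
import random

def split_images(images, seed=0, train=0.70, val=0.10):
    imgs = list(images)
    random.Random(seed).shuffle(imgs)       # reproducible shuffle
    n = len(imgs)
    n_train = int(n * train)
    n_val = int(n * val)
    return (imgs[:n_train],                 # 70% training
            imgs[n_train:n_train + n_val],  # 10% validation
            imgs[n_train + n_val:])         # remaining ~20% test

frames = [f"frame_{i:03d}.png" for i in range(435)]  # 435 still frames
train_set, val_set, test_set = split_images(frames)
print(len(train_set), len(val_set), len(test_set))   # 304 43 88
```

In practice such splits are usually made per patient rather than per frame, so that images from one patient never appear in both training and test sets; the study abstract does not specify which was done here.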

The PFS was 19 months in the excellent-response group and 3 months in the poor-response group.

Clinicians have often observed differences in gross morphology within the single histologic diagnosis of high-grade serous ovarian cancer. The intent of the research was to determine whether AI could detect these distinct morphological patterns in still-frame images taken at the time of laparoscopy and correlate them with eventual clinical outcomes. Dr. Glassman and colleagues are currently validating the model in a much larger cohort and will look into clinical testing.

“The big-picture goal,” Dr. Glassman said in an interview, “would be to utilize the model to predict which patients would do well with traditional standard of care treatments and those who wouldn’t do well so that we can personalize the treatment plan for those patients with alternative agents and therapies.”

Once validated, the model could also be employed to identify patterns of disease in other gynecologic cancers or distinguish between viable and necrosed malignant tissue.

The study’s predominant limitation was the small sample size, which is being addressed in a larger ongoing study.

Funding was provided by a T32 grant, MD Anderson Cancer Center Support Grant, MD Anderson Ovarian Cancer Moon Shot, SPORE in Ovarian Cancer, the American Cancer Society, and the Ovarian Cancer Research Alliance. Dr. Glassman declared no relevant financial relationships.

FROM SGO 2022

About 19% of COVID-19 headaches become chronic

Approximately one in five patients who presented with headache during the acute phase of COVID-19 developed chronic daily headache, according to a study published in Cephalalgia. The greater the headache’s intensity during the acute phase, the greater the likelihood that it would persist.

The research, carried out by members of the Headache Study Group of the Spanish Society of Neurology, evaluated the evolution of headache in more than 900 Spanish patients. Because they found that headache intensity during the acute phase was associated with a more prolonged duration of headache, the team stressed the importance of promptly evaluating patients who have had COVID-19 and who then experience persistent headache.
 

Long-term evolution unknown

Headache is a common symptom of COVID-19, but its long-term evolution remains unknown. The objective of this study was to evaluate the long-term duration of headache in patients who presented with this symptom during the acute phase of the disease.

Recruitment for this multicenter study took place in March and April 2020. The 905 patients who were enrolled came from six level 3 hospitals in Spain. All completed 9 months of neurologic follow-up.

Their median age was 51 years, 66.5% were women, and more than half (52.7%) had a history of primary headache. About half of the patients required hospitalization (50.5%); the rest were treated as outpatients. The most common headache phenotype was holocranial (67.8%) of severe intensity (50.6%).
 

Persistent headache common

In the 96.6% of cases for which data were available, the median duration of headache was 14 days. The headache persisted at 1 month in 31.1% of patients, at 2 months in 21.5%, at 3 months in 19%, at 6 months in 16.8%, and at 9 months in 16.0%.

“The median duration of COVID-19 headache is around 2 weeks,” David García Azorín, MD, PhD, a member of the Spanish Society of Neurology and one of the coauthors of the study, said in an interview. “However, almost 20% of patients experience it for longer than that. When still present at 2 months, the headache is more likely to follow a chronic daily pattern.” Dr. García Azorín is a neurologist and clinical researcher at the headache unit of the Hospital Clínico Universitario in Valladolid, Spain.

“So, if the headache isn’t letting up, it’s important to make the most of that window of opportunity and provide treatment in that period of 6-12 weeks,” he continued. “To do this, the best option is to carry out preventive treatment so that the patient will have a better chance of recovering.”

Study participants whose headache persisted at 9 months were older and were mostly women. They were less likely to have had pneumonia or to have experienced stabbing pain, photophobia, or phonophobia. They reported that the headache worsened with physical activity but less often described it as throbbing.
 

Secondary tension headaches

On the other hand, Jaime Rodríguez Vico, MD, head of the headache unit at the Jiménez Díaz Foundation Hospital in Madrid, said in an interview that, according to his case studies, the most striking characteristics of post–COVID-19 headaches “in general are secondary, with similarities to tension headaches that patients are able to differentiate from other clinical types of headache. In patients with migraine, very often we see that we’re dealing with a trigger. In other words, more migraines – and more intense ones at that – are brought about.”

He added: “Generally, post–COVID-19 headache usually lasts 1-2 weeks, but we have cases of it lasting several months and even over a year with persistent daily headache. These more persistent cases are probably connected to another type of pathology that makes them more susceptible to becoming chronic, something that occurs in another type of primary headache known as new daily persistent headache.”
 

Primary headache exacerbation

Dr. García Azorín pointed out that it’s not uncommon that among people who already have primary headache, the condition worsens after they become infected with SARS-CoV-2. However, many people differentiate the headache associated with the infection from their usual headache because after becoming infected, their headache is predominantly frontal, pressure-like, and chronic.

“Having a prior history of headache is one of the factors that can increase the likelihood that a headache experienced while suffering from COVID-19 will become chronic,” he noted.

This study also found that, more often than not, patients with persistent headache at 9 months had migraine-like pain.

As for headaches in these patients beyond 9 months, “based on our research, the evolution is quite variable,” said Dr. Rodríguez Vico. “Our unit’s numbers are skewed due to the high number of migraine cases that we follow, and therefore our high volume of migraine patients who’ve gotten worse. The same thing happens with COVID-19 vaccines. Migraine is a polygenic disorder with multiple variants and a pathophysiology that we are just beginning to describe. This is why one patient is completely different from another. It’s a real challenge.”

Infections are a common cause of acute and chronic headache. The persistence of a headache after an infection may be caused by the infection becoming chronic, as happens in some types of chronic meningitis, such as tuberculous meningitis. It may also be due to a persistent immune response and activation, or to the uncovering or worsening of a primary headache coincident with the infection, added Dr. García Azorín.

“Likewise, there are other people who have a biological predisposition to headache as a multifactorial disorder and polygenic disorder, such that a particular stimulus – from trauma or an infection to alcohol consumption – can cause them to develop a headache very similar to a migraine,” he said.
 

Providing prognosis and treatment

Certain factors can give an idea of how long the headache might last. The study’s univariate analysis showed that age, female sex, headache intensity, pressure-like quality, the presence of photophobia/phonophobia, and worsening with physical activity were associated with headache of longer duration. But in the multivariate analysis, only headache intensity during the acute phase remained statistically significant (hazard ratio, 0.655; 95% confidence interval, 0.582-0.737; P < .001).
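As a rough illustration of how such a hazard ratio, confidence interval, and P value hang together, the sketch below back-calculates the Wald statistic from the reported figures (an illustrative calculation that assumes the CI was computed on the log-hazard scale, as is standard for Cox models; it is not a reanalysis of the study data):

```python
# Hedged consistency check on the reported multivariate result
# (HR 0.655; 95% CI, 0.582-0.737; P < .001). Illustrative only.
import math

hr, lo, hi = 0.655, 0.582, 0.737

# Standard error of log(HR), recovered from the width of the 95% CI.
se = (math.log(hi) - math.log(lo)) / (2 * 1.96)

# Wald z-statistic for the null hypothesis HR = 1.
z = math.log(hr) / se

print(round(se, 3), round(z, 1))  # |z| near 7, consistent with P < .001
```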

When asked whether they planned to continue the study, Dr. García Azorín commented, “The main questions that have arisen from this study have been, above all: ‘Why does this headache happen?’ and ‘How can it be treated or avoided?’ To answer them, we’re looking into pain: which factors could predispose a person to it and which changes may be associated with its presence.”

In addition, different treatments that may improve patient outcomes are being evaluated, because to date, treatment has been empirical and based on the predominant pain phenotype.

In any case, most doctors currently treat post–COVID-19 headache on the basis of how similar the symptoms are to those of other primary headaches. “Given the impact that headache has on patients’ quality of life, there’s a pressing need for controlled studies on possible treatments and their effectiveness,” noted Patricia Pozo Rosich, MD, PhD, one of the coauthors of the study.

“We at the Spanish Society of Neurology truly believe that if these patients were to have this symptom correctly addressed from the start, they could avoid many of the problems that arise in the situation becoming chronic,” she concluded.

Dr. García Azorín and Dr. Rodríguez Vico disclosed no relevant financial relationships.

A version of this article first appeared on Medscape.com.

Issue: Neurology Reviews 30(5)

Approximately one in five patients who presented with headache during the acute phase of COVID-19 developed chronic daily headache, according to a study published in Cephalalgia. The greater the headache’s intensity during the acute phase, the greater the likelihood that it would persist.

The research, carried out by members of the Headache Study Group of the Spanish Society of Neurology, evaluated the evolution of headache in more than 900 Spanish patients. Because they found that headache intensity during the acute phase was associated with a more prolonged duration of headache, the team stressed the importance of promptly evaluating patients who have had COVID-19 and who then experience persistent headache.
 

Long-term evolution unknown

Headache is a common symptom of COVID-19, but its long-term evolution remains unknown. The objective of this study was to evaluate the long-term duration of headache in patients who presented with this symptom during the acute phase of the disease.

Recruitment for this multicenter study took place in March and April 2020. The 905 patients who were enrolled came from six level 3 hospitals in Spain. All completed 9 months of neurologic follow-up.

Their median age was 51 years, 66.5% were women, and more than half (52.7%) had a history of primary headache. About half of the patients required hospitalization (50.5%); the rest were treated as outpatients. The most common headache phenotype was holocranial (67.8%) of severe intensity (50.6%).
 

Persistent headache common

In the 96.6% cases for which data were available, the median duration of headache was 14 days. The headache persisted at 1 month in 31.1% of patients, at 2 months in 21.5%, at 3 months in 19%, at 6 months in 16.8%, and at 9 months in 16.0%.

“The median duration of COVID-19 headache is around 2 weeks,” David García Azorín, MD, PhD, a member of the Spanish Society of Neurology and one of the coauthors of the study, said in an interview. “However, almost 20% of patients experience it for longer than that. When still present at 2 months, the headache is more likely to follow a chronic daily pattern.” Dr. García Azorín is a neurologist and clinical researcher at the headache unit of the Hospital Clínico Universitario in Valladolid, Spain.

“So, if the headache isn’t letting up, it’s important to make the most of that window of opportunity and provide treatment in that period of 6-12 weeks,” he continued. “To do this, the best option is to carry out preventive treatment so that the patient will have a better chance of recovering.”

Study participants whose headache persisted at 9 months were older and were mostly women. They were less likely to have had pneumonia or to have experienced stabbing pain, photophobia, or phonophobia. They reported that the headache got worse when they engaged in physical activity but less frequently manifested as a throbbing headache.
 

Secondary tension headaches

On the other hand, Jaime Rodríguez Vico, MD, head of the headache unit at the Jiménez Díaz Foundation Hospital in Madrid, said in an interview that, according to his case studies, the most striking characteristics of post–COVID-19 headaches “in general are secondary, with similarities to tension headaches that patients are able to differentiate from other clinical types of headache. In patients with migraine, very often we see that we’re dealing with a trigger. In other words, more migraines – and more intense ones at that – are brought about.”

He added: “Generally, post–COVID-19 headache usually lasts 1-2 weeks, but we have cases of it lasting several months and even over a year with persistent daily headache. These more persistent cases are probably connected to another type of pathology that makes them more susceptible to becoming chronic, something that occurs in another type of primary headache known as new daily persistent headache.”
 

Primary headache exacerbation

Dr. García Azorín pointed out that it’s not uncommon that among people who already have primary headache, their condition worsens after they become infected with SARS-CoV-2. However, many people differentiate the headache associated with the infection from their usual headache because after becoming infected, their headache is predominantly frontal, oppressive, and chronic.

“Having a prior history of headache is one of the factors that can increase the likelihood that a headache experienced while suffering from COVID-19 will become chronic,” he noted.

This study also found that, more often than not, patients with persistent headache at 9 months had migraine-like pain.

As for headaches in these patients beyond 9 months, “based on our research, the evolution is quite variable,” said Dr. Rodríguez Vico. “Our unit’s numbers are skewed due to the high number of migraine cases that we follow, and therefore our high volume of migraine patients who’ve gotten worse. The same thing happens with COVID-19 vaccines. Migraine is a polygenic disorder with multiple variants and a pathophysiology that we are just beginning to describe. This is why one patient is completely different from another. It’s a real challenge.”

Infections are a common cause of acute and chronic headache. The persistence of a headache after an infection may be caused by the infection becoming chronic, as happens in some types of chronic meningitis, such as tuberculous meningitis. It may also be caused by the persistence of a certain response and activation of the immune system or to the uncovering or worsening of a primary headache coincident with the infection, added Dr. García Azorín.

“Likewise, there are other people who have a biological predisposition to headache as a multifactorial disorder and polygenic disorder, such that a particular stimulus – from trauma or an infection to alcohol consumption – can cause them to develop a headache very similar to a migraine,” he said.
 

Providing prognosis and treatment

Certain factors can give an idea of how long the headache might last. The study’s univariate analysis showed that age, female sex, headache intensity, pressure-like quality, the presence of photophobia/phonophobia, and worsening with physical activity were associated with headache of longer duration. But in the multivariate analysis, only headache intensity during the acute phase remained statistically significant (hazard ratio, 0.655; 95% confidence interval, 0.582-0.737; P < .001).

When asked whether they planned to continue the study, Dr. García Azorín commented, “The main questions that have arisen from this study have been, above all: ‘Why does this headache happen?’ and ‘How can it be treated or avoided?’ To answer them, we’re looking into pain: which factors could predispose a person to it and which changes may be associated with its presence.”

In addition, different treatments that may improve patient outcomes are being evaluated, because to date, treatment has been empirical and based on the predominant pain phenotype.

In any case, most doctors currently treat post–COVID-19 headache on the basis of how similar the symptoms are to those of other primary headaches. “Given the impact that headache has on patients’ quality of life, there’s a pressing need for controlled studies on possible treatments and their effectiveness,” noted Patricia Pozo Rosich, MD, PhD, one of the coauthors of the study.

“We at the Spanish Society of Neurology truly believe that if these patients were to have this symptom correctly addressed from the start, they could avoid many of the problems that arise in the situation becoming chronic,” she concluded.

Dr. García Azorín and Dr. Rodríguez Vico disclosed no relevant financial relationships.

A version of this article first appeared on Medscape.com.

Approximately one in five patients who presented with headache during the acute phase of COVID-19 developed chronic daily headache, according to a study published in Cephalalgia. The greater the headache’s intensity during the acute phase, the greater the likelihood that it would persist.

The research, carried out by members of the Headache Study Group of the Spanish Society of Neurology, evaluated the evolution of headache in more than 900 Spanish patients. Because they found that headache intensity during the acute phase was associated with a more prolonged duration of headache, the team stressed the importance of promptly evaluating patients who have had COVID-19 and who then experience persistent headache.
 

Long-term evolution unknown

Headache is a common symptom of COVID-19, but its long-term evolution remains unknown. The objective of this study was to evaluate the long-term duration of headache in patients who presented with this symptom during the acute phase of the disease.

Recruitment for this multicenter study took place in March and April 2020. The 905 patients who were enrolled came from six level 3 hospitals in Spain. All completed 9 months of neurologic follow-up.

Their median age was 51 years, 66.5% were women, and more than half (52.7%) had a history of primary headache. About half of the patients required hospitalization (50.5%); the rest were treated as outpatients. The most common headache phenotype was holocranial (67.8%) of severe intensity (50.6%).
 

Persistent headache common

In the 96.6% of cases for which data were available, the median duration of headache was 14 days. The headache persisted at 1 month in 31.1% of patients, at 2 months in 21.5%, at 3 months in 19%, at 6 months in 16.8%, and at 9 months in 16.0%.
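The percentages above can be translated into rough patient counts. This is a back-of-the-envelope sketch only: the per-timepoint denominators are an assumption, since the article reports just the overall enrollment of 905 and the 96.6% data-availability figure.

```python
# Rough conversion of the reported persistence percentages into
# approximate patient counts. Assumes each percentage applies to the
# 96.6% of the 905 enrolled patients with available headache data; the
# article does not give per-timepoint denominators, so these counts are
# illustrative, not the study's own numbers.
enrolled = 905
with_data = round(enrolled * 0.966)  # ~874 patients with duration data

# month -> fraction of patients with persistent headache
persistence = {1: 0.311, 2: 0.215, 3: 0.19, 6: 0.168, 9: 0.160}
approx_counts = {m: round(with_data * f) for m, f in persistence.items()}

for month, n in approx_counts.items():
    print(f"~{n} patients still had headache at {month} month(s)")
```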

“The median duration of COVID-19 headache is around 2 weeks,” David García Azorín, MD, PhD, a member of the Spanish Society of Neurology and one of the coauthors of the study, said in an interview. “However, almost 20% of patients experience it for longer than that. When still present at 2 months, the headache is more likely to follow a chronic daily pattern.” Dr. García Azorín is a neurologist and clinical researcher at the headache unit of the Hospital Clínico Universitario in Valladolid, Spain.

“So, if the headache isn’t letting up, it’s important to make the most of that window of opportunity and provide treatment in that period of 6-12 weeks,” he continued. “To do this, the best option is to carry out preventive treatment so that the patient will have a better chance of recovering.”

Study participants whose headache persisted at 9 months were older and were mostly women. They were less likely to have had pneumonia or to have experienced stabbing pain, photophobia, or phonophobia. They reported that the headache got worse when they engaged in physical activity but less frequently manifested as a throbbing headache.
 

Secondary tension headaches

On the other hand, Jaime Rodríguez Vico, MD, head of the headache unit at the Jiménez Díaz Foundation Hospital in Madrid, said in an interview that, according to his case studies, the most striking characteristics of post–COVID-19 headaches “in general are secondary, with similarities to tension headaches that patients are able to differentiate from other clinical types of headache. In patients with migraine, very often we see that we’re dealing with a trigger. In other words, more migraines – and more intense ones at that – are brought about.”

He added: “Generally, post–COVID-19 headache usually lasts 1-2 weeks, but we have cases of it lasting several months and even over a year with persistent daily headache. These more persistent cases are probably connected to another type of pathology that makes them more susceptible to becoming chronic, something that occurs in another type of primary headache known as new daily persistent headache.”
 

Primary headache exacerbation

Dr. García Azorín pointed out that it's not uncommon for people who already have a primary headache disorder to find that it worsens after they become infected with SARS-CoV-2. However, many can distinguish the headache associated with the infection from their usual headache because, after infection, the headache is predominantly frontal, oppressive, and chronic.

“Having a prior history of headache is one of the factors that can increase the likelihood that a headache experienced while suffering from COVID-19 will become chronic,” he noted.

This study also found that, more often than not, patients with persistent headache at 9 months had migraine-like pain.

As for headaches in these patients beyond 9 months, “based on our research, the evolution is quite variable,” said Dr. Rodríguez Vico. “Our unit’s numbers are skewed due to the high number of migraine cases that we follow, and therefore our high volume of migraine patients who’ve gotten worse. The same thing happens with COVID-19 vaccines. Migraine is a polygenic disorder with multiple variants and a pathophysiology that we are just beginning to describe. This is why one patient is completely different from another. It’s a real challenge.”

Infections are a common cause of acute and chronic headache. A headache that persists after an infection may be caused by the infection itself becoming chronic, as happens in some types of chronic meningitis, such as tuberculous meningitis. It may also be caused by the persistence of a certain response and activation of the immune system, or by the uncovering or worsening of a primary headache coincident with the infection, added Dr. García Azorín.

“Likewise, there are other people who have a biological predisposition to headache as a multifactorial disorder and polygenic disorder, such that a particular stimulus – from trauma or an infection to alcohol consumption – can cause them to develop a headache very similar to a migraine,” he said.
 

Providing prognosis and treatment

Certain factors can give an idea of how long the headache might last. The study’s univariate analysis showed that age, female sex, headache intensity, pressure-like quality, the presence of photophobia/phonophobia, and worsening with physical activity were associated with headache of longer duration. But in the multivariate analysis, only headache intensity during the acute phase remained statistically significant (hazard ratio, 0.655; 95% confidence interval, 0.582-0.737; P < .001).
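Because the outcome modeled here is headache resolution, a hazard ratio below 1 means slower resolution, that is, a longer-lasting headache. A short sketch of how to read the figure, assuming intensity entered the model per one-point increase (the article does not state the exact coding, so the scale is an assumption):

```python
# Reading the Cox-model result: the outcome is headache *resolution*, so
# a hazard ratio below 1 means a lower resolution rate (longer-lasting
# headache). Assumes intensity was modeled per one-point increase; the
# article does not state the exact coding, so this is illustrative.
hr = 0.655
ci_low, ci_high = 0.582, 0.737

reduction = (1 - hr) * 100  # % drop in resolution rate per intensity point
print(f"Per intensity point: ~{reduction:.1f}% lower resolution rate "
      f"(95% CI {100 * (1 - ci_high):.1f}%-{100 * (1 - ci_low):.1f}%)")

# Under proportional hazards, per-point effects multiply:
for k in (1, 2, 3):
    print(f"{k}-point higher intensity -> resolution rate x {hr**k:.3f}")
```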

When asked whether they planned to continue the study, Dr. García Azorín commented, “The main questions that have arisen from this study have been, above all: ‘Why does this headache happen?’ and ‘How can it be treated or avoided?’ To answer them, we’re looking into pain: which factors could predispose a person to it and which changes may be associated with its presence.”

In addition, different treatments that may improve patient outcomes are being evaluated, because to date, treatment has been empirical and based on the predominant pain phenotype.

In any case, most doctors currently treat post–COVID-19 headache on the basis of how similar the symptoms are to those of other primary headaches. “Given the impact that headache has on patients’ quality of life, there’s a pressing need for controlled studies on possible treatments and their effectiveness,” noted Patricia Pozo Rosich, MD, PhD, one of the coauthors of the study.

“We at the Spanish Society of Neurology truly believe that if these patients were to have this symptom correctly addressed from the start, they could avoid many of the problems that arise in the situation becoming chronic,” she concluded.

Dr. García Azorín and Dr. Rodríguez Vico disclosed no relevant financial relationships.

A version of this article first appeared on Medscape.com.

Issue
Neurology Reviews - 30(5)

Article Source
FROM CEPHALALGIA

Publish date: April 7, 2022

Live-donor liver transplants for patients with CRC liver mets

Encouraging improvements in survival have been reported by surgeons who used liver transplants from live donors as a treatment for patients with colorectal cancer (CRC) and unresectable liver metastases. These patients usually have a poor prognosis, and for many, palliative chemotherapy is the standard of care.

“For the first time, we have been able to demonstrate [outside of Norway] that liver transplantation for patients with unresectable liver metastases is feasible with good outcomes,” lead author Gonzalo Sapisochin, MD, PhD, an assistant professor of surgery at the University of Toronto, said in an interview.

“Furthermore, this is the first time we are able to prove that living donation may be a good strategy in this setting,” Dr. Sapisochin said of the series of 10 cases that they published in JAMA Surgery.

The series showed “excellent perioperative outcomes for both donors and recipients,” noted the authors of an accompanying commentary. They said the team “should be commended for adding live-donor liver transplantation to the armamentarium of surgical options for patients with CRC liver metastases.”

However, they expressed concern about the relatively short follow-up of 1.5 years and the “very high” recurrence rate of 30%.

Commenting in an interview, lead editorialist Shimul Shah, MD, an associate professor of surgery and the chief of solid organ transplantation at the University of Cincinnati, said: “I agree that overall survival is an important measure to look at, but it’s hard to look at overall survival with [1.5] years of follow-up.”

Other key areas of concern are the need for more standardized practices and for more data on how liver transplantation compares with patients who just continue to receive chemotherapy.

“I certainly think that there’s a role for liver transplantation in these patients, and I am a big fan of this,” Dr. Shah emphasized, noting that four patients at his own center have recently received liver transplants, including three from deceased donors.

“However, I just think that as a community, we need to be cautious and not get too excited too early,” he said. “We need to keep studying it and take it one step at a time.”

Moving from deceased to living donors

Nearly 70% of patients with CRC develop liver metastases, and when these are unresectable, the prognosis is poor, with 5-year survival rates of less than 10%.

The option of liver transplantation was first reported in 2015 by a group in Norway. Their study included 21 patients with CRC and unresectable liver tumors. They reported a striking improvement in overall survival at 5 years (56% vs. 9% among patients who started first-line chemotherapy).

But with shortages of donor livers, this approach has not caught on. Deceased-donor liver allografts are in short supply in most countries, and recent allocation changes have further shifted available organs away from patients with liver tumors.

An alternative is to use living donors. In a recent study, Dr. Sapisochin and colleagues showed viability and a survival advantage, compared with deceased-donor liver transplantation.

Building on that work, they established treatment protocols at three centers – the University of Rochester (N.Y.) Medical Center, the Cleveland Clinic, and the University Health Network in Toronto.

Of 91 evaluated patients who were prospectively enrolled with liver-confined, unresectable CRC liver metastases, 10 met all inclusion criteria and received living-donor liver transplants between December 2017 and May 2021. The median age of the patients was 45 years; six were men, and four were women.

These patients all had primary tumors greater than stage T2 (six T3 and four T4b). Lymphovascular invasion was present in two patients, and perineural invasion was present in one patient.

The median time from diagnosis of the liver metastases to liver transplant was 1.7 years (range, 1.1-7.8 years).

At a median follow-up of 1.5 years (range, 0.4-2.9 years), recurrences occurred in three patients, with a rate of recurrence-free survival, using Kaplan-Meier estimates, of 62% and a rate of overall survival of 100%.
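The Kaplan-Meier figure of 62% is lower than the naive 7/10 = 70% because patients censored early (still recurrence-free, but with shorter follow-up) shrink the risk set before later recurrences. A minimal estimator with hypothetical follow-up times (the study's per-patient times are not published) shows the mechanics:

```python
# Minimal Kaplan-Meier estimator for recurrence-free survival.
# The times and censoring flags below are hypothetical -- per-patient
# follow-up is not published -- chosen only to show how censoring pulls
# the estimate below the naive 7/10 = 70%.
def kaplan_meier(times, events):
    """times: follow-up in years; events: 1 = recurrence, 0 = censored."""
    at_risk = len(times)
    surv = 1.0
    for _, event in sorted(zip(times, events)):
        if event:                    # recurrence observed at this time
            surv *= 1 - 1 / at_risk
        at_risk -= 1                 # patient leaves the risk set either way
    return surv

# 10 hypothetical patients: 3 recurrences (1), 7 censored (0)
times  = [0.4, 0.6, 0.8, 1.0, 1.2, 1.5, 1.8, 2.0, 2.5, 2.9]
events = [0,   1,   1,   0,   0,   1,   0,   0,   0,   0]

print(f"Naive estimate: {1 - 3/10:.0%}")
print(f"Kaplan-Meier:   {kaplan_meier(times, events):.0%}")
```

With this particular censoring pattern the estimate happens to land near the reported 62%; other patterns give other values, which is exactly why the censoring-aware estimate is the one worth reporting.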

Rates of morbidity associated with transplantation were no higher than those observed in established standards for the donors or recipients, the authors noted.

Among transplant recipients, three patients had no Clavien-Dindo complications; three had grade II, and four had grade III complications. Among donors, five had no complications, four had grade I, and one had grade III complications.

All 10 donors were discharged from the hospital 4-7 days after surgery and recovered fully.

All three patients who experienced recurrences were treated with palliative chemotherapy. One died of disease after 3 months of treatment. As of the time of publication of the study, the other two had survived for 2 or more years following their live donor liver transplant.
 

Patient selection key

The authors are now investigating tumor subtypes, responses in CRC liver metastases, and other factors, with the aim of developing a novel screening method to identify appropriate candidates more quickly.

In the meantime, they emphasized that indicators of disease biology, such as the Oslo Score, the Clinical Risk Score, and sustained clinical response to systemic therapy, “remain the key filters through which to select patients who have sufficient opportunity for long-term cancer control, which is necessary to justify the risk to a living donor.”

Dr. Sapisochin reported receiving grants from Roche and Bayer and personal fees from Integra, Roche, AstraZeneca, and Novartis outside the submitted work. Dr. Shah disclosed no relevant financial relationships.

A version of this article first appeared on Medscape.com.


Article Source
FROM JAMA SURGERY

Some leukemias detectable up to 16 years before diagnosis?

The preclinical phase of chronic lymphocytic leukemia (CLL) may exist longer than previously thought, even in adverse-prognostic cases, as suggested by a sequencing analysis of blood samples obtained up to 22 years prior to CLL diagnosis.

Previous analyses have detected monoclonal B-cell lymphocytosis (MBL), a CLL precursor state, up to 6 years before CLL diagnosis, the investigators explained, noting that “[a]nother prognostically relevant immunogenetic feature of CLL concerns the stereotype of the B-cell receptor immunoglobulins (BcR IG).”

“Indeed, distinct stereotyped subsets can be defined by the expression of shared sequence motifs and are associated with particular presentation and outcomes,” P. Martijn Kolijn, PhD, a researcher in the department of immunology at Erasmus Medical Center, Rotterdam, the Netherlands, and colleagues wrote in a brief report published online in Blood.

In an effort to “gain insight into the composition of the BcR IG repertoire during the early stages of CLL,” the investigators used next-generation sequencing to analyze 124 blood samples taken from healthy individuals up to 22 years before they received a diagnosis of CLL or small lymphocytic leukemia (SLL). An additional 118 matched control samples were also analyzed.

Study subjects were participants in the European Prospective Investigation into Cancer and Nutrition (EPIC) cohort.

“First, unsurprisingly, we observed a significant difference in the frequency of the dominant clonotype in CLL patients versus controls with a median frequency of 54.9%, compared to only 0.38% in controls,” they wrote.
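The dominant clonotype frequency quoted here is, in repertoire sequencing, typically the share of IGH reads supporting the single most abundant clonotype. A minimal sketch of that calculation (the clonotype labels and read counts below are invented for illustration, not taken from the study):

```python
# Dominant clonotype frequency as commonly computed from repertoire
# sequencing: reads supporting the most abundant IGH clonotype divided
# by total IGH reads. Labels and counts below are invented.
from collections import Counter

def dominant_clonotype_freq(clonotype_reads):
    """Fraction of reads belonging to the most abundant clonotype."""
    counts = Counter(clonotype_reads)
    return max(counts.values()) / sum(counts.values())

# Hypothetical pre-leukemic sample: one expanded clone among a diverse
# background of singleton clonotypes
sample = (["IGHV4-34_clone"] * 550
          + ["IGHV1-69_clone"] * 30
          + [f"background_{i}" for i in range(420)])

print(f"Dominant clonotype frequency: {dominant_clonotype_freq(sample):.1%}")
```

A healthy repertoire spreads its reads across thousands of clonotypes, so the same calculation there yields only a fraction of a percent, consistent with the 0.38% control median.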

Among 28 patients whose lymphocyte counts were measured at baseline, 10 showed evidence of lymphocytosis up to 8 years before CLL diagnosis.

This suggests undiagnosed instances of high-count MBL (cases with a cell count above 0.5 × 10⁹ cells/L, which can progress to CLL) or asymptomatic CLL, they explained.

“In contrast, next-generation sequencing results showed detectable skewing of the IGH gene repertoire in 21/28 patients up to 15 years before CLL diagnosis, often in the absence of elevated lymphocyte counts,” they wrote. “Remarkably, some patients with CLL requiring treatment and clinical transformation to an aggressive B-cell lymphoma displayed considerable skewing in the IGH gene repertoire even 16 years before CLL diagnosis.”

Patients with a prediagnostic IGHV-unmutated dominant clonotype had significantly shorter overall survival after CLL diagnosis than did those with an IGHV-mutated clonotype, they noted.

“Furthermore, at early timepoints (>10 years before diagnosis), patients with a high dominant clonotype frequency were more likely to be IGHV mutated, whereas closer to diagnosis this tendency was lost, indicating that the prediagnostic phase may be even longer than 16 years for [mutated] CLL patients,” they added.

The investigators also found that:

  • Twenty-five patients carried stereotyped BcR IG up to 17 years prior to CLL diagnosis, and of these, 10 clonotypes were assigned to minor subsets and 15 to major CLL subsets. Among the latter, 14 of the 15 belonged to high-risk subsets, and most of those showed a trend for faster disease evolution.
  • High frequency of the dominant clonotype was evident in samples obtained less than 6 years before diagnosis, whereas high-risk stereotyped clonotypes found longer before diagnosis (as early as 16 years) tended to have a lower dominant clonotype frequency (<20% of the IGH gene repertoire).
  • The stereotyped BcR IG matched the clonotype at diagnosis for both patients for whom diagnostic material was available.
  • No stereotyped subsets were identified among the dominant clonotypes of the healthy controls.
 

 

“To our knowledge, the dynamics of the emergence of biclonality in an MBL patient and subsequent progression to CLL have never been captured in such a convincing manner,” they noted.

The findings “extend current knowledge on the evolution of the IGH repertoire prior to CLL diagnosis, highlighting that even high-risk CLL subtypes may display a prolonged indolent preclinical stage,” they added, speculating that “somatic genetic aberrations, (auto)stimulation, epigenetic and/or microenvironmental influences are required for the transformation into overt CLL.”

The investigators also noted that since the observed skewing in the IGH gene repertoire often occurs prior to B-cell lymphocytosis, they consider the findings “a novel extension to the characterization of MBL.”

“Further studies may prove invaluable in the clinical distinction between ‘progressing’ MBL versus ‘stable’ MBL. Notwithstanding the above, we emphasize that early detection is only warranted if it provides clear benefits to patient care,” they concluded.

In a related commentary, Gerald Marti, MD, PhD, of the National Heart, Lung, and Blood Institute, emphasized that the findings “represent the earliest detection of a clonotypic precursor cell for CLL.”

They also raise new questions and point to new directions for research, Dr. Marti noted.

“Where do we go from here? CLL has a long evolutionary history in which early branching may start as an oligoclonal process (antigen stimulation) and include driver mutations,” he wrote. “A long-term analysis of the B-cell repertoire in familial CLL might shed light on this process. Further clarification of the mechanisms of age-related immune senescence is also of interest.”

The study authors and Dr. Marti reported having no competing financial interests.

Publications
Topics
Sections

 

The preclinical phase of chronic lymphocytic leukemia (CLL) may be exist longer than previously thought, even in adverse-prognostic cases, as suggested by a sequencing analysis of blood samples obtained up to 22 years prior to CLL diagnosis.

Previous analyses showed that monoclonal B-cell lymphocytosis (MBL), a CLL precursor state, has been detected up to 6 years before CLL diagnosis, the investigators explained, noting that “[a]nother prognostically relevant immunogenetic feature of CLL concerns the stereotype of the B-cell receptor immunoglobulins (BcR IG).”

“Indeed, distinct stereotyped subsets can be defined by the expression of shared sequence motifs and are associated with particular presentation and outcomes,” P. Martijn Kolijn, PhD, a researcher in the department of immunology at Erasmus Medical Center, Rotterdam, the Netherlands, and colleagues wrote in a brief report published online in Blood. In an effort to “gain insight into the composition of the BcR IG repertoire during the early stages of CLL,” the investigators utilized next-generation sequencing to analyze 124 blood samples taken from healthy individuals up to 22 years before they received a diagnosis of CLL or small lymphocytic leukemia (SLL). An additional 118 matched control samples were also analyzed.

Study subjects were participants in the European Prospective Investigation into Cancer and Nutrition (EPIC) cohort.

“First, unsurprisingly, we observed a significant difference in the frequency of the dominant clonotype in CLL patients versus controls with a median frequency of 54.9%, compared to only 0.38% in controls,” they wrote.

Among 28 patients whose lymphocyte counts were measured at baseline, 10 showed evidence of lymphocytosis up to 8 years before CLL diagnosis.

This suggests undiagnosed instances of high-count MBL (cases with a cell count above 0.5 × 10⁹ cells/L, which can progress to CLL) or asymptomatic CLL, they explained.

“In contrast, next-generation sequencing results showed detectable skewing of the IGH gene repertoire in 21/28 patients up to 15 years before CLL diagnosis, often in the absence of elevated lymphocyte counts,” they wrote. “Remarkably, some patients with CLL requiring treatment and clinical transformation to an aggressive B-cell lymphoma displayed considerable skewing in the IGH gene repertoire even 16 years before CLL diagnosis.”

Patients with a prediagnostic IGHV-unmutated dominant clonotype had significantly shorter overall survival after CLL diagnosis than did those with an IGHV-mutated clonotype, they noted.

“Furthermore, at early timepoints (>10 years before diagnosis), patients with a high dominant clonotype frequency were more likely to be IGHV mutated, whereas closer to diagnosis this tendency was lost, indicating that the prediagnostic phase may be even longer than 16 years for [mutated] CLL patients,” they added.

The investigators also found that:

  • Twenty-five patients carried stereotyped BcR IG up to 17 years prior to CLL diagnosis, and of these, 10 clonotypes were assigned to minor subsets and 15 to major CLL subsets. Among the latter, 14 of the 15 belonged to high-risk subsets, and most of those showed a trend for faster disease evolution.
  • High frequency of the dominant clonotype was evident in samples obtained less than 6 years before diagnosis, whereas high-risk stereotyped clonotypes found longer before diagnosis (as early as 16 years) tended to have a lower dominant clonotype frequency (<20% of the IGH gene repertoire).
  • For both patients with available diagnostic material, the stereotyped BcR IG matched the clonotype at diagnosis.
  • No stereotyped subsets were identified among the dominant clonotypes of the healthy controls.

“To our knowledge, the dynamics of the emergence of biclonality in an MBL patient and subsequent progression to CLL have never been captured in such a convincing manner,” they noted.

The findings “extend current knowledge on the evolution of the IGH repertoire prior to CLL diagnosis, highlighting that even high-risk CLL subtypes may display a prolonged indolent preclinical stage,” they added, speculating that “somatic genetic aberrations, (auto)stimulation, epigenetic and/or microenvironmental influences are required for the transformation into overt CLL.”

The investigators also noted that since the observed skewing in the IGH gene repertoire often occurs prior to B-cell lymphocytosis, they consider the findings “a novel extension to the characterization of MBL.”

“Further studies may prove invaluable in the clinical distinction between ‘progressing’ MBL versus ‘stable’ MBL. Notwithstanding the above, we emphasize that early detection is only warranted if it provides clear benefits to patient care,” they concluded.

In a related commentary, Gerald Marti, MD, PhD, of the National Heart, Lung, and Blood Institute, emphasized that the findings “represent the earliest detection of a clonotypic precursor cell for CLL.”

They also raise new questions and point to new directions for research, Dr. Marti noted.

“Where do we go from here? CLL has a long evolutionary history in which early branching may start as an oligoclonal process (antigen stimulation) and include driver mutations,” he wrote. “A long-term analysis of the B-cell repertoire in familial CLL might shed light on this process. Further clarification of the mechanisms of age-related immune senescence is also of interest.”

The study authors and Dr. Marti reported having no competing financial interests.
FROM BLOOD


AGA Clinical Practice Update: Expert review on deprescribing PPIs


An American Gastroenterological Association practice update on deprescribing proton-pump inhibitors (PPIs) delineates conditions under which drug withdrawal should be considered, and acknowledges that conversations between physicians and patients can be complicated. An inappropriate decision to discontinue PPI therapy can have significant consequences for the patient, while continued inappropriate use raises health care costs and may rarely lead to adverse effects.

One purpose of the update is to provide guidance when patients and providers don’t have the resources to systematically examine the issue, especially when other medical concerns may be in play. The authors also suggested that physicians include pharmacists in implementing the best practice advice.


“None of these statements represents a radical departure from previously published guidance on PPI appropriateness and deprescribing: Our [recommendations] simply seek to summarize the evidence and to provide the clinician with a single document which distills the evidence down into clinically applicable guidance statements,” Laura Targownik, MD, associate professor of medicine at the University of Toronto and corresponding author of the practice update published in Gastroenterology, said in an interview.

“PPIs are highly effective medications for specific gastrointestinal conditions, and are largely safe. However, PPIs are often used in situations where they have minimal [or] no proven benefit, leading to unnecessary health care spending and unnecessary exposure to drugs. Our paper helps clinicians identify which patients require long-term PPI use as well as those who may be using them unnecessarily, and provides actionable advice on how to deprescribe PPIs from those deemed to be using them without clear benefit,” said Dr. Targownik.

An estimated 7%-15% of patients in general, and 40% of those over age 70, use PPIs at any given time, making them among the most commonly used drugs. About one in four patients who start PPIs will use them for a year or more. Aside from their use for acid-mediated upper gastrointestinal conditions, PPIs are often used for less well-defined complaints. Since PPIs are available over the counter, physicians may not even be involved in a patient’s decision to use them.

Although PPI use has been associated with adverse events, including chronic kidney disease, fractures, dementia, and greater risk of COVID-19 infection, there is not high-quality evidence to suggest that PPIs are directly responsible for any of these adverse events.

The authors suggested the primary care provider should periodically review and document the complaints or indications that prompt PPI use. When a patient is found to have no chronic condition that PPIs could reasonably address, the physician should consider a trial withdrawal. Patients who take PPIs twice daily for a known chronic condition should be considered for a reduction to a once-daily dose.

In general, PPI discontinuation is not a good option for most patients with complicated gastroesophageal reflux disease, such as those with a history of severe erosive esophagitis, esophageal ulcer, or peptic stricture. The same is true for patients with Barrett’s esophagus, eosinophilic esophagitis, or idiopathic pulmonary fibrosis.

Before any deprescribing is considered, the patient should be evaluated for risk of upper gastrointestinal bleeding, and those at high risk are not candidates for PPI deprescribing.

When the decision is made to withdraw PPIs, the patient should be advised of an increased risk of transient upper gastrointestinal symptoms caused by rebound acid hypersecretion.

The withdrawal of PPIs can be done abruptly, or the dose can be tapered gradually.

PPI-associated adverse events should not be a consideration when discussing the option of withdrawing from PPIs. Instead, the decision should be based on the absence of a specific reason for their use. A history of such adverse events, or a current adverse event, should not be the sole reason for discontinuation, nor should the presence of risk factors for adverse events. Concerns about adverse events have driven recent interest in reducing use of PPIs, but those adverse events were identified through retrospective studies and may be merely associated with PPI use rather than caused by it. In many cases there is no plausible mechanistic cause, and no clinical trials have demonstrated increased adverse events in PPI users.

Three-quarters of physicians say they have altered treatment plans for patients because of concerns about PPI adverse events, and 80% say they would advise patients to withdraw PPIs if they learned the patient was at increased risk of upper gastrointestinal bleeding. Unnecessary withdrawal can lead to recurrent symptoms and complications when PPIs are effective treatments. “Therefore, physicians should not use concern about unproven complications of PPI use as a justification for PPI deprescribing if there remain ongoing valid indications for PPI use,” the authors wrote.

Dr. Targownik has received investigator-initiated funding from Janssen Canada and served on advisory boards for AbbVie Canada, Takeda Canada, Merck Canada, Pfizer Canada, Janssen Canada, Roche Canada, and Sandoz Canada. She is the lead on an IBD registry supported by AbbVie Canada, Takeda Canada, Merck Canada, Pfizer Canada, Amgen Canada, Roche Canada, and Sandoz Canada. None of the companies with whom Dr. Targownik has a relation are involved in the manufacturing, distribution, or sales of PPIs or any other agents mentioned in the manuscript.


FROM GASTROENTEROLOGY


FDA to decide by June on future of COVID vaccines


The next generation of COVID-19 vaccines should be able to fight off a new strain and be given each year, a panel of experts that advises the Food and Drug Administration said April 6.

But members of the panel also acknowledged that it will be an uphill battle to reach that goal, especially given how quickly the virus continues to change.

The members of the Vaccines and Related Biological Products Advisory Committee said they want to find the balance that makes sure Americans are protected against severe illness and death but doesn’t wear them out with constant recommendations for boosters.

“We don’t feel comfortable with multiple boosters every 8 weeks,” said committee chairman Arnold Monto, MD, professor emeritus of public health at the University of Michigan, Ann Arbor. “We’d love to see an annual vaccination similar to influenza but realize that the evolution of the virus will dictate how we respond in terms of additional vaccine doses.”

The virus itself will dictate vaccination plans, he said.

The government must also keep its focus on convincing Americans who haven’t been vaccinated to do so, said committee member Henry H. Bernstein, DO, given that “it seems quite obvious that those who are vaccinated do better than those who aren’t vaccinated.”

The government should clearly communicate to the public the goals of vaccination, he said.

“I would suggest that our overall aim is to prevent severe disease, hospitalization, and death more than just infection prevention,” said Dr. Bernstein, professor of pediatrics at Hofstra University, Hempstead, N.Y.

The FDA called the meeting of its advisers to discuss overall booster and vaccine strategy, even though it had already authorized a fourth dose of the Pfizer and Moderna vaccines for certain immunocompromised adults and for everyone over age 50.

Early in the all-day meeting, temporary committee member James Hildreth, MD, the president of Meharry Medical College, Nashville, Tenn., asked why that authorization was given without the panel’s input. Peter Marks, MD, the director of FDA’s Center for Biologics Evaluation and Research, said the decision was based on data from the United Kingdom and Israel that suggested immunity from a third shot was already waning.

Dr. Marks later said the fourth dose was “authorized as a stopgap measure until we could get something else in place,” because the aim was to protect older Americans who had died at a higher rate than younger individuals.

“I think we’re very much on board that we simply can’t be boosting people as frequently as we are,” said Dr. Marks.
 

Not enough information to make broader plan

The meeting was meant to be a larger conversation about how to keep pace with the evolving virus and to set up a vaccine selection and development process to better and more quickly respond to changes, such as new variants.

But committee members said they felt stymied by a lack of information. They wanted more data from vaccine manufacturers’ clinical trials. And they noted that so far, there’s no objective, reliable lab-based measurement of COVID-19 vaccine effectiveness – known as a correlate of immunity. Instead, public health officials have looked at rates of hospitalizations and deaths to measure whether the vaccine is still offering protection.

“The question is, what is insufficient protection?” asked H. Cody Meissner, MD, director of pediatric infectious disease at Tufts Medical Center in Boston. “At what point will we say the vaccine isn’t working well enough?”

Centers for Disease Control and Prevention officials presented data showing that a third shot has been more effective than a two-shot regimen in preventing serious disease and death, and that the three shots were significantly more protective than being unvaccinated.

In February, as the Omicron variant continued to rage, unvaccinated Americans aged 5 years and older had an almost three times higher risk of testing positive, and nine times higher risk of dying, compared with those who were considered fully vaccinated, said Heather Scobie, PhD, MPH, a member of the CDC’s COVID-19 Emergency Response team.

But only 98 million Americans – about half of those aged 12 years or older – have received a third dose, Dr. Scobie said.

It’s also still not clear how much more protection a fourth shot adds, or how long it will last. The committee heard data on a just-published study of a fourth dose of the Pfizer vaccine given to some 600,000 Israelis during the Omicron wave from January to March. The rate of severe COVID-19 was 3.5 times lower in the group that received a fourth dose, compared with those who had gotten only three shots, and protection lasted for at least 12 weeks.

Still, study authors said, any protection against infection itself was “short lived.”


 

 

 

More like flu vaccine?

The advisers discussed the possibility of making COVID-19 vaccine development similar to the process for the flu vaccine but acknowledged many difficulties.

The flu predictably hits during the winter in each hemisphere and a global surveillance network helps the World Health Organization decide on the vaccine strains each year. Then each nation’s regulatory and public health officials choose the strains for their shot and vaccine makers begin what is typically a 6-month-long manufacturing process.

COVID outbreaks have happened during all seasons and new variants haven’t always hit every country in a similar fashion. The COVID virus has mutated at five times the speed of the flu virus – producing a new dominant strain in a year, compared with the 3-5 years it takes for the flu virus to do so, said Trevor Bedford, PhD, a professor in the vaccine and infectious disease division at the Fred Hutchinson Cancer Research Center in Seattle.

Global COVID surveillance is patchy and the WHO has not yet created a program to help select strains for a COVID-19 vaccine but is working on a process. Currently, vaccine makers seem to be driving vaccine strain selection, said panelist Paul Offit, MD, professor of paediatrics at Children’s Hospital of Philadelphia. “I feel like to some extent the companies dictate the conversation. It shouldn’t come from them. It should come from us.”

“The important thing is that the public understands how complex this is,” said temporary committee member Oveta A. Fuller, PhD, associate professor of microbiology and immunology at the University of Michigan. “We didn’t get to understand influenza in 2 years. It’s taken years to get an imperfect but useful process to deal with flu.”

A version of this article first appeared on WebMD.com.

Publications
Topics
Sections

The next generation of COVID-19 vaccines should be able to fight off new strains and be given annually, a panel of experts that advises the Food and Drug Administration said April 6.

But members of the panel also acknowledged that it will be an uphill battle to reach that goal, especially given how quickly the virus continues to change.

Members of the Vaccines and Related Biological Products Advisory Committee said they want to strike a balance that keeps Americans protected against severe illness and death without wearing them out with constant booster recommendations.

“We don’t feel comfortable with multiple boosters every 8 weeks,” said committee chairman Arnold Monto, MD, professor emeritus of public health at the University of Michigan, Ann Arbor. “We’d love to see an annual vaccination similar to influenza but realize that the evolution of the virus will dictate how we respond in terms of additional vaccine doses.”

The virus itself will dictate vaccination plans, he said.

The government must also keep its focus on convincing Americans who haven’t been vaccinated to join the club, said committee member Henry H. Bernstein, DO, given that “it seems quite obvious that those who are vaccinated do better than those who aren’t vaccinated.”

The government should clearly communicate to the public the goals of vaccination, he said.

“I would suggest that our overall aim is to prevent severe disease, hospitalization, and death more than just infection prevention,” said Dr. Bernstein, professor of pediatrics at Hofstra University, Hempstead, N.Y.

The FDA called the meeting of its advisers to discuss overall booster and vaccine strategy, even though it had already authorized a fourth dose of the Pfizer and Moderna vaccines for certain immunocompromised adults and for everyone over age 50.

Early in the all-day meeting, temporary committee member James Hildreth, MD, the president of Meharry Medical College, Nashville, Tenn., asked why that authorization was given without the panel’s input. Peter Marks, MD, the director of FDA’s Center for Biologics Evaluation and Research, said the decision was based on data from the United Kingdom and Israel that suggested immunity from a third shot was already waning.

Dr. Marks later said the fourth dose was “authorized as a stopgap measure until we could get something else in place,” because the aim was to protect older Americans who had died at a higher rate than younger individuals.

“I think we’re very much on board that we simply can’t be boosting people as frequently as we are,” said Dr. Marks.
 

Not enough information to make broader plan

The meeting was meant to be a larger conversation about how to keep pace with the evolving virus and to set up a vaccine selection and development process to better and more quickly respond to changes, such as new variants.

But committee members said they felt stymied by a lack of information. They wanted more data from vaccine manufacturers’ clinical trials. And they noted that so far, there’s no objective, reliable lab-based measurement of COVID-19 vaccine effectiveness – known as a correlate of immunity. Instead, public health officials have looked at rates of hospitalizations and deaths to measure whether the vaccine is still offering protection.

“The question is, what is insufficient protection?” asked H. Cody Meissner, MD, director of pediatric infectious disease at Tufts Medical Center in Boston. “At what point will we say the vaccine isn’t working well enough?”

Centers for Disease Control and Prevention officials presented data showing that a third shot has been more effective than a two-shot regimen in preventing serious disease and death, and that the three shots were significantly more protective than being unvaccinated.

In February, as the Omicron variant continued to rage, unvaccinated Americans aged 5 years and older had an almost three times higher risk of testing positive, and nine times higher risk of dying, compared with those who were considered fully vaccinated, said Heather Scobie, PhD, MPH, a member of the CDC’s COVID-19 Emergency Response team.

But only 98 million Americans – about half of those aged 12 years or older – have received a third dose, Dr. Scobie said.

It’s also still not clear how much more protection a fourth shot adds, or how long it will last. The committee heard data on a just-published study of a fourth dose of the Pfizer vaccine given to some 600,000 Israelis during the Omicron wave from January to March. The rate of severe COVID-19 was 3.5 times lower in the group that received a fourth dose, compared with those who had gotten only three shots, and protection lasted for at least 12 weeks.

Still, study authors said, any protection against infection itself was “short lived.”

More like flu vaccine?

The advisers discussed the possibility of making COVID-19 vaccine development similar to the process for the flu vaccine but acknowledged many difficulties.

The flu predictably hits during the winter in each hemisphere and a global surveillance network helps the World Health Organization decide on the vaccine strains each year. Then each nation’s regulatory and public health officials choose the strains for their shot and vaccine makers begin what is typically a 6-month-long manufacturing process.

COVID outbreaks have happened during all seasons and new variants haven’t always hit every country in a similar fashion. The COVID virus has mutated at five times the speed of the flu virus – producing a new dominant strain in a year, compared with the 3-5 years it takes for the flu virus to do so, said Trevor Bedford, PhD, a professor in the vaccine and infectious disease division at the Fred Hutchinson Cancer Research Center in Seattle.

Global COVID surveillance is patchy and the WHO has not yet created a program to help select strains for a COVID-19 vaccine but is working on a process. Currently, vaccine makers seem to be driving vaccine strain selection, said panelist Paul Offit, MD, professor of pediatrics at Children’s Hospital of Philadelphia. “I feel like to some extent the companies dictate the conversation. It shouldn’t come from them. It should come from us.”

“The important thing is that the public understands how complex this is,” said temporary committee member Oveta A. Fuller, PhD, associate professor of microbiology and immunology at the University of Michigan. “We didn’t get to understand influenza in 2 years. It’s taken years to get an imperfect but useful process to deal with flu.”

A version of this article first appeared on WebMD.com.

U.S. pulls COVID drug as Omicron subvariant spreads

Federal regulators have announced that GlaxoSmithKline’s COVID-19 drug should no longer be used because it’s likely ineffective against BA.2, the Omicron subvariant that now accounts for most new cases in the United States, The Associated Press reports.

The Food and Drug Administration announced that the antibody drug sotrovimab is no longer authorized to treat patients in U.S. states or territories. The decision was expected, as the FDA restricted the drug’s use across the country throughout March as BA.2 became dominant in certain regions, the AP reported.

The BA.2 subvariant now accounts for 72% of new COVID-19 cases sequenced by health authorities, according to the latest CDC data updated April 5. The FDA cited the CDC data in its reason for pulling back on the authorization of the drug.

The GlaxoSmithKline drug is the latest antibody medication to be pulled due to coronavirus mutations. In January, the FDA halted the use of antibody drugs from Regeneron and Eli Lilly because they didn’t work against the Omicron variant.

The FDA’s decision means that one antibody drug is still authorized for use against routine COVID-19 cases, the AP reported. A different Eli Lilly drug – bebtelovimab – still appears to work against BA.2.

Doctors can also prescribe antiviral pills to treat mild to moderate COVID-19; because the pills don’t target the fast-mutating spike protein, their effectiveness isn’t undercut by these mutations, the AP reported. The authorized pills from Pfizer and Merck – Paxlovid and Lagevrio – have been shipped to pharmacy chains and medical clinics in hopes of getting them to patients early enough to work.

The federal government purchased nearly $2 billion worth of the GlaxoSmithKline drug and shipped more than 900,000 doses to states last fall, the AP reported. In March, the company announced that it was studying a higher dose that could be effective against BA.2, which would require FDA approval before resuming use in the United States.

The antibody drugs mimic the virus-blocking proteins found in the human body, the AP reported. They’re designed to attack a specific virus and need to be updated as the coronavirus mutates.

A version of this article first appeared on WebMD.com.

Type 2 diabetes remission possible for those with lower BMI

A weight-loss program can lead to type 2 diabetes remission, even in individuals with a normal body mass index (BMI), via loss of body fat, particularly in the liver and pancreas, shows a U.K. study.

The ReTUNE trial, funded by Diabetes UK, involved 20 people with type 2 diabetes of less than 6 years’ duration and a BMI of 27 kg/m2 or lower.

After 1 year, participants had lost 9% of their body weight.

Their body fat decreased significantly, to the same level as controls without type 2 diabetes, and they experienced decreases in liver fat, total triglycerides, and pancreatic fat.

The research, presented at the 2022 Diabetes UK Professional Conference, also showed this was accompanied by increases in insulin secretion and reductions in hemoglobin A1c and fasting plasma glucose levels.

Lead author Roy Taylor, MD, PhD, professor of medicine and metabolism, Newcastle University, Newcastle upon Tyne, England, said the findings indicate that the “etiology and pathophysiology of type 2 diabetes is the same whether BMI is normal or raised.”

This information should make a profound difference in what doctors advise their patients, Dr. Taylor added.

“One of the dramatic things about dealing with people in this group,” he said, “is they feel very resentful that healthcare professionals tell them not to lose weight.”

Based on the current results, Dr. Taylor believes this is “inappropriate advice, and it’s that personal advice that I think that this study points a way towards.”
 

Weight loss ‘first line of treatment’

These findings support the theory of a personal fat threshold, above which “type 2 diabetes occurs,” said Dr. Taylor. “Weight loss is the first-line treatment for all with type 2 diabetes, irrespective of BMI.”

Dr. Taylor already showed in the DiRECT trial that a calorie-restricted liquid diet followed by gradual food reintroduction and a weight-loss maintenance program can achieve and sustain type 2 diabetes remission at 2 years in people who are overweight or obese.

As reported by this news organization, 36% of 300 patients enrolled in the trial attained diabetes remission and maintained it out to 24 months, compared with negligible changes in the control group.

Inspired by the results of DiRECT and the DROPLET study, the National Health Service has been rolling out a low calorie–diet treatment program for people who are overweight and living with diabetes.

Asked during the postpresentation discussion whether the current results could have implications for the NHS program, Dr. Taylor said it remains, in effect, a study and will not change things for now.

Chris Askew, chief executive of Diabetes UK, said in a release: “This game-changing study ... advances our understanding of why type 2 diabetes develops and what can be done to treat it.

“Our ambition is for as many people as possible to have the chance to put their type 2 diabetes into remission and live well for longer.”

Mr. Askew continued: “The findings of ReTUNE potentially take us a significant step closer to achieving this goal by showing that remission isn’t only possible for people of certain body weights.”
 
Weight and body fat decrease led to remission

For ReTUNE, the team recruited 20 individuals with type 2 diabetes of less than 6 years’ duration who had a BMI of 21-27 and compared them with 20 matched controls, with a follow-up of 52 weeks.

Patients’ mean age was 59.0 years, 13 were women, mean BMI was 24.8, and mean duration of diabetes was 2.8 years. Mean A1c was 54 mmol/mol (approximately 7.1%).

Fourteen of the patients were taking metformin at enrollment and two were being treated with gliclazide. These medications were stopped when the participants entered a weight-loss program conducted in steps of 5% of body weight, each step followed by 6 weeks of weight stability.

Overall, weight decreased by an average of 9%, while body fat decreased from 32% at baseline to 28% at 1 year (P < .001), the same percentage as that seen in the controls.

Liver fat also decreased significantly from baseline (P < .001), down to approximately the same level as controls at 1 year, a pattern also seen with very low-density lipoprotein cholesterol and triglyceride levels.



Pancreatic fat decreased steadily and significantly over the course of the 52-week follow-up (P < .05), although remained above the level seen in controls.

Insulin secretion increased significantly over the course of the study (P = .005) to finish just over the threshold for the lower range of normal at 52 weeks.

This, Dr. Taylor showed, was enough for the 14 patients who achieved type 2 diabetes remission to see their A1c levels fall significantly during follow-up (P < .001), alongside fasting plasma glucose levels (P < .001).

ReTUNE is funded by Diabetes UK. The authors reported no relevant financial relationships.

A version of this article first appeared on Medscape.com.

Publications
Topics
Sections

A weight-loss program can lead to type 2 diabetes remission, even in individuals with a normal body mass index (BMI), via loss of body fat, particularly in the liver and pancreas, shows a U.K. study.

The ReTUNE trial, funded by Diabetes UK, involved 20 people with type 2 diabetes of less than 6 year’s duration and a BMI of 27 kg/m2 or lower.

Joel Austell/MDedge News
Dr. Roy Taylor

After 1 year, participants had lost 9% of their body weight.

Their body fat decreased significantly, to the same level as controls without type 2 diabetes, and they experienced decreases in liver fat, total triglycerides, and pancreatic fat.

The research, presented at the 2022 Diabetes UK Professional Conference, also showed this was accompanied by increases in insulin secretion and reductions in hemoglobin A1c and fasting plasma glucose levels.

Lead author Roy Taylor, MD, PhD, professor of medicine and metabolism, Newcastle University, Newcastle upon Tyne, England, said the findings indicate that the “etiology and pathophysiology of type 2 diabetes is the same whether BMI is normal or raised.”

A weight-loss program can lead to type 2 diabetes remission, even in individuals with a normal body mass index (BMI), via loss of body fat, particularly in the liver and pancreas, shows a U.K. study.

The ReTUNE trial, funded by Diabetes UK, involved 20 people with type 2 diabetes of less than 6 years' duration and a BMI of 27 kg/m2 or lower.

Dr. Roy Taylor (Joel Austell/MDedge News)

After 1 year, participants had lost 9% of their body weight.

Their body fat decreased significantly, to the same level as controls without type 2 diabetes, and they experienced decreases in liver fat, total triglycerides, and pancreatic fat.

The research, presented at the 2022 Diabetes UK Professional Conference, also showed this was accompanied by increases in insulin secretion and reductions in hemoglobin A1c and fasting plasma glucose levels.

Lead author Roy Taylor, MD, PhD, professor of medicine and metabolism, Newcastle University, Newcastle upon Tyne, England, said the findings indicate that the “etiology and pathophysiology of type 2 diabetes is the same whether BMI is normal or raised.”

This information should make a profound difference in what doctors advise their patients, Dr. Taylor added.

“One of the dramatic things about dealing with people in this group,” he said, “is they feel very resentful that healthcare professionals tell them not to lose weight.”

Based on the current results, Dr. Taylor believes this is “inappropriate advice, and it’s that personal advice that I think that this study points a way towards.”
 

Weight loss ‘first line of treatment’

These findings support the theory of a personal fat threshold, above which “type 2 diabetes occurs,” said Dr. Taylor. “Weight loss is the first-line treatment for all with type 2 diabetes, irrespective of BMI.”

Dr. Taylor already showed in the DiRECT trial that a calorie-restricted liquid diet followed by gradual food reintroduction and a weight-loss maintenance program can achieve and sustain type 2 diabetes remission at 2 years in people who are overweight or obese.

As reported by this news organization, 36% of the 300 patients enrolled in the trial attained diabetes remission and maintained it out to 24 months, compared with negligible changes in the control group.

Inspired by the results of DiRECT and the DROPLET study, the National Health Service has been rolling out a low-calorie diet treatment program for people who are overweight and living with diabetes.

Asked during the postpresentation discussion whether the current results could have implications for the NHS program, Dr. Taylor said it remains, in effect, a study and will not change things for now.

Chris Askew, chief executive of Diabetes UK, said in a release: “This game-changing study ... advances our understanding of why type 2 diabetes develops and what can be done to treat it.

“Our ambition is for as many people as possible to have the chance to put their type 2 diabetes into remission and live well for longer.”

Mr. Askew continued: “The findings of ReTUNE potentially take us a significant step closer to achieving this goal by showing that remission isn’t only possible for people of certain body weights.”

Weight and body fat decrease led to remission

For ReTUNE, the team recruited 20 individuals with type 2 diabetes of less than 6 years' duration who had a BMI of 21-27 kg/m2 and compared them with 20 matched controls, with a follow-up of 52 weeks.
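For readers checking the eligibility criterion, BMI is simply weight in kilograms divided by height in meters squared; a minimal sketch (the example weight and height are illustrative, not taken from the study):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index in kg/m2: weight divided by height squared."""
    return weight_kg / height_m ** 2

# Hypothetical participant: 75 kg at 1.75 m tall.
print(round(bmi(75, 1.75), 1))  # 24.5 -- within the trial's 21-27 range
```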

Patients were an average age of 59.0 years, 13 were women, mean BMI was 24.8, and mean duration of diabetes was 2.8 years. Mean A1c was 54 mmol/mol (7.1%).
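The A1c figure is reported in IFCC units (mmol/mol); converting it to the NGSP percentage more familiar to U.S. readers uses the standard NGSP/IFCC master equation, sketched below (the function name is illustrative):

```python
def ifcc_to_ngsp(a1c_mmol_mol: float) -> float:
    """Convert HbA1c from IFCC units (mmol/mol) to NGSP units (%)
    using the master equation: NGSP% = 0.09148 * IFCC + 2.152."""
    return 0.09148 * a1c_mmol_mol + 2.152

# Mean baseline A1c in ReTUNE was 54 mmol/mol:
print(round(ifcc_to_ngsp(54), 1))  # 7.1
```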

Fourteen of the patients were taking metformin at enrollment and two were being treated with gliclazide. These medications were stopped when the individuals with type 2 diabetes entered a weight-loss program conducted in 5% steps, each followed by 6 weeks of weight stability.

Overall, weight decreased by an average of 9%, while body fat decreased from 32% at baseline to 28% at 1 year (P < .001), the same percentage as that seen in the controls.

Liver fat also decreased significantly from baseline (P < .001), down to approximately the same level as controls at 1 year, a pattern also seen with very low-density lipoprotein cholesterol and triglyceride levels.

Pancreatic fat decreased steadily and significantly over the course of the 52-week follow-up (P < .05), although it remained above the level seen in controls.

Insulin secretion increased significantly over the course of the study (P = .005) to finish just over the threshold for the lower range of normal at 52 weeks.

This, Dr. Taylor showed, was enough for the 14 patients who achieved type 2 diabetes remission to see their A1c levels fall significantly during follow-up (P < .001), alongside fasting plasma glucose levels (P < .001).

ReTUNE is funded by Diabetes UK. The authors reported no relevant financial relationships.

A version of this article first appeared on Medscape.com.
