AGA Clinical Practice Update: Early complications after bariatric/metabolic surgery
The American Gastroenterological Association recently published a clinical practice update concerning endoscopic evaluation and management of early complications after bariatric/metabolic surgery.
The seven best practice advice statements, based on available evidence and expert opinion, range from a general call for high familiarity with available interventions to specific approaches for managing postoperative leaks.
According to lead author Vivek Kumbhari, MD, PhD, director of advanced endoscopy, department of gastroenterology and hepatology, Mayo Clinic, Jacksonville, Fla., and colleagues, the update was written in consideration of increasing rates of bariatric/metabolic surgery.
“Bariatric/metabolic surgery is unmatched with respect to its weight loss and metabolic benefits,” the investigators wrote in Clinical Gastroenterology and Hepatology. “The selection criteria will continue to broaden, likely resulting in increasing numbers of less robust patients undergoing surgery (e.g., children, elderly, and those with significant cardiorespiratory comorbidities).”
Although the 90-day overall complication rate across all patients undergoing bariatric/metabolic surgery is only 4%, Dr. Kumbhari and colleagues noted that this rate is considerably higher, at 20.1%, among patients older than 65 years.
“As utilization escalates, so will the number of patients who suffer early complications,” they wrote.
The first three items of best practice advice describe who should be managing complications after bariatric/metabolic surgery, and how.
Foremost, Dr. Kumbhari and colleagues called for a multidisciplinary approach; they suggested that endoscopists should work closely with related specialists, such as bariatric/metabolic surgeons and interventional radiologists.
“Timely communication between the endoscopist, radiologist, surgeon, nutritionists, and inpatient medical team or primary care physician will result in efficient, effective care with prompt escalation and deescalation,” they wrote. “Daily communication is advised.”
The next two best practice advice statements encourage high familiarity with endoscopic treatments, postsurgical anatomy, interventional radiology, and surgical interventions, including risks and benefits of each approach.
“The endoscopist should ... have expertise in interventional endoscopy techniques, including but not limited to using concomitant fluoroscopy, stent deployment and retrieval, pneumatic balloon dilation, incisional therapies, endoscopic suturing, and managing percutaneous drains,” the investigators wrote. “Having the ability to perform a wide array of therapies will enhance the likelihood that the optimal endoscopic strategy will be employed, as opposed to simply performing a technique with which the endoscopist has experience.”
Following these best practices, Dr. Kumbhari and colleagues advised screening patients with postoperative complications for comorbidities, both medical (such as infection) and psychological.
“Patients often have higher depression and anxiety scores, as well as a lower physical quality of life, and medical teams sometimes neglect the patient’s psychological state,” they wrote. “It is imperative that the multidisciplinary team recognize and acknowledge the patient’s psychological comorbidities and engage expertise to manage them.”
Next, the investigators advised that endoscopic intervention should be considered regardless of time interval since surgery, including the immediate postoperative period.
“Endoscopy is often indicated as the initial therapeutic modality, and it can safely be performed,” Dr. Kumbhari and colleagues wrote. “When endoscopy is performed, it is advised to use carbon dioxide for insufflation. Caution should be used when advancing the endoscope into the small bowel, as it is best to minimize pressure along the fresh staple lines. In cases in which the patient is critically ill or the interventional endoscopist does not have extensive experience with such a scenario, the endoscopy should be performed in the operating room with a surgeon present (preferably the surgeon who performed the operation).”
Dr. Kumbhari and colleagues discussed functional stenosis, which can precipitate and propagate leaks. They noted that “downstream stenosis is frequently seen at the level of the incisura angularis or in the proximal stomach when a sleeve gastrectomy is performed in a patient with a prior laparoscopic adjustable gastric band.”
To address stenosis, the update calls for “aggressive dilation” using a large pneumatic balloon, preferably with fluoroscopy to make sure the distal end of the balloon does not cross the pylorus. The investigators noted that endoscopic suturing may be needed if a tear involving the muscularis propria is encountered.
Lastly, the clinical practice update offers comprehensive guidance for managing staple-line leaks, which “most commonly occur along the staple line of the proximal stomach.”
As leaks are thought to stem from ischemia, “most leaks are not present upon completion of the operation, and they develop over the subsequent weeks, often in the setting of downstream stenosis,” the investigators wrote.
To guide management of staple-line leaks, the investigators presented a treatment algorithm that incorporates defect size, time since surgery, and presence or absence of stenosis.
For example, a defect smaller than 10 mm, occurring within 6 weeks of surgery and without downstream stenosis, may be managed with a percutaneous drain and diversion. In contrast, a defect of similar size, also without stenosis but occurring more than 6 weeks after the initial procedure, should be managed with endoscopic internal drainage or vacuum therapy.
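To make the branching concrete, the two example pathways above can be read as a small decision rule. The Python sketch below is illustrative only: it encodes just the branches described in this article and is not the update's complete algorithm, which also addresses larger defects and cases with stenosis.

```python
# Illustrative sketch only: encodes just the two example branches described
# above (small defect, no stenosis). NOT the update's complete algorithm,
# which also covers larger defects and leaks with downstream stenosis.
def suggest_leak_management(defect_mm: float, weeks_since_surgery: float,
                            downstream_stenosis: bool) -> str:
    if downstream_stenosis or defect_mm >= 10:
        return "beyond this sketch; consult the full treatment algorithm"
    if weeks_since_surgery <= 6:
        return "percutaneous drain and diversion"
    return "endoscopic internal drainage or vacuum therapy"

# Example: a 7-mm defect found 8 weeks after surgery, no stenosis
print(suggest_leak_management(7, 8, False))
# -> endoscopic internal drainage or vacuum therapy
```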
“Clinicians should recognize that the goal for endoscopic management of staple-line leaks is often not necessarily initial closure of the leak site, but rather techniques to promote drainage of material from the perigastric collection into the gastric lumen such that the leak site closes by secondary intention,” wrote Dr. Kumbhari and colleagues.
The clinical practice update was commissioned and approved by the AGA Institute Clinical Practice Updates Committee and the AGA Governing Board. The investigators disclosed relationships with Boston Scientific, Medtronic, Apollo Endosurgery, and others.
FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY
Mutational signature may reveal underlying link between red meat and CRC
A mechanistic link between red meat consumption and colorectal cancer (CRC) has been identified in the form of an alkylating mutational signature, according to investigators.
This is the first time a colorectal mutational signature has been associated with a component of diet, which demonstrates the value of large-scale molecular epidemiologic studies and suggests potential for early, precision dietary intervention, reported lead author Carino Gurjao, MSc, of the Dana-Farber Cancer Institute and Harvard Medical School, both in Boston, and colleagues.
“Red meat consumption has been consistently linked to the incidence of colorectal cancer,” the investigators wrote in Cancer Discovery. “The suggested mechanism is mutagenesis through alkylating damage induced by N-nitroso-compounds (NOCs), which are metabolic products of blood heme iron or meat nitrites/nitrates. Nevertheless, this mutational damage is yet to be observed directly in patients’ tumors.”
To this end, the investigators turned to three long-term, large-scale, prospective cohort studies: the Nurses’ Health Studies I and II, and the Health Professionals Follow-up Study. These databases include nearly 300,000 individuals with follow-up dating back as far as 1976. The investigators identified 900 cases of primary, untreated CRC with adequate tissue for analysis, then, for each case, performed whole exome sequencing on both tumor tissue and normal colorectal tissue.
This revealed an alkylating mutational signature previously undescribed in CRC that was significantly associated with consumption of red meat prior to diagnosis, but not other dietary or lifestyle factors. The signature occurred most frequently in tumors and normal crypts in the distal colon and rectum.
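For readers unfamiliar with the methodology, mutational signatures of this kind are commonly derived by factorizing a per-tumor matrix of mutation counts across the 96 single-base-substitution trinucleotide contexts, for example with non-negative matrix factorization (NMF). The sketch below uses synthetic data and generic parameter choices; it illustrates the general technique, not the study's actual pipeline.

```python
# Generic illustration of mutational-signature extraction via NMF on a
# samples x 96 trinucleotide-context count matrix. Synthetic data only;
# the component count and preprocessing here are assumptions, not the
# study's actual method.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
counts = rng.poisson(lam=3.0, size=(900, 96)).astype(float)  # 900 tumors x 96 contexts

model = NMF(n_components=5, init="nndsvda", max_iter=500, random_state=0)
exposures = model.fit_transform(counts)  # per-tumor signature activity (900 x 5)
profiles = model.components_             # signature profiles (5 x 96)

# Normalize each profile to a probability distribution over contexts;
# candidate signatures are then compared against known reference
# catalogs (e.g., COSMIC) to identify processes such as alkylating damage.
profiles = profiles / profiles.sum(axis=1, keepdims=True)
print(profiles.shape)  # (5, 96)
```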
According to the investigators, the presence of the alkylating signature in normal colorectal crypts “suggests that mutational changes due to such damage may start to occur early in the path of colorectal carcinogenesis.”
Further analysis showed that tumors harboring common KRAS and PIK3CA driver mutations had the highest levels of alkylating damage, with higher levels predicting worse survival.
“These results ... further implicate the role of red meat in CRC initiation and progression,” the investigators concluded.
Early findings, important implications
Cosenior author Kana Wu, MD, PhD, principal research scientist in the department of nutrition at Harvard School of Public Health, Boston, noted that these are early findings, although they may pave the way toward new dietary recommendations and methods of food production.
“While more detailed analysis needs to be conducted, and our results need to be confirmed in other studies, this study is a promising first step to better understand the biological mechanisms underlying the role of red and processed meats in colorectal cancers,” Dr. Wu said in an interview. “It is important to gain more insight into the biological mechanisms so we can improve dietary guidelines for cancer prevention and guide food reformulation efforts to lower cancer risk.”
For now, Dr. Wu predicted that standing dietary recommendations will remain unchanged.
“This study will not alter current diet recommendations to limit intake of red and processed meats,” Dr. Wu said, referring to similar recommendations across several organizations, including the American Heart Association, the World Cancer Research Fund/American Institute for Cancer Research, and the American Cancer Society.
“For example,” Dr. Wu said, “the WCRF/AICR recommends limiting consumption of red and processed meat to ‘no more than moderate amounts [12-18 ounces per week] of red meat, such as beef, pork, and lamb, and [to] eat little, if any, processed meat.’”
Possible biomarker?
According to Patricia Thompson-Carino, PhD, deputy director of the Stony Brook (N.Y.) Cancer Center, the study provides convincing evidence linking red meat consumption with development of CRC.
“Higher frequency of the signature in the distal colon is compelling for its consistency with epidemiologic evidence,” Dr. Thompson-Carino said in an interview. “Combined with the observed worse survival in patients harboring the signature and association with oncogenic KRAS and PIK3CA driver mutations, this study significantly elevates the biological plausibility that red meat is a modifiable source of NOC mutagenicity and carcinogenesis in humans.”
The signature could be used as a biomarker to detect exposure to NOCs, and susceptibility to CRC, she added.
Still, Dr. Thompson-Carino suggested that more work is needed to fully elucidate underlying mechanisms of action, which are needed to accurately shape dietary guidance.
“Key to advancing red meat dietary recommendations will be understanding the relationships between the new mutation signature and the NOCs derived from red meat and their source, whether endogenous [for example, intestinal N-nitrosation] or exogenous [for example, chemical preservation or charring],” she said.
The study was supported by the National Institutes of Health, the Stand Up To Cancer Colorectal Cancer Dream Team Translational Research Grant (coadministered by the American Association for Cancer Research), the Project P Fund, and others. The investigators, Dr. Wu, and Dr. Thompson-Carino reported no conflicts of interest related to this study.
FROM CANCER DISCOVERY
Early-onset CRC associated with longer survival
Individuals diagnosed with primary colorectal cancer (CRC) at less than 50 years of age have better survival outcomes than individuals diagnosed at 51-55 years, based on data from more than 750,000 patients.
This finding emphasizes the importance of early CRC detection in younger individuals, reported lead author En Cheng, MD, PhD, of Yale University, New Haven, Conn., and colleagues.
“Early-onset CRC (i.e., CRC diagnosed at age less than 50 years) has been characterized by unique clinical, genetic, and epigenetic characteristics, and thus it may be associated with different survival from CRC diagnosed among individuals older than 50 years,” the investigators wrote in JAMA Network Open. Previous studies comparing survival times across age groups have yielded inconsistent results.
To gain a better understanding, the investigators conducted a retrospective study using data from the National Cancer Database. After excluding patients with primary CRC who had a concomitant diagnosis, a history of other malignant tumors, noninvasive adenocarcinoma, or missing data, the final dataset included 769,871 patients. Early-onset CRC was defined as diagnosis at age less than 50 years, whereas later-onset CRC was defined as diagnosis at ages 51-55 years.
“Individuals diagnosed at age 50 years were excluded to minimize an apparent screening detection bias at age 50 years in our population, given that these individuals disproportionately presented with earlier stages,” the investigators wrote.
Initial comparisons across groups revealed several significant differences. Individuals in the early-onset group were more often women (47.3% vs. 43.8%; P < .001), members of races in the “other” category (6.9% vs. 5.9%; P < .001), and Medicaid patients (12.3% vs. 10.3%; P < .001). They were also more likely to be diagnosed with stage IV cancer (27.8% vs. 24.1%; P < .001) and to have rectal tumors (29.3% vs. 28.7%; P = .004).
In the unadjusted Kaplan-Meier analysis, patients with early-onset CRC had a lower 10-year survival rate (53.6%; 95% CI, 53.2%-54.0% vs. 54.3%; 95% CI, 53.8%-54.8%; P < .001). The fully adjusted model, however, revealed significantly higher survival for early-onset patients, compared with later-onset patients (adjusted hazard ratio, 0.95; 95% CI, 0.93-0.96; P < .001). This advantage widened when adjusting only for stage (HR, 0.89; 95% CI, 0.88-0.90; P < .001).
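For context on the statistic being compared, the unadjusted 10-year rates above come from a Kaplan-Meier (product-limit) analysis. The following sketch implements that estimator on synthetic right-censored data purely to illustrate the computation; it is not the study's analysis or dataset.

```python
# Bare-bones Kaplan-Meier (product-limit) estimator on synthetic,
# right-censored data. Purely illustrative of the method referenced above.
import numpy as np

def kaplan_meier(time, event):
    """Return (event times, S(t)) for right-censored survival data."""
    order = np.argsort(time)
    time, event = time[order], event[order]
    at_risk = len(time)              # subjects with follow-up >= current time
    s, times, surv = 1.0, [], []
    for t in np.unique(time):
        mask = time == t
        deaths = int(event[mask].sum())
        if deaths > 0:
            s *= 1.0 - deaths / at_risk   # product-limit step
            times.append(t)
            surv.append(s)
        at_risk -= int(mask.sum())        # drop deaths and censored at t
    return np.array(times), np.array(surv)

rng = np.random.default_rng(1)
follow_up = rng.exponential(15.0, size=1000)   # synthetic years of follow-up
died = (rng.random(1000) < 0.6).astype(int)    # 1 = death observed, 0 = censored
t, s = kaplan_meier(follow_up, died)
print(f"estimated 10-year survival: {s[t <= 10][-1]:.1%}")
```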
Survival was longest among patients aged 35-39 years (aHR, 0.88; 95% CI, 0.84-0.92; P < .001), compared with those aged 51-55 years, and among early-onset individuals with stage I disease (aHR, 0.87; 95% CI, 0.81-0.93; P < .001) or stage II disease (aHR, 0.86; 95% CI, 0.82-0.90; P < .001), compared with those having the same stages of later-onset CRC. No survival advantage was observed among patients diagnosed at age 25 years or younger or among those with stage III or IV disease.
“Interestingly, hereditary nonpolyposis colorectal cancer, owing to underlying mismatch repair deficiency, is associated with superior survival and is often diagnosed in individuals from ages 35-45 years,” the investigators noted. “In contrast, adenomatous polyposis coli syndrome is more common in individuals who are diagnosed with CRC at age younger than 20 years (10%), compared with those diagnosed at later ages (0.1%), and adenomatous polyposis coli syndrome is not associated with a survival advantage. These high penetrance syndromes could partly account for the relative heterogeneity in survival across ages among individuals with early-onset CRC.”
Cautious about interpretation
Dr. Cheng and colleagues concluded their publication with a disclaimer: “Our finding of a survival advantage associated with early-onset CRC among younger individuals should be interpreted cautiously, given that the advantage had a small magnitude and was heterogeneous by age and stage,” they wrote. “Further study is needed to understand the underlying heterogeneity of survival by age and stage among individuals with early-onset CRC.”
Kirbi L. Yelorda, MD, of Stanford (Calif.) University, and colleagues, had a similar interpretation.
“These results offer support for effectiveness of treatment in patients diagnosed with CRC at younger ages; however, they must be interpreted within the context of epidemiological and biological factors,” Dr. Yelorda and colleagues wrote in an accompanying editorial.
The findings also suggest that the recent reduction in recommended screening age by the U.S. Preventive Services Task Force – from 50 years to 45 years – is warranted, they added, but screening younger patients remains unnecessary.
“While these results do not suggest that screening should start for patients younger than 45 years, they do support the benefit of early detection in young patients,” Dr. Yelorda and colleagues wrote, noting a “fairly low incidence rate” among individuals younger than 45 years that does not justify the risks and increased costs associated with expanded screening.
Important but not surprising
It’s “not surprising” that early-onset patients typically have better survival than later-onset patients, according to Joseph C. Anderson, MD, associate professor at White River Junction Veterans Affairs Medical Center, Hartford, Vt.; Geisel School of Medicine at Dartmouth, Hanover, N.H.; and the University of Connecticut, Farmington.
“They’re younger, have less comorbidities, and can tolerate chemotherapy,” Dr. Anderson said in an interview. “It’s not surprising that people do poorly with later stages. Younger people are no exception.”
Dr. Anderson, who previously coauthored an editorial weighing the pros and cons of earlier screening, noted that earlier screening is needed because of the rising incidence of late-stage diagnoses among younger patients, which, as the study found, are associated with worse outcomes.
Beyond adherence to screening recommendations, Dr. Anderson urged clinicians to be aggressive when doing a workup of CRC symptoms in younger patients, among whom delayed diagnoses are more common.
“We can’t just say it’s something more benign, like hemorrhoids, like we used to,” Dr. Anderson said. “Somebody who’s 30 years old and having rectal bleeding needs to be evaluated promptly – there can’t be a delay.”
The study was supported by the National Institutes of Health and Stand Up To Cancer (grant administered by the American Association for Cancer Research). The investigators disclosed relationships with Evergrande Group, Janssen, Revolution Medicines, and others. One editorialist reported serving as a member of the USPSTF when the guideline for colorectal cancer was developed, and being a coauthor on the guideline. No other disclosures were reported among editorialists. Dr. Anderson reported no relevant conflicts of interest.
Help your patients understand colorectal cancer prevention and screening options by sharing AGA’s patient education from the GI Patient Center: www.gastro.org/CRC.
FROM JAMA NETWORK OPEN
Average childbirth costs more than $3,000 out of pocket with private insurance
Families with private health insurance pay around $3,000 for newborn delivery and hospitalization, while adding neonatal intensive care can push the bill closer to $5,000, based on a retrospective look at almost 400,000 episodes.
The findings suggest that privately insured families need prenatal financial counseling, as well as screening for financial hardship after delivery, reported lead author Kao-Ping Chua, MD, PhD, assistant professor and health policy researcher in the department of pediatrics and the Susan B. Meister Child Health Evaluation and Research Center at the University of Michigan, Ann Arbor, and colleagues.
“Concern is growing regarding the high and rising financial burden of childbirth for privately insured families,” the investigators wrote in Pediatrics. “Previous studies assessing this burden have focused on out-of-pocket spending for maternal care, including hospitalizations for delivery. However, there are no recent national data on out-of-pocket spending across the childbirth episode, including both deliveries and newborn hospitalizations.”
To address this knowledge gap, Dr. Chua and colleagues turned to Optum’s deidentified Clinformatics Data Mart, comprising 12 million privately insured individuals across the United States. The investigators identified 398,410 childbirth episodes occurring between 2016 and 2019. Each episode was defined as one delivery and at least one newborn hospitalization under the same family plan.
Out-of-pocket cost included copayment plus coinsurance and deductibles. Primary outcomes included mean total out-of-pocket spending and proportion of episodes exceeding $5,000 or $10,000. Subgroup analyses compared differences in spending between episodes involving neonatal intensive care or cesarean birth.
The mean out-of-pocket spending was $2,281 for delivery and $788 for newborn hospitalization, giving a total of $3,068 per childbirth episode. Coinsurance and deductibles accounted for much of that cost, at 55.8% and 42.1%, respectively, whereas copayments accounted for a relatively minor portion (2.2%).
Almost all episodes (95%) cost more than zero dollars, while 17.1% cost more than $5,000 and 1.0% cost more than $10,000. Total mean out-of-pocket spending was higher for episodes involving cesarean birth ($3,389) or neonatal intensive care ($4,969), the latter of which cost more than $10,000 in 8.8% of episodes.
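For readers who want to see how these primary outcomes fit together, here is a minimal sketch of the computation: summing the three cost-sharing components per episode, then taking the mean and the share of episodes above the $5,000 and $10,000 thresholds. The column names and dollar figures are invented for illustration; this is not the study's code or data.

```python
# Illustrative sketch only: primary outcomes from a hypothetical episode table,
# one row per childbirth episode (delivery + newborn hospitalization).
import pandas as pd

# Hypothetical cost-sharing components; all values invented.
episodes = pd.DataFrame({
    "copay":       [50, 120, 0, 300],
    "coinsurance": [1500, 2200, 900, 6100],
    "deductible":  [1300, 1800, 400, 4200],
})

# Total out-of-pocket spending = copay + coinsurance + deductible.
episodes["total_oop"] = episodes[["copay", "coinsurance", "deductible"]].sum(axis=1)

mean_oop = episodes["total_oop"].mean()                   # mean total OOP spending
share_over_5k = (episodes["total_oop"] > 5_000).mean()    # proportion of episodes > $5,000
share_over_10k = (episodes["total_oop"] > 10_000).mean()  # proportion of episodes > $10,000

print(f"mean OOP: ${mean_oop:,.0f}; >$5k: {share_over_5k:.1%}; >$10k: {share_over_10k:.1%}")
```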
“Because details on plan benefit design were unavailable, the generalizability of findings to all privately insured Americans is unclear,” the investigators noted. “However, the proportion of childbirth episodes covered by high-deductible health plans in this study is consistent with the prevalence of such plans among Americans with employer-sponsored insurance.”
The findings suggest that financial reform is needed, Dr. Chua and colleagues concluded.
“To avoid imposing undue financial burden on families, private insurers should improve childbirth coverage,” they wrote. “An incremental step would be providing first-dollar coverage of deliveries and newborn hospitalizations before deductibles are met. Ideally, however, insurers would waive most or all cost-sharing for these hospitalizations, consistent with the approach taken by Medicaid programs and many developed countries.”
According to Madeline Sutton, MD, assistant professor of obstetrics and gynecology at Morehouse School of Medicine, Atlanta, the size of the study is commendable, but some details are lacking.
“Although the overall sample size allows for a robust analysis, deciding to not report the confidence intervals in this report does not allow for understanding of [the findings with] smaller sample sizes,” Dr. Sutton said in an interview.
(Dr. Chua and colleagues noted that they did not report confidence intervals because “all differences between subgroups were significant owing to large sample sizes.”)
“Still,” Dr. Sutton went on, “this is an important study that has implications for financial counseling that may need to be a part of preconceptional, prenatal, and postnatal visits for privately insured families to help with planning and to decrease the chances of childbirth-related financial hardships. Additionally, policy-level changes that decrease or eliminate these private insurance–related childbirth-episode costs that may negatively impact some families with lower incomes, are warranted.”
The study was funded by the National Institutes of Health. Dr. Chua disclosed a grant from the National Institute on Drug Abuse, while Dr. Moniz is supported by the Agency for Healthcare Research and Quality. Dr. Sutton had no relevant disclosures.
FROM PEDIATRICS
Bifidobacteria supplementation regulates newborn immune system
Supplementing breastfed infants with bifidobacteria promotes development of a well-regulated immune system, theoretically reducing risk of immune-mediated conditions like allergies and asthma, according to investigators.
These findings support the importance of early gut colonization with beneficial microbes, an event that may affect the immune system throughout life, reported lead author Bethany M. Henrick, PhD, director of immunology and diagnostics at Evolve Biosystems, Davis, Calif., and adjunct assistant professor at the University of Nebraska, Lincoln, and colleagues.
“Dysbiosis of the infant gut microbiome is common in modern societies and a likely contributing factor to the increased incidences of immune-mediated disorders,” the investigators wrote in Cell. “Therefore, there is great interest in identifying microbial factors that can support healthier immune system imprinting and hopefully prevent cases of allergy, autoimmunity, and possibly other conditions involving the immune system.”
Prevailing theory suggests that the rising incidence of neonatal intestinal dysbiosis – which is typical in developed countries – may be caused by a variety of factors, including cesarean sections; modern hygiene practices; antibiotics, antiseptics, and other medications; diets high in fat and sugar; and infant formula.
According to Dr. Henrick and colleagues, a healthy gut microbiome plays the greatest role in immunological development during the first 3 months post partum; specifically, a lack of bifidobacteria during this time has been linked with increased risks of autoimmunity and enteric inflammation, although underlying immune mechanisms remain unclear.
Bifidobacteria also exemplify the symbiotic relationship between mothers, babies, and beneficial microbes. The investigators pointed out that breast milk contains human milk oligosaccharides (HMOs), which humans cannot digest, but are an excellent source of energy for bifidobacteria and other beneficial microbes, giving them a “selective nutritional advantage.”
Bifidobacteria should therefore be common residents within the infant gut, but this is now often not the case, leading Dr. Henrick and colleagues to zero in on the microbe in hopes of determining exactly how beneficial bacteria shape immune development.
It is only recently that the necessary knowledge and techniques to perform studies like this one have become available, the investigators wrote, noting a better understanding of cell-regulatory relationships, advances in immune profiling at the systems level, and new technology that allows for profiling small-volume samples from infants.
The present study involved a series of observational experiments and a small interventional trial.
First, the investigators conducted a wide array of blood- and fecal-based longitudinal analyses from 208 infants in Sweden to characterize immune cell expansion and microbiome colonization of the gut, with a focus on bifidobacteria.
Their results showed that infants lacking bifidobacteria, and HMO-utilization genes (which are expressed by bifidobacteria and other beneficial microbes), had higher levels of systemic inflammation, including increased T helper 2 (Th2) and Th17 responses.
“Infants not colonized by Bifidobacteriaceae or in cases where these microbes fail to expand during the first months of life there is evidence of systemic and intestinal inflammation, increased frequencies of activated immune cells, and reduced levels of regulatory cells indicative of systemic immune dysregulation,” the investigators wrote.
The interventional part of the study involved 60 breastfed infants in California. Twenty-nine of the newborns were given 1.8 x 10^10 colony-forming units (CFUs) of B. longum subsp. infantis EVC001 daily from postnatal day 7 to day 28, while the remaining 31 infants were given no supplementation.
Fecal samples were collected on day 6 and day 60. At day 60, supplemented infants had high levels of HMO-utilization genes and significantly greater alpha diversity (P = .0001; Wilcoxon), compared with controls. Infants receiving EVC001 also had lower levels of inflammatory cytokines in fecal samples, suggesting that microbes expressing HMO-utilization genes drive a shift away from proinflammatory Th2 and Th17 responses and toward Th1.
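As an illustration of the alpha diversity comparison described above (not the study's actual pipeline), the sketch below computes a Shannon index per fecal sample and compares the two groups with a Wilcoxon rank-sum test, the two-independent-group form of the Wilcoxon comparison reported, implemented in SciPy as the Mann-Whitney U test. The taxon counts are simulated; only the group sizes (29 vs. 31) echo the study.

```python
# Illustrative sketch only: Shannon alpha diversity per sample, then a
# Wilcoxon rank-sum (Mann-Whitney U) comparison between groups.
import numpy as np
from scipy.stats import mannwhitneyu

def shannon_index(counts):
    """Shannon alpha diversity from raw taxon counts for one sample."""
    counts = np.asarray(counts, dtype=float)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p))

# Hypothetical taxon-count tables: one row per infant fecal sample, simulated.
evc001 = [shannon_index(row)
          for row in np.random.default_rng(0).integers(1, 500, size=(29, 40))]
control = [shannon_index(row)
           for row in np.random.default_rng(1).integers(1, 500, size=(31, 40))]

stat, p = mannwhitneyu(evc001, control, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, P = {p:.4f}")
```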
“It is not the simple presence of bifidobacteria that is responsible for the immune effects but the metabolic partnership between the bacteria and HMOs,” the investigators noted.
According to principal investigator Petter Brodin, MD, PhD, professor of pediatric immunology at Karolinska Institutet, Solna, Sweden, the findings deserve further investigation.
“Our data indicate that substitution with beneficial microbes efficiently metabolizing HMOs could open a way to prevent cases of immune-mediated diseases, but larger, randomized trials aimed at this will be required to determine this potential,” Dr. Brodin said in an interview.
Carolynn Dude, MD, PhD, assistant professor in the division of maternal-fetal medicine at Emory University, Atlanta, agreed that more work is needed.
“While this study provides some insight into the mechanisms that may set up a newborn for poor health outcomes later in life, the data is still very limited, and more long-term follow-up on these infants is needed before recommending any sort of bacterial supplementation to a newborn,” Dr. Dude said in an interview.
Dr. Brodin and colleagues are planning an array of related studies, including larger clinical trials; further investigations into mechanisms of action; comparisons between the present cohort and infants in Kenya, where immune-mediated diseases are rare; and evaluations of vaccine responses and infectious disease susceptibility.
The study was supported by the European Research Council, the Swedish Research Council, the Marianne & Marcus Wallenberg Foundation, and others. The investigators disclosed relationships with Cytodelics, Scailyte, Kancera, and others. Dr. Dude reported no relevant conflicts of interest.
FROM CELL
Memory benefit seen with antihypertensives crossing blood-brain barrier
Over a 3-year period, cognitively normal older adults taking antihypertensives that cross the blood-brain barrier (BBB) demonstrated superior verbal memory, compared with similar individuals receiving non–BBB-crossing antihypertensives, reported lead author Jean K. Ho, PhD, of the Institute for Memory Impairments and Neurological Disorders at the University of California, Irvine, and colleagues.
According to the investigators, the findings add color to a known link between hypertension and neurologic degeneration, and may aid the search for new therapeutic targets.
“Hypertension is a well-established risk factor for cognitive decline and dementia, possibly through its effects on both cerebrovascular disease and Alzheimer’s disease,” Dr. Ho and colleagues wrote in Hypertension. “Studies of antihypertensive treatments have reported possible salutary effects on cognition and cerebrovascular disease, as well as Alzheimer’s disease neuropathology.”
In a previous study, individuals younger than 75 years exposed to antihypertensives had an 8% decreased risk of dementia per year of use, while another trial showed that intensive blood pressure–lowering therapy reduced mild cognitive impairment by 19%.
“Despite these encouraging findings ... larger meta-analytic studies have been hampered by the fact that pharmacokinetic properties are typically not considered in existing studies or routine clinical practice,” wrote Dr. Ho and colleagues. “The present study sought to fill this gap [in that it was] a large and longitudinal meta-analytic study of existing data recoded to assess the effects of BBB-crossing potential in renin-angiotensin system [RAS] treatments among hypertensive adults.”
Methods and results
The meta-analysis included randomized clinical trials, prospective cohort studies, and retrospective observational studies. The researchers assessed data on 12,849 individuals from 14 cohorts that received either BBB-crossing or non–BBB-crossing antihypertensives.
The BBB-crossing properties of RAS treatments were identified by a literature review. Of ACE inhibitors, captopril, fosinopril, lisinopril, perindopril, ramipril, and trandolapril were classified as BBB crossing, and benazepril, enalapril, moexipril, and quinapril were classified as non–BBB-crossing. Of ARBs, telmisartan and candesartan were considered BBB-crossing, and olmesartan, eprosartan, irbesartan, and losartan were tagged as non–BBB-crossing.
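The classification step is simple enough to express as a lookup. The sketch below encodes the article's drug lists; the data structure and function are ours, for illustration, not the investigators' code.

```python
# BBB-crossing status of RAS agents, as classified in the article's
# literature review (lookup structure is illustrative, not the authors').
BBB_CROSSING = {
    # ACE inhibitors
    "captopril": True, "fosinopril": True, "lisinopril": True,
    "perindopril": True, "ramipril": True, "trandolapril": True,
    "benazepril": False, "enalapril": False, "moexipril": False, "quinapril": False,
    # ARBs
    "telmisartan": True, "candesartan": True,
    "olmesartan": False, "eprosartan": False, "irbesartan": False, "losartan": False,
}

def crosses_bbb(drug: str) -> bool:
    """Return True if the RAS agent was classified as BBB-crossing."""
    return BBB_CROSSING[drug.lower()]

print(crosses_bbb("Ramipril"), crosses_bbb("losartan"))  # True False
```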
Cognition was assessed via the following seven domains: executive function, attention, verbal memory learning, language, mental status, recall, and processing speed.
Compared with individuals taking non–BBB-crossing antihypertensives, those taking BBB-crossing agents had significantly superior verbal memory (recall), with a maximum effect size of 0.07 (P = .03).
According to the investigators, this finding was particularly noteworthy, as the BBB-crossing group had relatively higher vascular risk burden and lower mean education level.
“These differences make it all the more remarkable that the BBB-crossing group displayed better memory ability over time despite these cognitive disadvantages,” the investigators wrote.
Still, not all the findings favored BBB-crossing agents. Individuals in the BBB-crossing group had relatively inferior attention ability, with a minimum effect size of –0.17 (P = .02).
The other cognitive measures were not significantly different between groups.
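The article does not spell out the pooling model behind these domain-level effect sizes, so purely as a generic illustration, the sketch below combines hypothetical per-cohort effect sizes with inverse-variance weights, a textbook fixed-effect approach. It should not be read as the authors' method; every number is invented.

```python
# Generic illustration only: inverse-variance (fixed-effect) pooling of
# per-cohort effect sizes. Not the study's model; values are invented.
import numpy as np

effects = np.array([0.05, 0.09, 0.03, 0.11])  # hypothetical per-cohort effect sizes
ses     = np.array([0.04, 0.05, 0.03, 0.06])  # hypothetical standard errors

w = 1.0 / ses**2                               # inverse-variance weights
pooled = np.sum(w * effects) / np.sum(w)       # weighted mean effect
pooled_se = np.sqrt(1.0 / np.sum(w))           # standard error of pooled effect

print(f"pooled effect = {pooled:.3f} +/- {1.96 * pooled_se:.3f} (95% CI half-width)")
```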
Clinicians may consider findings after accounting for other factors
Principal investigator Daniel A. Nation, PhD, associate professor of psychological science and a faculty member of the Institute for Memory Impairments and Neurological Disorders at the University of California, Irvine, suggested that the small difference in verbal memory between groups could be clinically significant over a longer period of time.
“Although the overall effect size was pretty small, if you look at how long it would take for someone [with dementia] to progress over many years of decline, it would actually end up being a pretty big effect,” Dr. Nation said in an interview. “Small effect sizes could actually end up preventing a lot of cases of dementia,” he added.
The conflicting results in the BBB-crossing group – better verbal memory but worse attention ability – were “surprising,” he noted.
“I sort of didn’t believe it at first,” Dr. Nation said, “because the memory finding is sort of replication – we’d observed the same exact effect on memory in a smaller sample in another study. ... The attention [finding], going another way, was a new thing.”
Dr. Nation suggested that the intergroup differences in attention ability may stem from idiosyncrasies of the tests used to measure that domain, which can be impacted by cardiovascular or brain vascular disease. Or it could be caused by something else entirely, he said, noting that further investigation is needed.
He added that the improvements in verbal memory within the BBB-crossing group could be caused by direct effects on the brain. He pointed out that certain ACE polymorphisms have been linked with Alzheimer’s disease risk, and those same polymorphisms, in animal models, lead to neurodegeneration, with reversal possible through administration of ACE inhibitors.
“It could be that what we’re observing has nothing really to do with blood pressure,” Dr. Nation explained. “This could be a neuronal effect on learning memory systems.”
He went on to suggest that clinicians may consider these findings when selecting antihypertensive agents for their patients, with the caveat that all other prescribing factors have already been taken into account.
“In the event that you’re going to give an ACE inhibitor or an angiotensin receptor blocker anyway, and it ends up being a somewhat arbitrary decision in terms of which specific drug you’re going to give, then perhaps this is a piece of information you would take into account – that one gets in the brain and one doesn’t – in somebody at risk for cognitive decline,” Dr. Nation said.
Exact mechanisms of action unknown
Hélène Girouard, PhD, assistant professor of pharmacology and physiology at the University of Montreal, said in an interview that the findings are “of considerable importance, knowing that brain alterations could begin as much as 30 years before manifestation of dementia.”
Since 2003, Dr. Girouard has been studying the cognitive effects of antihypertensive medications. She noted that previous studies involving rodents “have shown beneficial effects [of BBB-crossing antihypertensive drugs] on cognition independent of their effects on blood pressure.”
The drugs’ exact mechanisms of action, however, remain elusive, according to Dr. Girouard, who offered several possible explanations, including amelioration of BBB disruption, brain inflammation, cerebral blood flow dysregulation, cholinergic dysfunction, and neurologic deficits. “Whether these mechanisms may explain Ho and colleagues’ observations remains to be established,” she added.
Andrea L. Schneider, MD, PhD, assistant professor of neurology at the University of Pennsylvania, Philadelphia, applauded the study, but ultimately suggested that more research is needed to impact clinical decision-making.
“The results of this important and well-done study suggest that further investigation into targeted mechanism-based approaches to selecting hypertension treatment agents, with a specific focus on cognitive outcomes, is warranted,” Dr. Schneider said in an interview. “Before changing clinical practice, further work is necessary to disentangle contributions of medication mechanism, comorbid vascular risk factors, and achieved blood pressure reduction, among others.”
The investigators disclosed support from the National Institutes of Health, the Alzheimer’s Association, the Waksman Foundation of Japan, and others. The interviewees reported no relevant conflicts of interest.
FROM HYPERTENSION
Sporebiotics improve functional dyspepsia symptoms
Compared with placebo, sporebiotics significantly reduced postprandial distress, epigastric pain, and several other symptoms of functional dyspepsia, reported lead author Lucas Wauters, MD, PhD, of University Hospitals Leuven (Belgium), and colleagues.
“Acid suppressive or first-line therapy with PPIs [proton pump inhibitors] for functional dyspepsia has limited efficacy and potential long-term side effects,” the investigators reported at the annual Digestive Disease Week® (DDW). “Spore-forming bacteria or sporebiotics may be effective for postprandial distress and epigastric pain or burning symptoms, offering benefits which may differ in relation to PPI intake.”
Sporebiotics improve variety of symptoms
To test this hypothesis, the investigators recruited 68 patients with functional dyspepsia who had similar characteristics at baseline. Half of the participants (n = 34) were taking PPIs.
Patients were randomized in a 1:1 ratio to receive 2.5 × 10⁹ CFU of Bacillus coagulans MY01 and B. subtilis MY02 twice daily for 8 weeks, or matching placebo. Following this period, an additional 8-week open-label regimen was instituted, during which time all patients received sporebiotics. Throughout the study, a daily diary was used to self-report symptoms.
The primary outcome, measured at 8 weeks, was clinical response, defined by a decrease in weekly postprandial distress symptoms greater than 0.7 among patients who had a baseline score greater than 1.0. Secondary outcomes included change in postprandial distress symptoms greater than 0.5 (minimal clinical response), as well as changes in cardinal epigastric pain, cardinal postprandial distress, and other symptoms. At baseline and 8 weeks, patients taking PPIs underwent a ¹⁴C-glycocholic acid breath test to detect changes in small intestinal bacterial overgrowth.
At 8 weeks, a clinical response was observed in 48% of patients taking sporebiotics, compared with 20% of those in the placebo group (P = .03). At the same time point, 56% of patients in the treatment group had a minimal clinical response versus 27% in the control group (P = .03).
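To make the endpoint concrete, here is a minimal Python sketch, not the trial's analysis code, of the responder rule defined above and a two-proportion comparison of the kind reported here. The patient counts in the 2x2 table are hypothetical, chosen only to roughly mirror the reported percentages.

    # Minimal sketch (not the trial's code) of the responder definition
    # and an exact test comparing responder rates between arms.
    from scipy.stats import fisher_exact

    def clinical_response(baseline, week8, drop=0.7, min_baseline=1.0):
        """Primary outcome: weekly postprandial distress score falls by
        more than `drop`, assessed only in patients whose baseline score
        exceeded `min_baseline` (per the trial's definition)."""
        return baseline > min_baseline and (baseline - week8) > drop

    # Hypothetical 2x2 table [responders, non-responders] per arm,
    # invented to approximate the reported 48% vs. 20% response rates.
    table = [[13, 14],   # sporebiotics arm (assumed 27 evaluable)
             [5, 20]]    # placebo arm (assumed 25 evaluable)
    odds_ratio, p_value = fisher_exact(table)
    print(f"OR = {odds_ratio:.2f}, P = {p_value:.3f}")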
Spore-forming probiotics were also associated with significantly greater improvements in cardinal postprandial distress, cardinal epigastric pain, postprandial fullness, and upper abdominal pain. A trend toward improvement in upper abdominal bloating was also seen (P = .07).
Among patients taking PPIs, baseline rates of positivity for bile acid breath testing were similar between those in the sporebiotic and placebo group, at 18% and 25%, respectively (P = .29). After 8 weeks, however, patients taking spore-forming probiotics had a significantly lower rate of bile acid breath test positivity (7% vs. 36%; P = .04), suggesting improvements in small intestinal bacterial overgrowth.
In the open-label portion of the trial, patients in the treatment group maintained improvements in postprandial distress. Patients who switched from placebo to sporebiotics had a significant reduction in postprandial distress symptoms.
At 8 weeks, sporebiotics were associated with a trend toward fewer side effects of any kind (16% vs. 33%; P = .09), while rates of GI-specific side effects were comparable between groups, at 3% and 15% for sporebiotics and placebo, respectively (P = .2).
“Spore-forming probiotics are effective and safe in patients with functional dyspepsia, decreasing both postprandial distress and epigastric pain symptoms,” the investigators concluded. “In patients [taking PPIs], sporebiotics decrease the percentage of positive bile acid breath tests, suggesting a reduction of small intestinal bacterial overgrowth.”
Results are promising, but big questions remain
Pankaj Jay Pasricha, MBBS, MD, vice chair of medicine innovation and commercialization at Johns Hopkins and director of the Johns Hopkins Center for Neurogastroenterology, Baltimore, called the results “very encouraging.”
“This [study] is the first of its kind for this condition,” Dr. Pasricha said in an interview. “It will be very interesting to see whether others can reproduce these findings, and whether [these improvements] are sustained beyond the first few weeks or months.”
He noted that determining associated mechanisms of action could potentially open up new lines of therapy, and provide greater understanding of pathophysiology, which is currently lacking.
“We don’t fully understand the pathophysiology [of functional dyspepsia],” Dr. Pasricha said. “If you don’t understand the pathophysiology, then it’s difficult to identify the right molecular target to address the root cause. Instead, we use a variety of symptomatic treatments that aren’t actually addressing the root cause, but studies like this may help us gain some insight into the cause of the problem, and if it is in fact a fundamental imbalance in the intestinal microbiota, then this would be a rational approach.”
It’s unclear how sporebiotics may improve functional dyspepsia, Dr. Pasricha noted. He proposed three possible mechanisms: the bacteria could be colonizing the intestine, they could be releasing products as they pass through the intestine that have a therapeutic effect, or they may be altering bile acid metabolism in the colon or having some other effect there.
“It’s speculative on my part to say how it works,” Dr. Pasricha said. “All the dots remain to be connected. But it’s a good start, and an outstanding group of investigators.”
Dr. Wauters and colleagues reported no conflicts of interest. Dr. Pasricha disclosed a relationship with Pendulum Therapeutics.
FROM DDW 2021
Some nasogastric intubation procedures lead to less aerosolization than feared
Nasogastric intubation for esophageal manometry or impedance monitoring does not generate significant aerosol particles and is associated with minimal droplet spread, according to a Belgian study presented at the annual Digestive Disease Week® (DDW). These findings suggest that standard personal protective equipment and appropriate patient positioning are likely sufficient to protect health care workers from increased risk of coronavirus transmission during tube placement and removal, reported lead author Wout Verbeure, PhD, of Leuven University Hospital, Belgium, and colleagues.
“Subsequent to the COVID-19 peak, [nasogastric tube insertion and extraction] were scaled back based on the assumption that they generate respiratory aerosol particles and droplet spread,” the investigators reported. “However, there is no scientific evidence for this theory.”
To address this knowledge gap, the investigators conducted an observational trial involving SARS-CoV-2-negative patients and including 21 insertions and removals for high-resolution manometry (HRM), plus 12 insertions and 10 removals for 24-hour multichannel intraluminal impedance-pH monitoring (MII-pH). During the study, a Camfil City M Air Purifier was added to the examination room. This was present during 13 of the 21 HRM insertions and removals, allowing for comparison of aerosol particle measurements before and after introduction of the device.
The mechanics of the study
Aerosol particles (0.3-10 mcm) were measured with a Particle Measuring Systems LASAIR II Particle Counter positioned 1 cm away from the patient’s face. For both procedures, measurements were taken before, during, and up to 5 minutes after each nasogastric tube placement and removal. Additional measurements were taken while the HRM examination was being conducted.
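As a rough illustration of how such before-and-after particle counts can be compared, the Python sketch below runs a paired nonparametric test on made-up measurements. This is not the study's analysis, and every value in it is invented for illustration.

    # Illustrative only: paired comparison of particle counts measured
    # before vs. after a tube placement. All values are invented.
    import numpy as np
    from scipy.stats import wilcoxon

    before = np.array([1200, 980, 1430, 1100, 1325, 1010])  # hypothetical counts
    after = np.array([1150, 940, 1395, 1080, 1290, 995])    # hypothetical counts

    stat, p = wilcoxon(before, after)  # paired, nonparametric test
    print(f"Wilcoxon statistic = {stat}, P = {p:.3f}")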
To measure droplet spread, 1% medical fluorescein in saline was applied to each patient’s nasal cavity; droplets were visualized on a white sheet covering the patient and a white apron worn by the health care worker. The patients’ masks were kept below their noses but were covering their mouths.
“During the placement and removal of the catheter, the health care worker was always standing sideways or even behind the patient, and they always stood higher relative to the patient to ensure that when there was aerosol or droplet spread, it was not in their direction,” Dr. Verbeure said during his virtual presentation.
During placement for HRM and removal for MII-pH, aerosol particles (excluding those that were 0.3 mcm) decreased significantly. Otherwise, particle counts remained stable. “This shows that these investigations do not generate additional aerosol [particles], which is good news,” Dr. Verbeure said.
When the air purifier was present, placement and examination for HRM were associated with significant reductions in aerosol particles (excluding those that were 0.3 mcm or 0.5 mcm), whereas removal caused a slight uptick in aerosol particles (excluding those that were 0.3 mcm or 0.5 mcm) that did not decline after 5 minutes. “This was actually a surprise to us,” Dr. Verbeure said. “Because we now had an air purifier present, and we expected an even lower number of particles.”
He suggested that the purifier may have been reducing particle counts during the HRM examination, thereby lowering baseline values before removal and making small changes more noticeable; alternatively, the purifier may have been causing turbulence that spread particles during removal. Whichever explanation is correct, Dr. Verbeure noted that particle counts were never higher than at the start of the examination. Fluorescein visualization showed “surprisingly little droplet spread,” Dr. Verbeure said, apart from some contamination around the patient’s neck.
“Esophageal investigations do not seem to generate additional [aerosol] particles,” Dr. Verbeure concluded. “So wearing the recommended protective gear and also considering the right positioning of the health care worker relative to the patient is important to keep performing this daily clinical routine.” To avoid droplet spread, health care workers should “be aware of the [patient’s] neck region and the direction of the catheter,” Dr. Verbeure added.
SORTing the results
According to Mahdi Najafi, MD, associate professor in the department of anesthesiology at Tehran University of Medical Sciences, Iran, and adjunct professor at Schulich School of Medicine & Dentistry, Western University, London, Ontario, the findings offer valuable insights. “[This study] is very important for at least two reasons: The extent of using this procedure in patient care, especially in the critical care setting, and the paucity of information for COVID-19 transmission and route of transmission as well,” Dr. Najafi said in an interview.
Yet he cautioned against generalizing the results. “We cannot extend the results to all nasogastric tube intubations,” Dr. Najafi said. “There are reasons for that. The tube for manometry is delicate and flexible, while the nasogastric tube used for drainage and GI pressure release – which is used commonly in intensive care and the operating room – is larger and rather rigid. Moreover, the patient is awake and conscious for manometry while the other procedures are done in sedated or unconscious patients.”
He noted that nasogastric intubation is more challenging in unconscious patients, and often requires a laryngoscope and/or Magill forceps. “The result [of using these instruments] is coughing, which is undoubtedly the most important cause of aerosol generation,” Dr. Najafi said. “It can be regarded as a drawback to this study as well. The authors would be better to report the number and/or severity of the airway reactions during the procedures, which are the main source of droplets and aerosols.”
To reduce risk of coronavirus transmission during nasogastric intubation of unconscious patients, Dr. Najafi recommended the SORT (Sniffing position, nasogastric tube Orientation, contralateral Rotation, and Twisting movement) maneuver, which he introduced in 2016 for use in critical care and operating room settings.
“The employment of anatomical approach and avoiding equipment for intubation were devised to increase the level of safety and decrease hazards and adverse effects,” Dr. Najafi said of the SORT maneuver. “The procedure needs to be done step-by-step and as smooth as possible.”
In a recent study, the SORT maneuver was compared with nasogastric intubation using neck flexion lateral pressure in critically ill patients. The investigators concluded that the SORT maneuver is “a promising method” notable for its simple technique, and suggested that more trials are needed.
The investigators and Dr. Najafi reported no conflicts of interest.
FROM DDW 2021
Head-to-head trial compares ustekinumab with adalimumab in Crohn’s
For biologic-naive adults with moderate to severe Crohn’s disease, treatment with adalimumab or ustekinumab leads to similar outcomes, according to results of the head-to-head SEAVUE trial.
When lead author Bruce E. Sands, MD, of Icahn School of Medicine at Mount Sinai, New York, compared treatment arms, patients had similar rates of clinical remission at 1 year. All major secondary endpoints, such as endoscopic remission, were comparable, as were safety profiles, Dr. Sands reported at the annual Digestive Disease Week® (DDW).
“From my perspective, this is an important study,” Dr. Sands wrote in a virtual chat following his presentation. “We need more head-to-head studies!”
Results from the SEAVUE trial come almost 2 years after Dr. Sands reported findings of another head-to-head IBD trial: VARSITY, which demonstrated the superiority of vedolizumab over adalimumab among patients with moderate to severe ulcerative colitis.
The multicenter, double-blinded SEAVUE trial involved 386 biologic-naive patients with Crohn’s disease who had failed corticosteroids or immunomodulators. All patients had Crohn’s Disease Activity Index (CDAI) scores ranging from 220 to 450 and had at least one ulcer detected at baseline ileocolonoscopy.
Participants were randomized in a 1:1 ratio to receive monotherapy with either subcutaneous adalimumab (citrate-free; 160 mg at baseline, 80 mg at week 2, then 40 mg every 2 weeks) or ustekinumab, which was given first intravenously at a dose of 6 mg/kg, then subcutaneously at 90 mg every 8 weeks.
The primary endpoint was clinical remission at week 52, defined by a CDAI score less than 150. Major secondary endpoints included clinical response, corticosteroid-free remission, endoscopic remission, remission in patient-reported CDAI components, and clinical remission at week 16.
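As a concrete reading of these definitions, the short Python sketch below encodes the enrollment window and the primary endpoint threshold. The thresholds come from the article; the function names are ours for illustration.

    # Minimal sketch of the eligibility window and primary endpoint
    # described above; thresholds are from the article, names are ours.
    def eligible_at_baseline(cdai: float) -> bool:
        """SEAVUE enrolled patients with baseline CDAI of 220-450."""
        return 220 <= cdai <= 450

    def clinical_remission(cdai: float) -> bool:
        """Primary endpoint: CDAI score below 150 at week 52."""
        return cdai < 150

    print(eligible_at_baseline(300), clinical_remission(120))  # True True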
Results were statistically similar across all endpoints, with clinical remission at 1 year occurring in 64.9% and 61.0% of patients receiving ustekinumab and adalimumab, respectively (P = .417).
“Both treatments demonstrated rapid onset of action and robust endoscopy results,” Dr. Sands noted during his presentation; he reported comparable rates of endoscopic remission, at 28.5% and 30.7% for ustekinumab and adalimumab, respectively (P = .631).
Among secondary endpoints, ustekinumab demonstrated some superiority, with greater maintenance of clinical response at week 52 among patients with response at week 16 (88.6% vs. 78.0%; P = .016), greater reduction in liquid/soft stools in prior 7 days from baseline to week 52 (–19.9 vs. –16.2; P = .004), and greater reduction in sum number of liquid/soft stools and abdominal pain scores in prior 7 days from baseline to week 52 (–29.6 vs. –25.1; P = .013).
Safety metrics were similar between groups and consistent with previous experience. Although the adalimumab group had a numerically higher rate of discontinuation due to adverse events, the difference was not statistically significant (11.3% vs. 6.3%; P value not provided).
Don’t ignore discontinuation rates
Jordan E. Axelrad, MD, assistant professor of medicine at NYU and a clinician at the Inflammatory Bowel Disease Center at NYU Langone Health, New York, commended the SEAVUE trial for its head-to-head design, which is a first for biologics in Crohn’s disease.
“With newer drugs, there’s a critical need for head-to-head studies for us to understand where to position a lot of these agents,” he said in an interview. “[T]his was a good undifferentiated group to understand what’s the first biologic you should use in a patient with moderate-to-severe Crohn’s disease. The primary, major take-home is that [ustekinumab and adalimumab] are similarly effective.”
When asked about the slight superiority in minor secondary endpoints associated with ustekinumab, Dr. Axelrad suggested that rates of discontinuation deserve more attention.
“For me, maybe the major focus would be on the number of patients who stopped treatment,” Dr. Axelrad said, noting a higher rate of discontinuation in the adalimumab group. “Although that was just numerical, that to me is actually more important than [the minor secondary endpoints].” He also highlighted the lower injection burden associated with ustekinumab, which is given every 8 weeks, compared with every 2 weeks for adalimumab.
Ultimately, however, it’s unlikely that treatment sequencing will depend on these finer points, Dr. Axelrad suggested; the decision will instead come down to finances, especially with adalimumab biosimilars on the horizon, which may make adalimumab the most cost-effective option.
“A lot of the decision-making of where to position [ustekinumab in Crohn’s disease] is going to come down to the payer,” Dr. Axelrad said. “If there was a clear signal, providers such as myself would have a better leg to stand on, like we saw with VARSITY, where vedolizumab was clearly superior to adalimumab on multiple endpoints. We didn’t see that sort of robust signal here.”
The SEAVUE trial was supported by Janssen Scientific Affairs. Dr. Sands disclosed relationships with Janssen, AbbVie, Takeda, and others. Dr. Axelrad disclosed previous consulting fees from Janssen and research support from BioFire.
FROM DDW 2021