Methotrexate Shows Signs of Relieving Painful Knee Osteoarthritis
TOPLINE:
The antimetabolite and immunosuppressant methotrexate, taken orally and in addition to usual analgesia, alleviates pain in patients with knee osteoarthritis.
METHODOLOGY:
- Investigators conducted the phase 3 randomized controlled PROMOTE trial among 155 patients in the United Kingdom with painful radiographic knee osteoarthritis and an inadequate response to their current medication.
- Patients were assigned to oral methotrexate once weekly (6-week escalation from 10 to 25 mg) or placebo for 12 months, added to usual analgesia.
- The main outcome was average knee pain at 6 months on a numerical rating scale from 0 to 10.
TAKEAWAY:
- At 6 months, mean scores for knee pain had decreased by 1.3 points in the methotrexate group and 0.6 points in the placebo group (difference by intention to treat, 0.79 points; P = .030).
- The methotrexate group also saw greater benefit in terms of Western Ontario and McMaster Universities Osteoarthritis Index scores for stiffness (difference, 0.60 points; P = .045) and physical function (difference, 5.01 points; P = .008).
- Differences between groups were no longer significant at 12 months.
- Benefit of methotrexate appeared to be dose related.
- The groups were similar with respect to nausea and diarrhea; four serious adverse events (two per group) were deemed unrelated to study treatment.
IN PRACTICE:
“Further work is required to understand adequate methotrexate dosing, whether benefits are greater in those with elevated systemic inflammation levels, and to consider cost-effectiveness before introducing this therapy for a potentially large population,” the authors wrote.
SOURCE:
The study was led by Sarah R. Kingsbury, PhD, University of Leeds and National Institute for Health and Care Research Leeds Biomedical Research Centre, Leeds, England, and was published online in Annals of Internal Medicine.
LIMITATIONS:
Limitations included a decrease in methotrexate dose between 6 and 12 months, no option to switch to subcutaneous administration for intolerance, and a lack of assessment of the effectiveness of blinding.
DISCLOSURES:
The study was funded by Versus Arthritis, a charity that supports people with arthritis. Some authors reported affiliations with Versus Arthritis and/or companies that develop drugs for arthritis.
A version of this article appeared on Medscape.com.
How Common Is Pediatric Emergency Mistriage?
Only about one third of pediatric emergency department visits were triaged correctly, according to a multicenter retrospective study published in JAMA Pediatrics. Researchers also identified gender, age, race, ethnicity, and comorbidity disparities in those who were undertriaged.
The researchers found that only 34.1% of visits were correctly triaged, while 58.5% were overtriaged and 7.4% were undertriaged. The findings were based on analysis of more than 1 million pediatric emergency visits over a 5-year period that used the Emergency Severity Index (ESI) version 4 for triage.
“The ESI had poor sensitivity in identifying a critically ill pediatric patient, and undertriage occurred in 1 in 14 children,” wrote Dana R. Sax, MD, a senior emergency physician at The Permanente Medical Group in northern California, and her colleagues.
“More than 90% of pediatric visits were assigned a mid to low triage acuity category, and actual resource use and care intensity frequently did not align with ESI predictions,” the authors wrote. “Our findings highlight an opportunity to improve triage for pediatric patients to mitigate critical undertriage, optimize resource decisions, standardize processes across time and setting, and promote more equitable care.”
The authors added that the study findings are currently being used by the Permanente system “to develop standardized triage education across centers to improve early identification of high-risk patients.”
Disparities in Emergency Care
The results underscore the need for more work to address disparities in emergency care, wrote Warren D. Frankenberger, PhD, RN, a nurse scientist at Children’s Hospital of Philadelphia, and two colleagues in an accompanying editorial.
“Decisions in triage can have significant downstream effects on subsequent care during the ED visit,” they wrote in their editorial. “Given that the triage process in most instances is fully executed by nurses, nurse researchers are in a key position to evaluate these and other covariates to influence further improvements in triage.” They suggested that use of clinical decision support tools and artificial intelligence (AI) may improve the triage process, albeit with the caveat that AI often relies on models with pre-existing historical bias that may perpetuate structural inequalities.
Study Methodology
The researchers analyzed 1,016,816 pediatric visits at 21 emergency departments in Kaiser Permanente Northern California between January 2016 and December 2020. The patients were an average of 7 years old, and 47% were female. The researchers excluded visits that lacked ESI data or had incomplete ED time variables, as well as those in which patients left against medical advice, were not seen, or were transferred from another ED.
The study relied on novel definitions of ESI undertriage and overtriage developed through a modified Delphi process by a team of four emergency physicians, one pediatric emergency physician, two emergency nurses, and one pediatric ICU physician. The definitions compared ESI levels with clinical outcomes and resource use.
Resources included laboratory analysis, electrocardiography, radiography, CT, MRI, diagnostic ultrasonography (not point of care), angiography, IV fluids, and IV, intramuscular, or nebulized medications. Resources did not include “oral medications, tetanus immunizations, point-of-care testing, history and physical examination, saline or heparin lock, prescription refills, simple wound care, crutches, splints, and slings.”
Level 1 events were those requiring time-sensitive, critical intervention, including high-risk sepsis. Level 2 events included most level 1 events that occurred after the first hour (except operating room admission or hospital transfer) as well as respiratory therapy, toxicology consult, lumbar puncture, suicidality as chief concern, at least 2 doses of albuterol or continuous albuterol nebulization, a skeletal survey x-ray order, and medical social work consult with an ED length of stay of at least 2 hours. Level 3 events included IV medication order, any CT order, OR admission or hospital transfer after one hour, or any pediatric hospitalist consult.
Analyzing the ED Visits
Overtriaged cases were ESI level 1 or 2 cases in which fewer than 2 resources were used; level 3 cases where fewer than 2 resources were used and no level 1 or 2 events occurred; and level 4 cases where no resources were used.
Undertriaged cases were defined as the following (see the sketch after this list):
- ESI level 5 cases where any resource was used and any level 1, 2, or 3 events occurred.
- Level 4 cases where more than 1 resource was used and any level 1, 2, or 3 events occurred.
- Level 3 cases where any level 1 event occurred, more than one level 2 event occurred, or any level 2 event occurred and more than one additional ED resource type was used.
- Level 2 cases where any level 1 event occurred.
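Taken together, the overtriage and undertriage rules above form a small deterministic classifier. The following Python sketch is a minimal illustration of that logic, not the study's own code: the function and argument names are invented for the example, undertriage is checked before overtriage by assumption, and "more than one additional ED resource type" is approximated by a simple resource count.

```python
def classify_triage(esi: int, n_resources: int, n_l1: int, n_l2: int, n_l3: int) -> str:
    """Classify one ED visit against its assigned ESI level (1-5).

    n_resources: count of distinct qualifying resource types used
    n_l1/n_l2/n_l3: counts of level 1/2/3 events during the visit
    """
    any_event = (n_l1 + n_l2 + n_l3) > 0

    # Undertriage: assigned acuity was too low for what actually happened.
    if esi == 5 and n_resources >= 1 and any_event:
        return "undertriage"
    if esi == 4 and n_resources > 1 and any_event:
        return "undertriage"
    if esi == 3 and (n_l1 >= 1 or n_l2 > 1 or (n_l2 >= 1 and n_resources > 1)):
        return "undertriage"
    if esi == 2 and n_l1 >= 1:
        return "undertriage"

    # Overtriage: assigned acuity was too high for the care delivered.
    if esi in (1, 2) and n_resources < 2:
        return "overtriage"
    if esi == 3 and n_resources < 2 and n_l1 == 0 and n_l2 == 0:
        return "overtriage"
    if esi == 4 and n_resources == 0:
        return "overtriage"

    return "correct"

# Example: an ESI level 4 visit that used three resource types and had a
# level 2 event counts as undertriaged under these definitions.
print(classify_triage(esi=4, n_resources=3, n_l1=0, n_l2=1, n_l3=0))  # undertriage
```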
About half the visits (51%) were assigned ESI level 3, which was the category with the highest proportion of mistriage. After adjusting for study facility and triage vital signs, the researchers found that children aged 6 years or older were more likely to be undertriaged than those younger than 6 years, particularly those aged 15 years or older (relative risk [RR], 1.36).
Undertriage was also modestly more likely with male patients (female patients’ RR, 0.93), patients with comorbidities (RR, 1.11-1.2), patients who arrived by ambulance (RR, 1.04), and patients who were Asian (RR, 1.10), Black (RR, 1.05), or Hispanic (RR, 1.04). Undertriage became gradually less likely with each additional year in the study period, with an RR of 0.89 in 2019 and 2020.
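For readers interpreting these figures, relative risk is simply the ratio of risks between two groups; this is standard epidemiologic usage rather than anything specific to this study:

$$\mathrm{RR} = \frac{\text{risk in comparison group}}{\text{risk in reference group}}$$

So the female patients' RR of 0.93 corresponds to roughly 7% lower undertriage risk relative to male patients, and an RR of 1.36 to roughly 36% higher risk.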
Among the study’s limitations were the use of ESI version 4 instead of the currently used version 5, and the omission of common procedures from the outcome definition, which “may systematically bias the analysis toward overtriage,” the editorial noted. The authors also did not include pain, which can often indicate patient acuity, as a variable in the analysis.
Further, this study was unable to include covariates identified in other research that may influence clinical decision-making, such as “the presenting illness or injury, children with complex medical needs, and language proficiency,” Dr. Frankenberger and colleagues wrote. “Furthermore, environmental stressors, such as ED volume and crowding, can influence how a nurse prioritizes care and may increase bias in decision-making and/or increase practice variability.”
The study was funded by the Kaiser Permanente Northern California (KPNC) Community Health program. One author had consulting payments from CSL Behring and Abbott Point-of-Care, and six of the authors have received grant funding from the KPNC Community Health program. The editorial authors reported no conflicts of interest.
FROM JAMA PEDIATRICS
Colorectal Cancer: New Primary Care Method Predicts Onset Within Next 2 Years
TOPLINE:
Up to 16% of primary care patients are noncompliant with fecal immunochemical testing (FIT), the gold standard for predicting colorectal cancer (CRC); a new integrated risk model may help identify symptomatic patients likely to be diagnosed within 2 years.
METHODOLOGY:
- This study was a retrospective cohort of 50,387 UK Biobank participants who reported a CRC symptom in primary care at age ≥ 40 years.
- The novel method, called an integrated risk model, used a combination of a polygenic risk score from genetic testing, symptoms, and patient characteristics to identify patients likely to develop CRC in the next 2 years.
- The primary outcome was the risk model’s performance in classifying a CRC case according to a statistical metric, the receiver operating characteristic area under the curve. Values range from 0 to 1, where 1 indicates perfect discriminative power and 0.5 indicates no discriminative power.
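As context for the values reported below, the area under the receiver operating characteristic curve has a standard probabilistic reading (a general property of the metric, not a result of this study):

$$\mathrm{AUC} = \Pr\left(\text{risk score of a random case} > \text{risk score of a random control}\right)$$

That is, 0.5 corresponds to ranking cases above controls no better than chance, and 1 to perfectly ranking every case above every control.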
TAKEAWAY:
- The cohort of 50,387 participants included 438 cases of CRC and 49,949 controls without CRC within 2 years of symptom reporting; cases were ascertained from hospital records, cancer registries, or death records.
- Increased risk of a CRC diagnosis was identified by a combination of six variables: older age at index date of symptom, higher polygenic risk score, which included 201 variants, male sex, previous smoking, rectal bleeding, and change in bowel habit.
- The polygenic risk score alone had good ability to distinguish cases from controls because 1.45% of participants in the highest quintile and 0.42% in the lowest quintile were later diagnosed with CRC.
- The variables were used to calculate an integrated risk model, which estimated the cross-sectional risk of a subsequent CRC diagnosis within 2 years (derived in 80% of the final cohort). The highest-scoring integrated risk model had a receiver operating characteristic area under the curve of 0.76 (95% CI, 0.71-0.81), indicating moderate discriminative ability; values between 0.7 and 0.8 are considered moderate, above 0.8 strong, and below 0.7 weak.
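The term “integrated risk model” refers to combining the six predictors into a single score. The article does not specify the model’s exact functional form, so the following Python sketch is a hypothetical illustration of one common approach (a logistic combination); the function name, arguments, and weights are invented for the example and are not taken from the study.

```python
import math

def integrated_risk(age, prs, male, ever_smoked, rectal_bleeding, bowel_change,
                    weights, intercept):
    """Return an estimated 2-year CRC probability from the six reported
    predictors, combined linearly and passed through a logistic link."""
    x = [age, prs, float(male), float(ever_smoked),
         float(rectal_bleeding), float(bowel_change)]
    z = intercept + sum(w * v for w, v in zip(weights, x))
    return 1.0 / (1.0 + math.exp(-z))  # logistic function maps z to (0, 1)

# Hypothetical weights; real coefficients would come from fitting the model.
p = integrated_risk(age=65, prs=1.2, male=True, ever_smoked=True,
                    rectal_bleeding=True, bowel_change=False,
                    weights=[0.03, 0.5, 0.3, 0.2, 0.9, 0.7], intercept=-8.0)
print(f"estimated 2-year risk: {p:.1%}")
```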
IN PRACTICE:
The authors concluded, “The [integrated risk model] developed in this study predicts, with good accuracy, which patients presenting with CRC symptoms in a primary care setting are likely to be diagnosed with CRC within the next 2 years.”
The integrated risk model has “potential to be immediately actionable in the clinical setting … [by] inform[ing] patient triage, improving early diagnostic rates and health outcomes and reducing pressure on diagnostic secondary care services.”
SOURCE:
The corresponding author is Harry D. Green of the University of Exeter, England. The study was published online on August 1, 2024, in the European Journal of Human Genetics (doi: 10.1038/s41431-024-01654-3).
LIMITATIONS:
Limitations included an observational design and the inability of the integrated risk model to outperform FIT, which has a receiver operating characteristic area under the curve of 0.95.
DISCLOSURES:
None of the authors reported competing interests. The funding sources included the National Institute for Health and Care Research and others.
A version of this article first appeared on Medscape.com.
Scarring Head Wound
The Diagnosis: Brunsting-Perry Cicatricial Pemphigoid
Physical examination and histopathology are paramount in diagnosing Brunsting-Perry cicatricial pemphigoid (BPCP). In our patient, histopathology showed subepidermal blistering with a mixed superficial dermal inflammatory cell infiltrate. Direct immunofluorescence showed linear deposition of IgG and C3 along the basement membrane zone. The scarring erosions on the scalp combined with the direct immunofluorescence findings were consistent with BPCP. He was started on dapsone 100 mg daily and demonstrated complete resolution of symptoms after 10 months, with the exception of persistent scarring hair loss (Figure).
Brunsting-Perry cicatricial pemphigoid is a rare dermatologic condition. It was first defined in 1957, when Brunsting and Perry1 examined 7 patients with cicatricial pemphigoid that predominantly affected the head and neck region, with occasional mucous membrane involvement but no mucosal scarring. Characteristically, BPCP manifests as herpetiform plaques with varied blisters, erosions, crusts, and scarring.1 It primarily affects middle-aged men.2
Historically, BPCP has been considered a variant of cicatricial pemphigoid (now known as mucous membrane pemphigoid), bullous pemphigoid, or epidermolysis bullosa acquisita.3 The antigen target has not been established clearly; however, autoantibodies against laminin 332, collagen VII, and BP180 and BP230 have been proposed.2,4,5 Jacoby et al6 described BPCP on a spectrum with bullous pemphigoid and cicatricial pemphigoid, with primarily circulating autoantibodies on one end and tissue-fixed autoantibodies on the other.
The differential for BPCP also includes anti-p200 pemphigoid and anti–laminin 332 pemphigoid. Anti-p200 pemphigoid, also known as anti–laminin gamma-1 pemphigoid, is characterized by antibodies against a 200-kDa protein of the basement membrane zone.7 It may manifest clinically like bullous pemphigoid and other subepidermal autoimmune blistering diseases; thus, immunopathologic differentiation can be helpful. Anti–laminin 332 pemphigoid is characterized by autoantibodies targeting the laminin 332 protein in the basement membrane zone, resulting in blistering and erosions.8 Similar to BPCP and epidermolysis bullosa acquisita, anti–laminin 332 pemphigoid may affect cephalic regions and mucous membrane surfaces, resulting in scarring and cicatricial changes, and it also has been associated with internal malignancy.8 The salt-split skin technique can differentiate these entities based on their autoantibody-binding patterns in relation to the lamina densa.
Treatment options for mild BPCP include potent topical or intralesional steroids and dapsone, while more severe cases may require systemic therapy with rituximab, azathioprine, mycophenolate mofetil, or cyclophosphamide.4
This case highlights the importance of histopathologic examination of skin lesions with an unusual history or clinical presentation. Dermatologists should consider BPCP when presented with erosions, ulcerations, or blisters of the head and neck in middle-aged male patients.
- Brunsting LA, Perry HO. Benign pemphigoid? a report of seven cases with chronic, scarring, herpetiform plaques about the head and neck. AMA Arch Derm. 1957;75:489-501. doi:10.1001/archderm.1957.01550160015002
- Jedlickova H, Neidermeier A, Zgažarová S, et al. Brunsting-Perry pemphigoid of the scalp with antibodies against laminin 332. Dermatology. 2011;222:193-195. doi:10.1159/000322842
- Eichhoff G. Brunsting-Perry pemphigoid as differential diagnosis of nonmelanoma skin cancer. Cureus. 2019;11:E5400. doi:10.7759/cureus.5400
- Asfour L, Chong H, Mee J, et al. Epidermolysis bullosa acquisita (Brunsting-Perry pemphigoid variant) localized to the face and diagnosed with antigen identification using skin deficient in type VII collagen. Am J Dermatopathol. 2017;39:e90-e96. doi:10.1097/DAD.0000000000000829
- Zhou S, Zou Y, Pan M. Brunsting-Perry pemphigoid transitioning from previous bullous pemphigoid. JAAD Case Rep. 2020;6:192-194. doi:10.1016/j.jdcr.2019.12.018
- Jacoby WD Jr, Bartholome CW, Ramchand SC, et al. Cicatricial pemphigoid (Brunsting-Perry type): case report and immunofluorescence findings. Arch Dermatol. 1978;114:779-781. doi:10.1001/archderm.1978.01640170079018
- Kridin K, Ahmed AR. Anti-p200 pemphigoid: a systematic review. Front Immunol. 2019;10:2466. doi:10.3389/fimmu.2019.02466
- Shi L, Li X, Qian H. Anti-laminin 332-type mucous membrane pemphigoid. Biomolecules. 2022;12:1461. doi:10.3390/biom12101461
A 60-year-old man presented to a dermatology clinic with a wound on the scalp that had persisted for 11 months. The lesion started as a small erosion that eventually progressed to involve the entire parietal scalp. He had a history of type 2 diabetes mellitus, hypertension, and Graves disease. Physical examination demonstrated a large scar over the vertex scalp with central erosion, overlying crust, peripheral scalp atrophy, hypopigmentation at the periphery, and exaggerated superficial vasculature. Some oral erosions also were observed. A review of systems was negative for any constitutional symptoms. A month prior, the patient had been started on dapsone 50 mg with a prednisone taper by an outside dermatologist and noticed some improvement.
Family Size, Dog Ownership Linked With Reduced Risk of Crohn’s
A larger family size in the first year of life and living with a pet dog in childhood are associated with a reduced risk of developing Crohn’s disease (CD), according to investigators.
Those who live with a pet bird may be more likely to develop CD, although few participants in the study lived with birds, so this finding warrants cautious interpretation, lead author Mingyue Xue, PhD, of Mount Sinai Hospital, Toronto, Ontario, Canada, and colleagues reported.
“Environmental factors, such as smoking, large families, urban environments, and exposure to pets, have been shown to be associated with the risk of CD development,” the investigators wrote in Clinical Gastroenterology and Hepatology. “However, most of these studies were based on a retrospective study design, which makes it challenging to understand when and how environmental factors trigger the biological changes that lead to disease.”
The present study prospectively followed 4289 asymptomatic first-degree relatives (FDRs) of patients with CD. Environmental factors were identified via regression models that also considered biological factors, including gut inflammation via fecal calprotectin (FCP) levels, altered intestinal permeability measured by urinary fractional excretion of lactulose to mannitol ratio (LMR), and fecal microbiome composition through 16S rRNA sequencing.
After a median follow-up period of 5.62 years, 86 FDRs (1.9%) developed CD.
Living in a household of at least three people in the first year of life was associated with a 57% reduced risk of CD development (hazard ratio [HR], 0.43; P = .019). Similarly, living with a pet dog between the ages of 5 and 15 also demonstrated a protective effect, dropping risk of CD by 39% (HR, 0.61; P = .025).
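The percentage reductions quoted above follow directly from the hazard ratios; for a protective factor, the risk reduction is approximately 1 minus the HR (a standard reading of such estimates, not a separate study result):

$$1 - 0.43 = 0.57 \;(57\%), \qquad 1 - 0.61 = 0.39 \;(39\%)$$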
“Our analysis revealed a protective trend of living with dogs that transcends the age of exposure, suggesting that dog ownership could confer health benefits in reducing the risk of CD,” the investigators wrote. “Our study also found that living in a large family during the first year of life is significantly associated with the future onset of CD, aligning with prior research that indicates that a larger family size in the first year of life can reduce the risk of developing IBD.”
In contrast, the study identified bird ownership at time of recruitment as a risk factor for CD, increasing risk almost three-fold (HR, 2.84; P = .005). The investigators urged a careful interpretation of this latter finding, however, as relatively few FDRs lived with birds.
“[A]lthough our sample size can be considered large, some environmental variables were uncommon, such as the participants having birds as pets, and would greatly benefit from replication of our findings in other cohorts,” Dr. Xue and colleagues noted.
They suggested several possible ways in which the above environmental factors may impact CD risk, including effects on subclinical inflammation, microbiome composition, and gut permeability.
“Understanding the relationship between CD-related environmental factors and these predisease biomarkers may shed light on the underlying mechanisms by which environmental factors impact host health and ultimately lead to CD onset,” the investigators concluded.
The study was supported by Crohn’s and Colitis Canada, Canadian Institutes of Health Research, the Helmsley Charitable Trust, and others. The investigators disclosed no conflicts of interest.
FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY
Is Buprenorphine/Naloxone Safer Than Buprenorphine Alone During Pregnancy?
TOPLINE:
Buprenorphine combined with naloxone during pregnancy is associated with lower risks for neonatal abstinence syndrome and neonatal intensive care unit admission than buprenorphine alone. The study also found no significant differences in major congenital malformations between the two treatments.
METHODOLOGY:
- Researchers conducted a population-based cohort study using healthcare utilization data of people who were insured by Medicaid between 2000 and 2018.
- A total of 8695 pregnant individuals were included, with 3369 exposed to buprenorphine/naloxone and 5326 exposed to buprenorphine alone during the first trimester.
- Outcome measures included major congenital malformations, low birth weight, neonatal abstinence syndrome, neonatal intensive care unit admission, preterm birth, respiratory symptoms, small for gestational age, cesarean delivery, and maternal morbidity.
- The study excluded pregnancies with chromosomal anomalies, first-trimester exposure to known teratogens, or methadone use during baseline or the first trimester.
TAKEAWAY:
- According to the authors, buprenorphine/naloxone exposure during pregnancy was associated with a lower risk for neonatal abstinence syndrome (weighted risk ratio [RR], 0.77; 95% CI, 0.70-0.84) than buprenorphine alone.
- The researchers found a modestly lower risk for neonatal intensive care unit admission (weighted RR, 0.91; 95% CI, 0.85-0.98) and a lower risk for small-for-gestational-age birth (weighted RR, 0.86; 95% CI, 0.75-0.98) in the buprenorphine/naloxone group (see the arithmetic sketch after this list).
- No significant differences were observed between the two groups in major congenital malformations, low birth weight, preterm birth, respiratory symptoms, or cesarean delivery.
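As a quick check on the effect sizes above, a weighted risk ratio (RR) below 1 can be read as a percent risk reduction. The following sketch is purely illustrative arithmetic, not the study's code, using the RRs reported in this summary.

```python
# Illustrative arithmetic, not study code: reading the weighted risk ratios
# (RRs) above as percent risk reductions for buprenorphine/naloxone
# relative to buprenorphine alone.

outcomes = {
    "neonatal abstinence syndrome": 0.77,
    "NICU admission": 0.91,
    "small for gestational age": 0.86,
}

for outcome, rr in outcomes.items():
    print(f"{outcome}: RR {rr:.2f} -> about {(1 - rr) * 100:.0f}% lower risk")
```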
IN PRACTICE:
“For the outcomes assessed, compared with buprenorphine alone, buprenorphine combined with naloxone during pregnancy appears to be a safe treatment option. This supports the view that both formulations are reasonable options for treatment of OUD in pregnancy, affirming flexibility in collaborative treatment decision-making,” the study authors wrote.
SOURCE:
The study was led by Loreen Straub, MD, MS, of the Division of Pharmacoepidemiology and Pharmacoeconomics at Brigham and Women’s Hospital and Harvard Medical School in Boston. It was published online in JAMA.
LIMITATIONS:
Some potential confounders, such as alcohol use and cigarette smoking, may not have been recorded in claims data. The findings for many of the neonatal and maternal outcomes suggest that confounding by unmeasured factors is an unlikely explanation for the associations observed. Individuals identified as exposed based on filled prescriptions might not have used the medication. The study used outcome algorithms with relatively high positive predictive values to minimize outcome misclassification. The cohort was restricted to live births to enable linkage to infants and to assess neonatal outcomes.
DISCLOSURES:
Various authors reported receiving grants and personal fees from the Eunice Kennedy Shriver National Institute of Child Health and Human Development, the National Institute on Drug Abuse, Roche, Moderna, Takeda, and Janssen Global, among others.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
E-Bikes: The Good ... and the Ugly
Bicycles have been woven into my life since I first straddled a hand-me-down with a fan belt drive when I was 3. At age 12, my friend Ricky and I took a 250-mile-plus, 2-night adventure on our 3-speed “English”-style bikes. We still marvel that our parents let us do it when neither cell phones nor GPS existed.
I have always bike commuted to work, including the years when that involved a perilous navigation into Boston from the suburbs. In our mid-50s my wife and I biked from Washington state back here to Maine with another couple unsupported. We continue to do at least one self-guided cycle tour out of the country each year.
Not surprisingly, I keep a close eye on what’s happening in the bicycle market. For decades the trends have shifted back and forth between sleek road models and beefier off-roaders. There have been boom years here and there for the dealers and manufacturers, but nothing like what the bike industry is experiencing now with the arrival of e-bikes on the market. Driven primarily by electrification, micromobility ridership (which includes conventional bikes and scooters) has grown more than 50-fold over the last 10 years. Projections suggest the market’s value will be $300 billion by 2030.
It doesn’t take an MBA with a major in marketing to understand the broad appeal of electrification. Most adults rode a bicycle as children, but several decades of gap years have left many of them with a level of fitness that makes pedaling against the wind or up any incline difficult and unappealing. An e-bike can put even the least fitness-conscious back in the saddle and open up options for outdoor recreation they haven’t dreamed of since childhood.
In large part, the people flocking to e-bikes are retirees who thought they were “over the hill.” They are having so much fun they don’t care if the Lycra-clad “serious” cyclists notice the battery bulge in the frames of their e-bikes. Another group of e-bike adopters is motivated by the “greenness” of fossil-fuel-free, electric-powered transportation that, with minimal compromise, can be used as they would use a car around town and for longer commutes than they would have considered on a purely pedal-powered bicycle.
Unfortunately, there is a growing group of younger e-bike riders who are motivated, and uninhibited, by the power boost a small electric motor can provide. And here is where the ugliness begins to intrude on what was otherwise a beautiful and expanding landscape. It is the young who are, not surprisingly, drawn to the speed, and with any vehicle – motorized or conventional – as speed increases so do the frequency and seriousness of accidents.
The term e-bike covers a broad range of vehicles, from those designated class 1, which require pedaling and are limited to 20 miles per hour, to class 3, which may have a throttle and, unmodified, can hit 28 mph. Class 2 bikes have a throttle that will allow the rider to reach 20 mph without pedaling. Modifying any class of e-bike can substantially increase its speed, but this is more common in classes 2 and 3. As an example, some very fast micromobiles are considered unclassified e-bikes and avoid being labeled motorcycles simply because they have pedals.
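For concreteness, the three-class system described above can be encoded as a simple decision rule. This is a rough sketch based solely on the definitions in this column, not a legal or regulatory reference; the type and field names are hypothetical.

```python
# A minimal sketch (not a legal reference) of the three-class e-bike system
# as described above. Thresholds and field names are illustrative.

from dataclasses import dataclass

@dataclass
class EBike:
    has_throttle: bool
    assisted_top_speed_mph: float  # maximum speed with motor assistance

def classify(bike: EBike) -> str:
    if not bike.has_throttle and bike.assisted_top_speed_mph <= 20:
        return "class 1"  # pedal-assist only, capped at 20 mph
    if bike.has_throttle and bike.assisted_top_speed_mph <= 20:
        return "class 2"  # throttle allowed, capped at 20 mph
    if bike.assisted_top_speed_mph <= 28:
        return "class 3"  # capped at 28 mph; may have a throttle
    return "unclassified"  # faster micromobiles fall outside the system

print(classify(EBike(has_throttle=True, assisted_top_speed_mph=20)))   # class 2
print(classify(EBike(has_throttle=False, assisted_top_speed_mph=28)))  # class 3
```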
One has to give some credit to the e-bike industry for eventually adopting this classification system. But we must give the rest of us, including parents and public safety officials, a failing grade for doing such a poor job of translating these classes into enforceable regulations to protect both riders and pedestrians from serious injury.
On the governmental side, only a little more than half of US states have used the three-category classification to craft their regulations. Many jurisdictions have failed to differentiate between streets, sidewalks, and trails. Regulations vary from state to state, and many states leave it up to local communities. From my experience chairing our town’s Bicycle and Pedestrian Advisory Committee, I can tell you that even “progressive” communities are struggling to decide who can ride what where. The result has been that people of all ages, but mostly adolescents, are traveling on busy streets and sidewalks at speeds that put themselves and pedestrians at risk.
On the parental side of the problem are families that have either allowed or enabled their children to ride class 2 and 3 e-bikes without proper safety equipment or consideration for the safety of the rest of the community. Currently, this is not much of a problem here in Maine thanks to the weather and the high price of e-bikes. However, I frequently visit an affluent community in the San Francisco Bay Area, where it is not uncommon to see middle school children speeding along well in excess of 20 mph.
Unfortunately, this is another example, like television and cell phones, in which our society has been unable to keep up with technology by molding the behavior of our children and/or creating enforceable rules that allow us to reap the benefits of new discoveries while minimizing the collateral damage that can accompany them.
Dr. Wilkoff practiced primary care pediatrics in Brunswick, Maine, for nearly 40 years. He has authored several books on behavioral pediatrics, including “How to Say No to Your Toddler.” Other than a Littmann stethoscope he accepted as a first-year medical student in 1966, Dr. Wilkoff reports having nothing to disclose. Email him at [email protected].
A Racing Heart Signals Trouble in Chronic Kidney Disease
TOPLINE:
A higher resting heart rate, even within the normal range, is linked to an increased risk for mortality and cardiovascular events in patients with non–dialysis-dependent chronic kidney disease (CKD).
METHODOLOGY:
- An elevated resting heart rate is an independent risk factor for all-cause mortality and cardiovascular events in the general population; however, the correlation between heart rate and mortality in patients with CKD is unclear.
- Researchers analyzed the longitudinal data of patients with non–dialysis-dependent CKD enrolled in the Fukushima CKD Cohort Study to investigate the association between resting heart rate and adverse clinical outcomes.
- The patient cohort was stratified into four groups on the basis of resting heart rate: < 70, 70-79, 80-89, and ≥ 90 beats/min (encoded in the sketch after this list).
- The primary and secondary outcomes were all-cause mortality and cardiovascular events, respectively, the latter category including myocardial infarction, angina pectoris, and heart failure.
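As a concrete illustration of the stratification described in the list above, here is a minimal sketch of the four heart rate bins. It is illustrative only and not the study's actual code.

```python
# Illustrative sketch of the four resting-heart-rate strata (beats/min)
# used in the analysis described above. Not the study's actual code.

def heart_rate_stratum(bpm: float) -> str:
    if bpm < 70:
        return "< 70"
    if bpm < 80:
        return "70-79"
    if bpm < 90:
        return "80-89"
    return ">= 90"

# The cohort's median heart rate of 76 beats/min falls in the 70-79 stratum.
print(heart_rate_stratum(76))  # 70-79
```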
TAKEAWAY:
- Researchers enrolled 1353 patients with non–dialysis-dependent CKD (median age, 65 years; 56.7% men; median estimated glomerular filtration rate, 52.2 mL/min/1.73 m²) who had a median heart rate of 76 beats/min.
- During the median observation period of 4.9 years, 123 patients died and 163 developed cardiovascular events.
- Compared with patients with a resting heart rate < 70 beats/min, those with resting heart rates of 80-89 and ≥ 90 beats/min had adjusted hazard ratios for all-cause mortality of 1.74 and 2.61, respectively.
- Similarly, the risk for cardiovascular events was higher in patients with a heart rate of 80-89 beats/min than in those with a heart rate < 70 beats/min (adjusted hazard ratio, 1.70).
IN PRACTICE:
“The present study supported the idea that reducing heart rate might be effective for CKD patients with a heart rate ≥ 70/min, since the lowest risk of mortality was seen in patients with heart rate < 70/min,” the authors concluded.
SOURCE:
This study was led by Hirotaka Saito, Department of Nephrology and Hypertension, Fukushima Medical University, Fukushima City, Japan. It was published online in Scientific Reports.
LIMITATIONS:
Heart rate was measured using a standard sphygmomanometer or an automated device, rather than an electrocardiograph, which may have introduced measurement variability. The observational nature of the study precluded the establishment of cause-and-effect relationships between heart rate and clinical outcomes. Additionally, variables such as lifestyle factors, underlying health conditions, and socioeconomic factors were not measured, which could have affected the results.
DISCLOSURES:
Some authors received research funding from Chugai Pharmaceutical, Kowa Pharmaceutical, Ono Pharmaceutical, and other sources. They declared having no competing interests.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
PCOS Increases Eating Disorder Risk
TOPLINE:
Women with polycystic ovary syndrome (PCOS) have higher odds of some eating disorders, including bulimia nervosa, binge eating disorder, and disordered eating, regardless of weight.
METHODOLOGY:
- An earlier, small systematic review and meta-analysis showed increased odds of any eating disorder and higher disordered eating scores in adult women with PCOS compared with women without PCOS.
- As part of the 2023 update of the International Evidence-based Guideline for the Assessment of and Management of PCOS, the same researchers updated and expanded their analysis to include adolescents and specific eating disorders and to evaluate the effect of body mass index (BMI) on these risks.
- They included 20 cross-sectional studies involving 28,922 women with PCOS and 258,619 women without PCOS; PCOS was diagnosed by either National Institutes of Health or Rotterdam criteria, as well as by patient self-report or hospital records.
- Eating disorders were screened using a validated disordered eating screening tool or diagnostic criteria from the Diagnostic and Statistical Manual of Mental Disorders.
- The outcomes of interest included the prevalence of any eating disorder, individual eating disorders, disordered eating, and mean disordered eating scores.
TAKEAWAY:
- Women with PCOS had 53% higher odds (odds ratio [OR], 1.53; 95% CI, 1.29-1.82; eight studies) of any eating disorder than control individuals without PCOS (see the worked example after this list).
- The likelihood of bulimia nervosa (OR, 1.34; 95% CI, 1.17-1.54; five studies) and binge eating disorder (OR, 2.09; 95% CI, 1.18-3.72; four studies) was higher in women with PCOS, but no significant association was found for anorexia nervosa.
- The mean disordered eating scores and odds of disordered eating were higher in women with PCOS (standardized mean difference [SMD], 0.52; 95% CI, 0.28-0.77; 13 studies; and OR, 2.84; 95% CI, 1.0-8.04; eight studies; respectively).
- Disordered eating scores were higher in both the normal and higher weight categories (BMI < 25: SMD, 0.36; 95% CI, 0.15-0.58; five studies; BMI ≥ 25: SMD, 0.68; 95% CI, 0.22-1.13; four studies).
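To make the headline figure concrete, the "53% higher odds" phrasing above is the odds ratio (OR) re-expressed as a percent increase by simple arithmetic. The sketch below is illustrative only; note that an OR describes odds, not risk, so it is not interchangeable with a risk ratio.

```python
# Illustrative arithmetic, not study code: re-expressing an odds ratio (OR)
# as a percent increase in odds. An OR describes odds, not risk.

def or_to_percent_higher_odds(odds_ratio: float) -> float:
    return (odds_ratio - 1) * 100

print(or_to_percent_higher_odds(1.53))  # 53.0 -> "53% higher odds" of any eating disorder
print(or_to_percent_higher_odds(2.09))  # 109.0 -> roughly double the odds of binge eating disorder
```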
IN PRACTICE:
“Our findings emphasize the importance of screening women with PCOS for eating disorders before clinicians share any lifestyle advice,” the lead author said in a press release. “The lifestyle modifications we often recommend for women with PCOS — including physical activity, healthy diet, and behavior modifications — could hinder the recovery process for eating disorders.”
SOURCE:
The study was led by Laura G. Cooney, MD, MSCE, University of Wisconsin, Madison, and published online in the Journal of Clinical Endocrinology & Metabolism.
LIMITATIONS:
The included studies were observational in nature, limiting the ability to adjust for potential confounders. The cross-sectional design of the included studies precluded determining whether the diagnosis of PCOS or the symptoms of disordered eating occurred first. Studies from 10 countries were included, but limited data from developing or Asian countries restrict the generalizability of the results.
DISCLOSURES:
This study was conducted to inform recommendations of the 2023 International Evidence-based Guideline in PCOS, which was funded by the Australian National Health and Medical Research Council, Centre for Research Excellence in Polycystic Ovary Syndrome, and other sources. The authors declared no conflicts of interest.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
Viral Season 2024-2025: Try for An Ounce of Prevention
We are quickly approaching the typical cold and flu season. But can we call anything typical since 2020? Recommendations for prevention, testing, return to work, and treatment have shifted repeatedly since the pandemic rocked our world. Now that we are in the “post-pandemic” era, family physicians and other primary care professionals are the front line for discussions on prevention, evaluation, and treatment of the typical upper respiratory infections, influenza, and COVID-19.
Let’s start with prevention. We have all heard the old adage: an ounce of prevention is worth a pound of cure. In primary care, we need to focus on prevention. Vaccination is often one of our best tools against the myriad infections we hope to help patients avoid during cold and flu season. Most recently, we have fall vaccinations aimed at preventing COVID-19, influenza, and respiratory syncytial virus (RSV).
Recommendations for the number and timing of each of these vaccinations differ based on a variety of factors, including age, pregnancy status, and whether or not the patient is immunocompromised. For the 2024-2025 season, the Centers for Disease Control and Prevention (CDC) has recommended updated vaccines for both influenza and COVID-19.1
They have also updated the RSV vaccine recommendations to “People 75 or older, or between 60-74 with certain chronic health conditions or living in a nursing home should get one dose of the RSV vaccine to provide an extra layer of protection.”2
In addition to vaccines as prevention, there is also hygiene, staying home when sick and away from others who are sick, following guidelines for where and when to wear a face mask, and the general tools of eating well and getting sufficient sleep and exercise to help maintain the healthiest immune system.
Despite the best of intentions, many people will still experience viral infections this season. The CDC currently recommends that persons stay away from others for at least 24 hours after their symptoms improve and they are fever-free without antipyretics. In addition to isolation while sick, general symptom management is something we can recommend for all of these illnesses.
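For illustration, the isolation guidance as paraphrased above reduces to a two-condition rule. The sketch below simply encodes that paraphrase; the function and field names are hypothetical, and this is not clinical decision software.

```python
# A minimal sketch encoding the isolation guidance as stated above: stay away
# from others until at least 24 hours after symptoms improve AND the patient
# is fever-free without antipyretics. Names are hypothetical; illustrative only.

def may_end_isolation(hours_since_symptom_improvement: float,
                      fever_free_without_antipyretics: bool) -> bool:
    return hours_since_symptom_improvement >= 24 and fever_free_without_antipyretics

print(may_end_isolation(30, True))   # True
print(may_end_isolation(30, False))  # False: still febrile or relying on antipyretics
```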
There is more to consider, though, as our patients face these illnesses. The first question is how to determine the diagnosis — and whether that diagnosis is even necessary. Unfortunately, many of these viral illnesses can look the same. They can all cause fevers, chills, and other upper respiratory symptoms. They are all fairly contagious. All of these viruses can cause serious illness with additional complications. It is not truly possible to determine which virus someone has by symptoms alone; our patients can have multiple viruses at the same time, and diagnosis of one does not preclude having another.3
Instead, we truly do need a test for diagnosis. In-office testing is available for RSV, influenza, and COVID-19. Additionally, although home COVID tests are not as freely available as they were during the pandemic, patients can still test at home and call in with their results. At the time of this writing, at-home rapid influenza tests have also been approved by the FDA but are not yet readily available to the public. These tests are important for determining whether the patient is eligible for treatment. Both influenza and COVID-19 have antiviral treatments available to help decrease the severity of the illness and potentially the length of illness and time contagious. According to the CDC, both treatments are underutilized.
This could be because of a lack of testing and diagnosis. It may also be because of a lack of familiarity with the available treatments.4,5
Influenza treatment is recommended as soon as possible for those with a suspected or confirmed diagnosis: immediately for anyone hospitalized, anyone with severe, complicated, or progressing illness, and anyone at high risk of severe illness, including but not limited to those under 2 years old, those over 65, those who are pregnant, and those with many chronic conditions.
Treatment can also be used for those who are not high risk when they are diagnosed within 48 hours. In the United States, four antivirals are recommended to treat influenza: oseltamivir phosphate, zanamivir, peramivir, and baloxavir marboxil. For COVID-19, treatments are also available for mild or moderate disease in those at risk for severe disease. Both remdesivir and nirmatrelvir with ritonavir are treatment options that can be used for COVID-19 infection. Unfortunately, no specific antiviral is available for the other viral illnesses we see often during this season.
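The influenza treatment logic described in the preceding two paragraphs can likewise be sketched as a simple decision rule. This encodes only the criteria as paraphrased here (high-risk, hospitalized, or severe illness: treat as soon as possible; otherwise: within 48 hours of onset); the names and fields are hypothetical, and it is not clinical guidance.

```python
# Illustrative sketch of the influenza treatment logic described above:
# treat hospitalized, severe/progressing, or high-risk patients as soon as
# possible, and otherwise-healthy patients only within 48 hours of onset.
# Criteria are paraphrased from the text; names are hypothetical.

def influenza_antiviral_recommended(hospitalized: bool,
                                    severe_or_progressing: bool,
                                    high_risk: bool,
                                    hours_since_onset: float) -> bool:
    if hospitalized or severe_or_progressing or high_risk:
        return True  # treat as soon as possible, regardless of onset timing
    return hours_since_onset <= 48  # otherwise, only with early diagnosis

print(influenza_antiviral_recommended(False, False, True, 72))   # True: high-risk patient
print(influenza_antiviral_recommended(False, False, False, 36))  # True: diagnosed within 48 h
print(influenza_antiviral_recommended(False, False, False, 72))  # False: too late, not high risk
```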
In primary care, we have some important roles to play. We need to continue to discuss all methods of prevention. Not only do vaccine recommendations change at least annually, our patients’ situations change and we have to reassess them. Additionally, people often need to hear things more than once before committing — so it never hurts to continue having those conversations. Combining the conversation about vaccines with other prevention measures is also important so that it does not seem like we are only recommending one thing. We should also start talking about treatment options before our patients are sick. We can communicate what is available as long as they let us know they are sick early. We can also be there to help our patients determine when they are at risk for severe illness and when they should consider a higher level of care.
The availability of home testing gives us the opportunity to provide these treatments via telehealth and even, at times when these illnesses are everywhere, through standing orders with our clinical teams. Although it is a busy time for us in the clinic, “cold and flu” season is definitely one of those times when our primary care relationship can truly help our patients.
References
1. Centers for Disease Control and Prevention. CDC Recommends Updated 2024-2025 COVID-19 and Flu Vaccines for Fall/Winter Virus Season. https://www.cdc.gov/media/releases/2024/s-t0627-vaccine-recommendations.html. Accessed August 8, 2024.
2. Centers for Disease Control and Prevention. CDC Updates RSV Vaccination Recommendation for Adults. https://www.cdc.gov/media/releases/2024/s-0626-vaccination-adults.html. Accessed August 8, 2024.
3. Centers for Disease Control and Prevention, National Center for Immunization and Respiratory Diseases. Similarities and Differences between Flu and COVID-19. https://www.cdc.gov/flu/symptoms/flu-vs-covid19.htm. Accessed August 8, 2024.
4. National Center for Immunization and Respiratory Diseases. Respiratory Virus Guidance. https://www.cdc.gov/respiratory-viruses/guidance/index.html. Accessed August 9, 2024.
5. Centers for Disease Control and Prevention. Provider Toolkit: Preparing Patients for the Fall and Winter Virus Season. https://www.cdc.gov/respiratory-viruses/hcp/tools-resources/index.html. Accessed August 9, 2024.