High-Dose Atropine Curbs Myopia in Kids Despite Side Effects
TOPLINE:
Eye drops containing 0.05% atropine slowed myopia progression and axial elongation over 1 year in children, even after a 2-year delay in starting treatment, despite more frequent side effects such as blurred near vision and photophobia.
METHODOLOGY:
- Researchers conducted a secondary analysis of the 3-year results of the MOSAIC trial to investigate the efficacy and safety of different atropine regimens in treatment-naive children aged 6-16 years with a spherical equivalent ≤ −0.50 diopters (D); spherical equivalent is defined in the note after this list.
- They analyzed data of 199 children in Europe with myopia (mean age, 13.9 years; 60.8% girls) who were randomly assigned to either group 1 (nightly placebo for 2 years followed by 0.05% atropine eye drops for 1 year; n = 66) or group 2 (nightly 0.01% atropine eye drops for 2 years followed by another random assignment to nightly placebo, tapering placebo, or tapering of 0.01% atropine eye drops for 1 year; n = 133).
- The nightly and tapered placebo groups were combined into a single treatment group for analysis.
- The primary outcome measures included observed changes in the progression of myopia, assessed using cycloplegic spherical equivalent refraction and axial length from month 24 to month 36.
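For reference, spherical equivalent, the refractive metric used for eligibility and for the progression outcome, is conventionally computed from a spectacle prescription as the sphere plus half the cylinder; the worked example below is illustrative and not taken from the trial data.

```latex
% Spherical equivalent (SE) from a spectacle prescription (standard convention)
\[
\mathrm{SE} = \text{sphere} + \frac{\text{cylinder}}{2}
\]
% Illustrative example (not a trial measurement): a prescription of
% -1.00 DS / -0.50 DC gives SE = -1.00 + (-0.50/2) = -1.25 D,
% which satisfies the inclusion criterion of SE <= -0.50 D.
```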
TAKEAWAY:
- Children in the 0.01% atropine then placebo groups showed greater spherical equivalent progression (adjusted difference, –0.13 D; P = .01) and axial elongation (adjusted difference, 0.06 mm; P = .008) than those in the placebo then 0.05% atropine group.
- Children in the placebo then 0.05% atropine group also experienced less axial elongation (P = .04) than those in the 0.01% atropine then tapering 0.01% atropine group.
- Among participants using 0.05% atropine, 15% reported blurred near vision and 8% reported photophobia, whereas 3% reported blurred near vision and 0% reported photophobia in the 0.01% atropine then tapering 0.01% atropine group.
- Despite adverse events, no participants in the placebo then 0.05% atropine group discontinued treatment; 92% completed the 36-month visit, and 81% adhered to the treatment regimen.
IN PRACTICE:
“Recognizing a 2-year delay in treatment initiation in the group of children originally assigned to placebo, 0.05% atropine eyedrops slowed both myopia progression and axial eye growth over the course of a 1-year period,” the authors of the study wrote.
SOURCE:
This study was led by James Loughman, PhD, of the Centre for Eye Research Ireland, Dublin. It was published online in JAMA Ophthalmology.
LIMITATIONS:
Limitations included smaller sample sizes across treatment groups in year 3 and potential carry-over effects for participants transitioning from 0.01% atropine to placebo or tapered dosing. Because the study lacked an untreated control group, rebound myopia progression could be measured based only on the expected third-year results from the 0.01% atropine then placebo groups. The age of participants during the third year may have affected the ability to detect rebound progression.
DISCLOSURES:
This study was supported partly by a grant from the Health Research Board; Fighting Blindness, Ireland; and Vyluma. Some authors reported receiving grants, nonfinancial support, or consultant fees or having several other ties with Vyluma and other sources.
This article was created using several editorial tools, including artificial intelligence, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.
Intensive BP Control May Benefit CKD Patients in Real World
TOPLINE:
The cardiovascular benefits observed with intensive blood pressure (BP) control in patients with hypertension and elevated cardiovascular risk from the Systolic Blood Pressure Intervention Trial (SPRINT) can be largely replicated in real-world settings among patients with chronic kidney disease (CKD), highlighting the advantages of adopting intensive BP targets.
METHODOLOGY:
- Researchers conducted a comparative effectiveness study to determine if the beneficial and adverse effects of intensive vs standard BP control observed in SPRINT were replicable in patients with CKD and hypertension in clinical practice.
- They identified 85,938 patients (mean age, 75.7 years; 95.0% men) and 13,983 patients (mean age, 77.4 years; 38.4% men) from the Veterans Health Administration (VHA) and Kaiser Permanente of Southern California (KPSC) databases, respectively.
- The treatment effect was estimated by combining baseline covariate, treatment, and outcome data of SPRINT participants with covariate data from the VHA and KPSC databases; a generic sketch of this kind of transportability weighting follows the list.
- The primary outcomes included major cardiovascular events, all-cause death, cognitive impairment, CKD progression, and adverse events at 4 years.
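To make the transportability idea concrete, here is a minimal, hypothetical sketch of one common estimator, inverse-odds-of-trial-membership weighting, run on simulated data. The variable names, simulated covariates, and logistic-regression model are assumptions for illustration only and do not reproduce the study's actual analysis.

```python
# Minimal sketch of a transportability estimator (inverse-odds weighting).
# Illustrative only: simulated data, not the study's actual method or results.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated data: trial participants (S=1) and target-population patients (S=0)
n_trial, n_target = 2000, 5000
X_trial = rng.normal(loc=0.0, size=(n_trial, 3))    # baseline covariates in the trial
X_target = rng.normal(loc=0.5, size=(n_target, 3))  # older/sicker target population
A = rng.integers(0, 2, n_trial)                      # randomized treatment in the trial
Y = 1.0 - 0.3 * A + X_trial[:, 0] + rng.normal(size=n_trial)  # simulated trial outcomes

# 1. Model the probability of trial membership given covariates
X_all = np.vstack([X_trial, X_target])
S = np.concatenate([np.ones(n_trial), np.zeros(n_target)])
membership_model = LogisticRegression().fit(X_all, S)
p_trial = membership_model.predict_proba(X_trial)[:, 1]

# 2. Inverse-odds weights re-weight trial participants to resemble the target population
w = (1 - p_trial) / p_trial

# 3. Weighted difference in mean outcomes estimates the treatment effect in the target
effect = (np.average(Y[A == 1], weights=w[A == 1])
          - np.average(Y[A == 0], weights=w[A == 0]))
print(f"Transported treatment effect estimate: {effect:.3f}")
```

In this toy setup the simulated true effect is −0.3, so the weighted estimate should land near that value; the point of the weighting step is that trial participants who look most like the target population count more toward the transported estimate.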
TAKEAWAY:
- Compared with SPRINT participants, those in the VHA and KPSC databases were older, had a lower prevalence of cardiovascular disease and higher albuminuria, and used statins more frequently.
- The benefits of intensive vs standard BP control on major cardiovascular events, all-cause mortality, and certain adverse events (hypotension, syncope, bradycardia, acute kidney injury, and electrolyte abnormality) were transferable from the trial to the VHA and KPSC populations.
- The treatment effect of intensive BP management on CKD progression was transportable to the KPSC population but not to the VHA population. However, the trial’s impact on cognitive outcomes, such as dementia, was not transportable to either the VHA or KPSC populations.
- On the absolute scale, intensive vs standard BP treatment showed greater cardiovascular benefits and fewer safety concerns in the VHA and KPSC populations than in SPRINT.
IN PRACTICE:
“This example highlights the potential for transportability methods to provide insights that can bridge evidence gaps and inform the application of novel therapies to patients with CKD who are treated in everyday practice,” the authors wrote.
SOURCE:
This study was led by Manjula Kurella Tamura, MD, MPH, Division of Nephrology, Department of Medicine, Stanford University School of Medicine, Palo Alto, California. It was published online on January 7, 2025, in JAMA Network Open.
LIMITATIONS:
Transportability analyses could not account for characteristics that were not well-documented in electronic health records, such as limited life expectancy. The study was conducted before the widespread use of sodium-glucose cotransporter 2 inhibitors, glucagon-like peptide 1 receptor agonists, and nonsteroidal mineralocorticoid receptor antagonists, making it unclear whether intensive BP treatment would result in similar benefits with current pharmacotherapy regimens. Eligibility for this study was based on BP measurements in routine practice, which tend to be more variable than those collected in research settings.
DISCLOSURES:
This study was supported by grants from the National Institutes of Health. Some authors disclosed serving as a consultant and receiving grants, personal fees, and consulting fees from pharmaceutical companies and other sources.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.
Early Patching Benefits Kids Born With Cataract in One Eye
TOPLINE:
Consistent patching of the fellow eye during the first year after unilateral cataract surgery in infancy is associated with better long-term visual acuity in the treated eye, particularly when the patch is applied in the morning or at regular times every day.
METHODOLOGY:
- Researchers conducted a post hoc analysis of the Infant Aphakia Treatment Study to examine the association between the reported consistency in patching during the first year after unilateral cataract surgery and visual acuity.
- They included data from 101 children whose caregivers completed 7-day patching diaries at 2 months after surgery or at age 13 months.
- The treatment protocol required caregivers to have their child wear a patch over the fellow eye for 1 hour daily from the second week after cataract surgery until age 8 months, followed by patching for 50% of waking hours until age 5 years.
- Consistent patching was defined as daily patching with an average start time before 9 AM or an interquartile range of first application times of 60 minutes or less (a sketch of this rule follows the list).
- Visual acuity in the treated eye was the primary outcome, assessed at ages 54 ± 1 months and 10.5 years; participants with a visual acuity of 20/40 or better were considered to have near-normal vision.
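As a minimal sketch of the consistency rule described above, the snippet below applies the study's two thresholds (mean start before 9 AM, or an interquartile range of start times of 60 minutes or less) to one hypothetical 7-day diary; the helper function and diary values are illustrative assumptions, not study data.

```python
# Illustrative check of the "consistent patching" definition on a hypothetical diary.
# Assumes the patch was applied on every diary day; only the timing criteria are coded here.
from datetime import time
import numpy as np

def is_consistent(start_times: list[time]) -> bool:
    """Consistent if mean start time is before 9 AM, or the IQR of start times is <= 60 minutes."""
    minutes = np.array([t.hour * 60 + t.minute for t in start_times])
    mean_before_9am = minutes.mean() < 9 * 60
    q75, q25 = np.percentile(minutes, [75, 25])
    tight_iqr = (q75 - q25) <= 60
    return mean_before_9am or tight_iqr

# Hypothetical 7-day diary of first patch-application times
diary = [time(8, 15), time(8, 40), time(9, 5), time(8, 30),
         time(8, 50), time(8, 20), time(9, 10)]
print(is_consistent(diary))  # True: the mean start time falls before 9 AM
```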
TAKEAWAY:
- Children whose caregivers reported consistent patching patterns demonstrated better average visual acuity at age 54 months than those whose caregivers reported inconsistent patching patterns (mean difference in logMAR visual acuity, 0.55; 95% CI, 0.22-0.87), with results also favoring consistent patching at age 10.5 years; the note after this list explains the logMAR scale.
- Data from the diary completed at age 13 months showed that children whose caregivers reported patching before 9 AM or around the same time daily were more likely to achieve near-normal vision at ages 54 ± 1 months and 10.5 years (relative risk, 3.55 [95% CI, 1.61-7.80] and 2.31 [95% CI, 1.12-4.78], respectively) than those whose caregivers did not report such behavior.
- Children whose caregivers reported consistent vs inconsistent patching patterns achieved more average daily hours of patching both during the first year (4.82 h vs 3.50 h) and between ages 12 and 48 months (4.96 h vs 3.03 h).
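As background on the acuity scale used above (a general optometric convention, not study-specific data): logMAR is the base-10 logarithm of the minimum angle of resolution, so 20/40 Snellen acuity corresponds to logMAR 0.30, and each 0.1 logMAR step is one line on a standard chart, making a 0.55 logMAR difference roughly 5.5 chart lines.

```latex
\[
\mathrm{logMAR} = \log_{10}(\mathrm{MAR}), \qquad
20/40 \;\Rightarrow\; \mathrm{MAR} = \tfrac{40}{20} = 2 \;\Rightarrow\; \mathrm{logMAR} = \log_{10} 2 \approx 0.30
\]
```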
IN PRACTICE:
“This information can be used by healthcare providers to motivate caregivers to develop consistent patching habits. Further, providers can present caregivers with simple advice: Apply the patch every day either first thing in the morning or about the same time every day,” the authors of the study wrote.
SOURCE:
The study was led by Carolyn Drews-Botsch, PhD, MPH, of the Department of Global and Community Health at George Mason University, in Fairfax, Virginia. It was published online in Ophthalmology.
LIMITATIONS:
The diaries covered only 14 days of the first year after surgery, which may not have fully represented patching patterns during other periods. The researchers noted that establishing a patching routine was particularly challenging for infants younger than 5 months at the time of the first diary completion, as these infants may not yet have established regular sleep and feeding routines. Parents who participated in this trial may have differed from those in routine practice, potentially limiting the generalizability of the findings to general clinical populations.
DISCLOSURES:
This study was supported by the following grants: 1 R21 EY032152, 2 UG1 EY031287, 5 U10 EY013287, 5 UG1 EY02553, and 7 UG1 EY013272. The authors declared having no conflicts of interest.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
COVID-19 Takes a Greater Toll on Kidneys Than Pneumonia
TOPLINE:
Kidney function declines more after COVID-19 than after pneumonia. This decline, measured by the estimated glomerular filtration rate (eGFR), is particularly steep among individuals who require hospitalization for COVID-19.
METHODOLOGY:
- SARS-CoV-2, the virus that causes COVID-19, has been associated with acute kidney injury, but its potential impact on long-term kidney function remains unclear.
- Researchers investigated the decline in kidney function after COVID-19 vs pneumonia by including all hospitalized and nonhospitalized adults from the Stockholm Creatinine Measurements Project who had at least one eGFR measurement in the 2 years before a positive COVID-19 test result or pneumonia diagnosis.
- Overall, 134,565 individuals (median age, 51 years; 55.6% women) who had their first SARS-CoV-2 infection between February 2020 and January 2022 were included, of whom 13.3% required hospitalization within 28 days of their first positive COVID-19 test result.
- They were compared with 35,987 patients (median age, 71 years; 53.8% women) who were diagnosed with pneumonia between February 2018 and January 2020; 46.5% of them required hospitalization.
- The primary outcome measure focused on the mean annual change in eGFR slopes before and after each infection; the secondary outcome assessed was the annual change in postinfection eGFR slopes between COVID-19 and pneumonia cases.
TAKEAWAY:
- Before COVID-19, eGFR changes were minimal, but after the infection, the average decline increased to 4.1 mL/min/1.73 m² (95% CI, 3.8-4.4); in the pneumonia cohort, by contrast, a decline in eGFR was noted both before and after the infection.
- After COVID-19, the mean annual decline in eGFR was 3.4% (95% CI, 3.2%-3.5%), increasing to 5.4% (95% CI, 5.2%-5.6%) for those who were hospitalized; the arithmetic sketch after this list shows how these rates compound over several years.
- In contrast, the pneumonia group experienced an average annual decline of 2.3% (95% CI, 2.1%-2.5%) after the infection, which remained unchanged when analyzing only patients who were hospitalized.
- The risk for a 25% reduction in eGFR was higher in patients with COVID-19 than in those with pneumonia (hazard ratio [HR], 1.19; 95% CI, 1.07-1.34), with the risk being even higher among those who required hospitalization (HR, 1.42; 95% CI, 1.22-1.64).
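To put those annual percentages in context, here is a minimal arithmetic sketch of how the reported decline rates compound over time. The starting eGFR of 90 mL/min/1.73 m² and the 5-year horizon are assumptions chosen for illustration, not figures from the study.

```python
# Illustrative compounding of the reported mean annual eGFR declines.
# Assumed starting value (90 mL/min/1.73 m^2) and 5-year horizon are not study figures.
starting_egfr = 90.0
years = 5

for label, annual_decline in [("All COVID-19", 0.034),
                              ("Hospitalized COVID-19", 0.054),
                              ("Pneumonia", 0.023)]:
    egfr = starting_egfr * (1 - annual_decline) ** years
    print(f"{label}: {egfr:.1f} mL/min/1.73 m^2 after {years} years")
```

Under these assumptions, a 3.4% annual decline leaves roughly 76 mL/min/1.73 m² after 5 years, whereas a 5.4% annual decline leaves roughly 68, illustrating why the steeper post-hospitalization slope matters clinically.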
IN PRACTICE:
“These findings help inform decisions regarding the need to monitor kidney function in survivors of COVID-19 and could have implications for policymakers regarding future healthcare planning and kidney service provision,” the authors wrote.
SOURCE:
This study was led by Viyaasan Mahalingasivam, MPhil, London School of Hygiene & Tropical Medicine, London, England. It was published online in JAMA Network Open.
LIMITATIONS:
This study lacked information on important confounders such as ethnicity and body mass index. The follow-up period was not long enough to fully evaluate the long-term association of COVID-19 with kidney function. Some individuals may have been misclassified as nonhospitalized if their first infection was mild and a subsequent infection required hospitalization.
DISCLOSURES:
This study was supported by grants from the National Institute for Health and Care Research, Njurfonden, Stig and Gunborg Westman Foundation, and the Swedish Research Council. One author reported receiving a Career Development Award from the National Institute for Health and Care Research, and another author reported receiving grants from Njurfonden, Stig and Gunborg Westman Foundation, Swedish Research Council, Swedish Heart Lung Foundation, and Region Stockholm during the conduct of the study.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.
Clinicians More Likely to Flag Black Kids’ Injuries as Abuse
TOPLINE:
Among children with traumatic injury, Black children are more likely than White children to be suspected of having experienced abuse. Younger patients and those from low socioeconomic backgrounds also face an increased likelihood of suspicion of child abuse (SCA).
METHODOLOGY:
- Researchers analyzed data on pediatric patients admitted to hospitals after sustaining a traumatic injury between 2006 and 2016 using the Kids’ Inpatient Database (KID) to investigate racial and ethnic disparities in cases in which SCA codes from the 9th and 10th editions of the International Classification of Diseases were used.
- The analysis included a weighted total of 634,309 pediatric patients with complete data, comprising 13,579 patients in the SCA subgroup and 620,730 in the non-SCA subgroup.
- Patient demographics, injury severity, and hospitalization characteristics were classified by race and ethnicity.
- The primary outcome was differences in racial and ethnic composition between the SCA and non-SCA groups, as well as compared with the overall US population using 2010 US Census data.
TAKEAWAY:
- Black patients had 75% higher odds of having an SCA code than White patients; White patients were relatively underrepresented in the SCA subgroup compared with their distribution in the US Census.
- After socioeconomic factors such as insurance type, household income based on zip code, and injury severity were controlled for, Black patients still had 10% higher odds of having an SCA code than White patients (odds ratio, 1.10; P = .004); the note after this list reviews how such odds ratios are read.
- Black patients in the SCA subgroup experienced a 26.5% (P < .001) longer hospital stay for mild to moderate injuries and a 40.1% (P < .001) longer stay for serious injuries compared with White patients.
- Patients in the SCA subgroup were significantly younger (mean, 1.70 years vs 9.70 years), were more likely to have Medicaid insurance (76.6% vs 42.0%), and had higher mortality rates (5.6% vs 1.0%) than those in the non-SCA subgroup; they were also more likely to come from lower socioeconomic backgrounds and present with more severe injuries.
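As a quick refresher on the metric above (general statistics, not study-specific data): an odds ratio compares the odds of an event between two groups, so an odds ratio of 1.75 corresponds to 75% higher odds and 1.10 to 10% higher odds; the attenuation from 1.75 to 1.10 after adjustment suggests that much of the unadjusted gap tracks with the socioeconomic and injury-severity covariates.

```latex
\[
\mathrm{odds} = \frac{p}{1-p}, \qquad
\mathrm{OR} = \frac{\mathrm{odds}_{\text{Black}}}{\mathrm{odds}_{\text{White}}}, \qquad
\mathrm{OR} = 1.75 \;\Rightarrow\; \text{odds are } 75\% \text{ higher}
\]
```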
IN PRACTICE:
“However, we can identify and appropriately respond to patients with potential child abuse in an equitable way by using clinical decision support tools, seeking clinical consultation of child abuse pediatricians, practicing cultural humility, and enhancing the education and training for health care professionals on child abuse recognition, response, and prevention,” Allison M. Jackson, MD, MPH, of the Child and Adolescent Protection Center at Children’s National Hospital, Washington, DC, wrote in an accompanying editorial.
SOURCE:
The study was led by Fereshteh Salimi-Jazi, MD, of Stanford University School of Medicine in California. It was published online on December 18, 2024, in JAMA Network Open.
LIMITATIONS:
The study relied on data from KID, which has limitations such as potential coding errors and the inability to follow patients over time. The database also combines race and ethnicity in a single field. The study included only hospitalized patients, which may not capture all cases in which clinicians suspected abuse.
DISCLOSURES:
This study was supported by a grant from the National Center for Advancing Translational Sciences of the National Institutes of Health. The authors reported no relevant conflicts of interest.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.
Total Intravenous Anesthesia Enables Earlier Facial Nerve Monitoring Than Sevoflurane in Ear Surgery
TOPLINE:
Total intravenous anesthesia (TIVA) enables earlier intraoperative monitoring of facial nerve activity than sevoflurane anesthesia during ear surgery, with reduced patient-ventilator dyssynchrony and fewer requirements for postoperative antiemetics.
METHODOLOGY:
- Researchers evaluated the difference in the timeliness of intraoperative monitoring of facial nerve activity during ear surgery with TIVA vs sevoflurane anesthesia.
- They included 98 patients aged 18-74 years undergoing ear surgery between November 2021 and November 2022; patients were randomly assigned to receive either TIVA or sevoflurane during the procedure. Of these, 92 were included in the final analysis.
- Neuromuscular function was monitored quantitatively throughout anesthesia with train-of-four counts and train-of-four ratios (both defined in the note after this list).
- The time from the administration of rocuronium to the start of facial nerve monitoring was recorded.
- The primary outcome was the recovery index, defined as the time interval between train-of-four ratios of 0.25 and 0.75; the key secondary outcome was the time from rocuronium administration to a train-of-four ratio of 0.25.
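As background on the neuromuscular monitoring metrics above (standard definitions, not study-specific values): the train-of-four count is the number of detectable twitches after a train of four stimuli, and the train-of-four ratio compares the amplitude of the fourth twitch to the first, with lower values indicating deeper blockade.

```latex
\[
\mathrm{TOF\ ratio} = \frac{T_4}{T_1}, \qquad
\text{recovery index} = t_{\,\mathrm{TOF\ ratio}=0.75} - t_{\,\mathrm{TOF\ ratio}=0.25}
\]
```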
TAKEAWAY:
- A train-of-four ratio of 0.25 was reached earlier with TIVA than with sevoflurane (34 minutes vs 51 minutes; P < .001).
- Patient-ventilator dyssynchrony occurred less frequently in the TIVA group than in the sevoflurane group (15% vs 39%; P = .01).
- Postoperative requests for antiemetics were less frequent in the TIVA group than in the sevoflurane group (2% vs 17%; P = .03).
IN PRACTICE:
“We suggest that TIVA may be a better choice than sevoflurane anesthesia to meet an earlier request” for intraoperative facial nerve monitoring by surgeons, the study authors wrote.
SOURCE:
The study was led by Yu Jeong Bang, MD, of the Department of Anesthesiology and Pain Medicine at Sungkyunkwan University School of Medicine, in Seoul, Republic of Korea. It was published online on November 27, 2024, in The Canadian Journal of Anesthesia.
LIMITATIONS:
Results may require careful interpretation when clinicians use balanced anesthesia, such as sevoflurane combined with opioid or nonopioid adjuvants. The feasibility of intraoperative facial nerve monitoring was decided by the surgeon during surgery, and the lowest stimulation intensity threshold for electromyography amplitude was not determined, as it was not the focus of this study. Although patients requiring intraoperative facial nerve monitoring during ear surgery were enrolled, some did not undergo the procedure based on the surgeon’s judgment.
DISCLOSURES:
This study did not receive any funding. The authors disclosed no relevant conflicts of interest.
This article was created using several editorial tools, including artificial intelligence, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
Diabetes Drugs and Eye Disease: These Protect, These Don’t
TOPLINE:
In patients with type 2 diabetes, treatment with glucagon-like peptide 1 receptor agonists (GLP-1 RAs) or fenofibrates is associated with a lower risk for diabetic macular edema (DME), whereas calcium channel blockers are associated with a higher risk and thiazolidinediones show no significant association.
METHODOLOGY:
- Researchers conducted a retrospective analysis of electronic medical records from the TriNetX health research network to evaluate how systemic medications, such as GLP-1 RAs, fenofibrates, thiazolidinediones, and calcium channel blockers, influence the risk of developing DME in patients with type 2 diabetes.
- They included patients with a 5-year history of type 2 diabetes and an absence of DME at baseline.
- The treatment group included patients who initiated treatment with calcium channel blockers (n = 107,193), GLP-1 RAs (n = 76,583), thiazolidinediones (n = 25,657), or fenofibrates (n = 18,606) after a diagnosis of diabetes. The control group received none of these medications within 1 year of being diagnosed with the condition.
- The researchers used propensity score matching to balance baseline characteristics and comorbidities between the treatment and control groups (a brief illustrative sketch of this approach follows this list).
- The primary outcome was the incidence of diagnoses of DME within a 2-year follow-up period after the initiation of systemic medications.
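As context for the matching step noted above, the sketch below (in Python, using scikit-learn) illustrates 1:1 nearest-neighbor propensity score matching; the column names, caliper, and matching-with-replacement simplification are assumptions for illustration, not details of the TriNetX analysis.

# Minimal sketch (illustrative): 1:1 nearest-neighbor propensity score matching
# to balance baseline covariates between a treated and an untreated cohort.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def match_cohorts(df, treated_col, covariates, caliper=0.2):
    # 1. Estimate propensity scores: probability of treatment given covariates.
    model = LogisticRegression(max_iter=1000).fit(df[covariates], df[treated_col])
    df = df.assign(ps=model.predict_proba(df[covariates])[:, 1])

    treated = df[df[treated_col] == 1]
    control = df[df[treated_col] == 0]

    # 2. For each treated patient, find the control with the closest propensity
    #    score (matching with replacement, for simplicity).
    nn = NearestNeighbors(n_neighbors=1).fit(control[["ps"]])
    dist, idx = nn.kneighbors(treated[["ps"]])

    # 3. Keep only pairs whose scores differ by less than the caliper.
    keep = dist.ravel() < caliper * df["ps"].std()
    matched_controls = control.iloc[idx.ravel()[keep]]
    return pd.concat([treated[keep], matched_controls])

In the study itself, matching balanced baseline characteristics and comorbidities before the incidence of DME was compared between groups; the sketch above is only a schematic of that general approach.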
TAKEAWAY:
- Patients treated with calcium channel blockers showed an increased risk for incident DME (hazard ratio [HR], 1.66; 95% CI, 1.54-1.78) compared with control individuals.
- Treatment with GLP-1 RAs was associated with a reduced risk for DME (HR, 0.77; 95% CI, 0.70-0.85), as was treatment with fenofibrates (HR, 0.83; 95% CI, 0.68-0.98).
- No significant difference in risk for DME was observed between patients taking thiazolidinediones and control individuals.
IN PRACTICE:
“We found a possible protective effect for GLP-1 RA medications and fenofibrate for DME and an adverse effect for calcium channel blockers with regard to the development of DME in patients” with type 2 diabetes, the authors wrote.
“Our preliminary data suggests a protective effect with regard to GLP-1 RA drugs and the development of DME. Clinical studies examining a potential therapeutic effect of GLP-1 RA drugs on DME do seem warranted. A single orally administered drug could conceivably lower blood sugar, reduce weight, offer cardiovascular protection, and treat DME” in patients with type 2 diabetes, they added.
SOURCE:
The study was led by Jawad Muayad, BS, of the School of Medicine at Texas A&M University, in Houston. It was published online on December 5, 2024, in Ophthalmology Retina.
LIMITATIONS:
The study was retrospective in nature. It relied on electronic medical records for the diagnosis of DME instead of directly assessing retinal images or measuring retinal thickness. Moreover, patients on certain medications may have been monitored more closely, potentially influencing the likelihood of a diagnosis of DME being recorded.
DISCLOSURES:
The study did not receive any funding support. One author disclosed receiving consulting fees from various institutions and pharmaceutical companies. The other authors reported no relevant conflicts of interest.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
Clopidogrel Tops Aspirin Post-PCI, Even in High-Risk Cases
TOPLINE:
The beneficial effect of clopidogrel monotherapy over aspirin monotherapy in patients who underwent percutaneous coronary intervention (PCI) and remained event free for 6-18 months on dual antiplatelet therapy (DAPT) is consistent, regardless of bleeding risk or PCI complexity, according to a post hoc analysis of the HOST-EXAM trial.
METHODOLOGY:
- The HOST-EXAM Extended study, conducted across 37 sites in South Korea, included patients who underwent PCI with drug-eluting stents and remained free of clinical events for 6-18 months post-PCI while receiving DAPT.
- This post hoc analysis of the HOST-EXAM Extended study compared the effectiveness of long-term daily clopidogrel (75 mg) with that of aspirin monotherapy (100 mg) after PCI, according to bleeding risk and procedural complexity in 3974 patients (mean age, 63 years; 75% men) who were followed for up to 5.9 years.
- High bleeding risk was reported in 866 patients, and 849 patients underwent complex PCI.
- Patients were classified into four risk groups: no high bleeding risk with noncomplex PCI, no high bleeding risk with complex PCI, high bleeding risk with noncomplex PCI, and high bleeding risk with complex PCI.
- The co-primary endpoints were thrombotic composite events (cardiovascular death, nonfatal myocardial infarction, stroke, readmission due to acute coronary syndrome, and definite/probable stent thrombosis) and any bleeding event.
TAKEAWAY:
- Thrombotic composite events (hazard ratio [HR], 2.15; P < .001) and any bleeding event (HR, 3.64; P < .001) were more frequent in patients with a high bleeding risk than in those without.
- However, there was no difference in the risk for thrombotic composite events or any bleeding event by PCI complexity.
- The long-term benefit of clopidogrel monotherapy over aspirin monotherapy was consistent regardless of bleeding risk (P for interaction = .38 for thrombotic composite events and P for interaction = .20 for any bleeding event) or PCI complexity (P for interaction = .12 for thrombotic composite events and P for interaction = .62 for any bleeding event); a brief sketch of how such an interaction is tested follows this list.
- The greatest risk reduction in thrombotic composite events with clopidogrel monotherapy occurred in patients with a high bleeding risk who underwent complex PCI (HR, 0.46; P = .03).
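As a rough illustration of what the “P for interaction” values above represent, the sketch below (in Python, using the lifelines library) fits a Cox model with a treatment-by-bleeding-risk interaction term; the data file and column names are hypothetical and are not taken from HOST-EXAM.

# Minimal sketch (hypothetical data): testing whether a treatment effect differs by
# subgroup via a treatment x subgroup interaction term in a Cox proportional hazards model.
import pandas as pd
from lifelines import CoxPHFitter

# Assumed columns: follow-up time in months, event indicator (1 = thrombotic composite
# event), clopidogrel (1) vs aspirin (0), and high bleeding risk (1) vs not (0).
df = pd.read_csv("host_exam_like_data.csv")  # hypothetical file name
df["treat_x_hbr"] = df["clopidogrel"] * df["high_bleeding_risk"]

cph = CoxPHFitter()
cph.fit(df[["time_months", "event", "clopidogrel", "high_bleeding_risk", "treat_x_hbr"]],
        duration_col="time_months", event_col="event")

# The p-value of the interaction term is the "P for interaction": a nonsignificant value
# suggests the treatment effect is consistent across the bleeding-risk subgroups.
print(cph.summary.loc["treat_x_hbr", "p"])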
IN PRACTICE:
“[In this study], no significant interaction was found between treatment arms and risk groups, denoting that the beneficial impact of clopidogrel monotherapy was consistent regardless of HBR [high bleeding risk] or PCI complexity,” the authors wrote.
SOURCE:
This study was led by Jeehoon Kang, MD, Seoul National University College of Medicine and Seoul National University Hospital, Seoul, Republic of Korea. It was published online on November 27, 2024, in JAMA Cardiology.
LIMITATIONS:
As this study is a post hoc analysis, the findings should be considered primarily hypothesis generating. This study was conducted exclusively in an East Asian population and may not be generalizable to other ethnic groups. The definitions of high bleeding risk and complex PCI used in this analysis were not prespecified in the study protocol of the HOST-EXAM trial. Certain criteria defining high bleeding risk were not analyzed as they fell under the exclusion criteria of the HOST-EXAM trial or were not recorded in the study case report form.
DISCLOSURES:
This study was supported by grants from the Patient-Centered Clinical Research Coordinating Center and Seoul National University Hospital. One author reported receiving grants and personal fees from various pharmaceutical companies outside the submitted work.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
Real-World Data Question Low-Dose Steroid Use in ANCA Vasculitis
TOPLINE:
Compared with a standard dosing regimen, a reduced-dose glucocorticoid regimen is associated with an increased risk for disease progression, relapse, death, or kidney failure in antineutrophil cytoplasmic antibody (ANCA)–associated vasculitis, particularly affecting patients receiving rituximab or those with elevated creatinine levels.
METHODOLOGY:
- The PEXIVAS trial demonstrated that a reduced-dose glucocorticoid regimen was noninferior to standard dosing in terms of death or end-stage kidney disease in ANCA-associated vasculitis. However, the trial did not include disease progression or relapse as a primary endpoint, and cyclophosphamide was the primary induction therapy.
- Researchers conducted this retrospective study across 19 hospitals (18 in France and one in Luxembourg) between January 2018 and November 2022 to compare the effectiveness of a reduced-dose glucocorticoid regimen, as used in the PEXIVAS trial, with a standard-dose regimen in patients with ANCA-associated vasculitis in the real-world setting.
- They included 234 patients aged > 15 years (51% men) with severe granulomatosis with polyangiitis (n = 141) or microscopic polyangiitis (n = 93) who received induction therapy with rituximab or cyclophosphamide; 126 and 108 patients received reduced-dose and standard-dose glucocorticoid regimens, respectively.
- Most patients (70%) had severe renal involvement.
- The primary composite outcome encompassed minor relapse, major relapse, disease progression before remission, end-stage kidney disease requiring dialysis for > 12 weeks or transplantation, and death within 12 months post-induction.
TAKEAWAY:
- The primary composite outcome occurred in a higher proportion of patients receiving reduced-dose glucocorticoid therapy than in those receiving standard-dose therapy (33.3% vs 18.5%; hazard ratio [HR], 2.20; 95% CI, 1.23-3.94).
- However, reduced-dose glucocorticoids were not significantly associated with the risk for death or end-stage kidney disease, nor with the occurrence of serious infections.
- Among patients receiving reduced-dose glucocorticoids, serum creatinine levels > 300 μmol/L were associated with an increased risk for the primary composite outcome (adjusted HR, 3.02; 95% CI, 1.28-7.11).
- In the rituximab induction subgroup, reduced-dose glucocorticoid was associated with an increased risk for the primary composite outcome (adjusted HR, 2.36; 95% CI, 1.18-4.71), compared with standard-dose glucocorticoids.
IN PRACTICE:
“Our data suggest increased vigilance when using the [reduced-dose glucocorticoid] regimen, especially in the two subgroups of patients at higher risk of failure, that is, those receiving [rituximab] as induction therapy and those with a baseline serum creatinine greater than 300 μmol/L,” the authors wrote.
SOURCE:
The study was led by Sophie Nagle, MD, National Referral Centre for Rare Autoimmune and Systemic Diseases, Department of Internal Medicine, Hôpital Cochin, Paris, France. It was published online on November 20, 2024, in Annals of the Rheumatic Diseases.
LIMITATIONS:
The retrospective nature of this study may have introduced inherent limitations and potential selection bias. The study lacked data on patient comorbidities, which could have influenced treatment choice and outcomes. Additionally, about a quarter of patients did not receive methylprednisolone pulses prior to oral glucocorticoids, unlike the PEXIVAS trial protocol. The group receiving standard-dose glucocorticoids showed heterogeneity in glucocorticoid regimens, and the minimum follow-up was only 6 months.
DISCLOSURES:
This study did not report any source of funding. The authors reported no relevant conflicts of interest.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.
Dark Chocolate: A Bittersweet Remedy for Diabetes Risk
TOPLINE:
Consuming five or more servings per week of dark chocolate is associated with a lower risk for type 2 diabetes (T2D), compared with infrequent or no consumption. Conversely, higher consumption of milk chocolate is not significantly associated with the risk for diabetes and is linked to greater weight gain.
METHODOLOGY:
- Chocolate is rich in flavanols, natural compounds known to support heart health and lower the risk for T2D; however, the link between chocolate consumption and T2D risk remains uncertain, with inconsistent findings from studies that did not distinguish between dark and milk chocolate.
- Researchers conducted a prospective cohort study to investigate the associations between dark, milk, and total chocolate consumption and the risk for T2D in three long-term US studies of female nurses and male healthcare professionals with no history of diabetes, cardiovascular disease, or cancer at baseline.
- The relationship between total chocolate consumption and the risk for diabetes was investigated in 192,208 individuals who reported their chocolate consumption using validated food frequency questionnaires every 4 years from 1986 onward.
- Information on chocolate subtypes was assessed from 2006/2007 onward in 111,654 participants.
- Participants self-reported T2D through biennial questionnaires, with diagnoses confirmed via supplementary questionnaires collecting data on glucose levels, hemoglobin A1c concentration, symptoms, and treatment; participants also self-reported their body weight at baseline and during follow-up.
TAKEAWAY:
- During 4,829,175 person-years of follow-up, researchers identified 18,862 individuals with incident T2D in the total chocolate analysis cohort.
- In the chocolate subtype cohort, 4771 incident T2D cases were identified during 1,270,348 person-years of follow-up. Having at least five servings per week of dark chocolate was associated with a 21% lower risk for T2D (adjusted hazard ratio, 0.79; P for trend = .006), while milk chocolate consumption showed no significant link (P for trend = .75).
- The risk for T2D was 3% lower for each additional weekly serving of dark chocolate, indicating a dose-response relationship (a brief worked calculation follows this list).
- Compared with individuals who did not change their chocolate intake, those who had an increased milk chocolate intake had greater weight gain over 4-year periods (mean difference, 0.35 kg; 95% CI, 0.27-0.43); dark chocolate showed no significant association with weight change.
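As a quick sanity check on the scale of these figures, the short calculation below (in Python) uses only the numbers reported above to derive crude incidence rates and to show how a 3% per-serving association would compound; the log-linear extrapolation is an illustrative assumption, not an analysis from the study.

# Back-of-the-envelope arithmetic using the figures reported above.

# Crude incidence of incident T2D per 1000 person-years in each analysis cohort.
total_cohort_rate = 18_862 / 4_829_175 * 1000    # ≈ 3.9 per 1000 person-years
subtype_cohort_rate = 4_771 / 1_270_348 * 1000   # ≈ 3.8 per 1000 person-years

# Under a simple log-linear (per-serving) assumption, a 3% lower hazard per weekly
# serving of dark chocolate compounds over five servings to roughly:
five_serving_reduction = 1 - 0.97 ** 5           # ≈ 0.14, i.e., about 14%
# The reported 21% lower risk for >= 5 servings comes from a separate categorical
# comparison, so the two figures are not expected to match exactly.

print(round(total_cohort_rate, 1), round(subtype_cohort_rate, 1),
      round(five_serving_reduction, 2))          # 3.9 3.8 0.14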
IN PRACTICE:
“Even though dark and milk chocolate have similar levels of calories and saturated fat, it appears that the rich polyphenols in dark chocolate might offset the effects of saturated fat and sugar on weight gain and diabetes. It’s an intriguing difference that’s worth exploring more,” corresponding author Qi Sun from the Departments of Nutrition and Epidemiology, Harvard TH Chan School of Public Health, Boston, Massachusetts, said in a press release.
SOURCE:
This study was led by Binkai Liu, Harvard TH Chan School of Public Health. It was published online in The BMJ.
LIMITATIONS:
The relatively limited number of participants in the higher chocolate consumption groups may have reduced the statistical power for detecting modest associations between dark chocolate consumption and the risk for T2D. Additionally, the study population primarily consisted of non-Hispanic White adults older than 50 years at baseline, which, along with their professional backgrounds, may have limited the generalizability of the study findings to other populations with different socioeconomic or personal characteristics. Chocolate consumption in this study was lower than the national average of three servings per week, which may have limited the ability to assess the dose-response relationship at higher intake levels.
DISCLOSURES:
This study was supported by grants from the National Institutes of Health. Some authors reported receiving investigator-initiated grants, being on scientific advisory boards, and receiving research funding from certain institutions.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.