Risk assessment first urged for fragility fracture screening
A new Canadian guideline on screening for the primary prevention of fragility fractures recommends risk assessment first, before bone mineral density (BMD) testing, for women aged 65 and older. For younger women and men aged 40 and older, screening is not recommended.
To develop the guideline, a writing group from the Canadian Task Force on Preventive Health Care commissioned systematic reviews of studies on the benefits and harms of fragility fracture screening; the predictive accuracy of current risk-assessment tools; patient acceptability; and the benefits of treatment. Treatment harms were analyzed via a rapid overview of reviews.
The guideline, published online in the Canadian Medical Association Journal, is aimed at primary care practitioners for their community-dwelling patients aged 40 and older. The recommendations do not apply to people already taking preventive drugs.
Nondrug treatments were beyond the scope of the current guideline, but guidelines on the prevention of falls and other strategies are planned, Roland Grad, MD, a guideline author and associate professor at McGill University in Montreal, told this news organization.
The new guideline says that women aged 65 and older may be able to avoid fracture through screening and preventive medication. An individual’s fracture risk can be estimated with a new Fragility Fractures Decision Aid, which uses the Canadian FRAX risk-assessment tool.
“A risk assessment–first approach promotes shared decision-making with the patient, based on best medical evidence,” Dr. Grad said.
“To help clinicians, we have created an infographic with visuals to communicate the time spent on BMD vs risk assessment first.”
New evidence
“At least three things motivated this new guideline,” Dr. Grad said. “When we started work on this prior to the pandemic, we saw a need for updated guidance on screening to prevent fragility fractures. We were also aware of new evidence from the publication of screening trials in females older than 65.”
To conduct the risk assessment in older women, clinicians are advised to do the following:
- Use the decision aid (which patients can also use on their own).
- Use the 10-year absolute risk of major osteoporotic fracture to facilitate shared decision-making about possible benefits and harms of preventive pharmacotherapy.
- If pharmacotherapy is being considered, request BMD measurement using dual-energy x-ray absorptiometry (DXA) of the femoral neck, then reestimate the fracture risk by adding the BMD T-score into FRAX.
Potential harms associated with various treatments, with varying levels of evidence, include the following: with alendronate and denosumab, nonserious gastrointestinal adverse events; with denosumab, rash, eczema, and infections; with zoledronic acid, nonserious events, such as headache and flulike symptoms; and with alendronate and other bisphosphonates, rare but serious harms of atypical femoral fracture and osteonecrosis of the jaw.
“These recommendations emphasize the importance of good clinical practice, where clinicians are alert to changes in physical health and patient well-being,” the authors wrote. “Clinicians should also be aware of the importance of secondary prevention (i.e., after fracture) and manage patients accordingly.”
“This is an important topic,” Dr. Grad said. “Fragility fractures are consequential for individuals and for our publicly funded health care system. We anticipate questions from clinicians about the time needed to screen with the risk assessment–first strategy. Our modeling work suggests time savings with [this] strategy compared to a strategy of BMD testing first. Following our recommendations may lead to a reduction in BMD testing.”
To promote the guideline, the CMAJ has recorded a podcast and will use other strategies to increase awareness, Dr. Grad said. “The Canadian Task Force has a communications strategy that includes outreach to primary care, stakeholder webinars, social media, partnerships, and other tactics. The College of Family Physicians of Canada has endorsed the guideline and will help promote to its members.”
Other at-risk groups?
Aliya Khan, MD, FRCPC, FACP, FACE, professor in the divisions of endocrinology and metabolism and geriatrics and director of the fellowship in metabolic bone diseases at McMaster University in Hamilton, Ont., told this news organization she agrees with the strategy of evaluating women aged 65 and older for fracture risk.
“The decision aid is useful, but I would like to see it expanded to other circumstances and situations,” she said.
For example, Dr. Khan would like to see recommendations for younger women and for men of all ages regarding secondary causes of osteoporosis or medications known to have a detrimental effect on bone health. By not addressing these patients, she said, “we may miss patients who would benefit from a fracture risk assessment and potentially treatment to prevent low-trauma fractures.”
A recommendation for younger postmenopausal women was included in the most recent Society of Obstetricians and Gynaecologists Canada guideline, she noted.
Overall, she said, “I believe these recommendations will reduce the excess or inappropriate use of BMD testing and that is welcome.”
Funding for the Canadian Task Force on Preventive Health Care is provided by the Public Health Agency of Canada. The task force members report no relevant financial relationships.
A version of this article first appeared on Medscape.com.
Virtual care not linked with greater ED use during pandemic
Canadian family physicians’ increased use of virtual care during the first years of the pandemic was not associated with increased emergency department use among patients, a new analysis of data from Ontario suggests.
In a cross-sectional study that included almost 14,000 family physicians and almost 13 million patients in Ontario, an adjusted analysis indicated that patients with physicians who provided more than 20% of care virtually had lower rates of ED visits, compared with patients whose physicians provided the least virtual care.
“I was surprised to see that ED visit volumes in fall 2021 were below prepandemic levels,” study author Tara Kiran, MD, who practices family medicine at St. Michael’s Hospital of the University of Toronto, told this news organization.
“At that time, there was a lot in the news about how our EDs were overcrowded and an assumption that this related to higher visit volumes. But our data [suggest] there were other factors at play, including strains in staffing in the ED, hospital inpatient units, and in long-term care.” Dr. Kiran is also the Fidani chair in improvement and innovation and vice-chair of quality and innovation at the department of family and community medicine of the University of Toronto.
The study was published online in JAMA Network Open.
Embrace of telehealth
The investigators analyzed administrative data from Ontario for 13,820 family physicians (mean age, 50 years; 51.5% men) and 12,951,063 patients (mean age, 42.6 years; 51.8% women) under their care.
The family physicians had at least one primary care visit claim between Feb. 1 and Oct. 31, 2021. The researchers categorized the physicians by the percentage of total visits they delivered virtually (via telephone or video) during the study period, as follows: 0% (100% in person), greater than 0%-20%, greater than 20%-40%, greater than 40%-60%, greater than 60%-80%, greater than 80% to less than 100%, or 100%.
The percentage of virtual primary care visits peaked at 82% in the first 2 weeks of the pandemic and decreased to 49% by October 2021. ED visit rates decreased at the start of the pandemic and remained lower than in 2019 throughout the study period.
Most physicians provided between 40% and 80% of care virtually. A greater percentage of those who provided more than 80% of care virtually were aged 65 years or older, were women, and practiced in large cities.
Patient comorbidity and morbidity were similar across all categories of virtual care use. The mean number of ED visits was highest among patients whose physicians provided only in-person care (470.3 per 1,000 patients) and was lowest among those whose physicians provided greater than 0% to less than 100% of care virtually (242 per 1,000 patients).
After adjustment for patient characteristics, patients of physicians who provided more than 20% of care virtually had lower rates of ED visits, compared with patients of physicians who provided the least virtual care (for example, greater than 80% to less than 100% versus 0%-20% virtual visits in big cities; relative rate, 0.77). This pattern was consistent across all rurality of practice categories and after adjustment for 2019 ED visit rates.
The investigators observed a gradient in urban areas. Patients of physicians who provided the highest level of virtual care had the lowest ED visit rates.
Investigating virtual modalities
Some policymakers worried that inappropriate use of virtual care was leading to an increase in ED use. “Findings of this study refute this hypothesis,” the authors write. Increases in ED use seemed to coincide with decreases in COVID-19 cases, not with increases in virtual primary care visits.
Furthermore, at the population level, patients who were cared for by physicians who provided a high percentage of virtual care did not have a higher rate of ED visits, compared with those cared for by physicians who provided the lowest levels of virtual care.
During the pandemic, the switch to virtual care worked well for some of Dr. Kiran’s patients. It was more convenient, because they didn’t have to take time off work, travel to and from the clinic, find and pay for parking, or wait in the clinic before the appointment, she said.
But for others, “virtual care really didn’t work well,” she said. “This was particularly true for people who didn’t have a regular working phone, who didn’t have a private space to take calls, who weren’t fluent in English, and who were hard of hearing or had severe mental illness that resulted in paranoid thoughts.”
Clinicians also may have had different comfort levels and preferences regarding virtual visits, Dr. Kiran hypothesized. Some found it convenient and efficient, whereas others may have found it cumbersome and inefficient. “I personally find it harder to build relationships with patients when I use virtual care,” she said. “I experience more joy in work with in-person visits, but other clinicians may feel differently.”
Dr. Kiran and her colleagues are conducting a public engagement initiative called OurCare to understand public perspectives on the future of primary care. “As part of that work, we want to understand what virtual modalities are most important to the public and how the public thinks these should be integrated into primary care.”
Virtual care can support access, patient-centered care, and equity in primary care, Dr. Kiran added. “Ideally, it should be integrated into an existing relationship with a family physician and be a complement to in-person visits.”
The right dose?
In an accompanying editorial, Jesse M. Pines, MD, chief of clinical innovation at U.S. Acute Care Solutions, Canton, Ohio, writes, “There is no convincing mechanism consistent with the data for the observed outcome of lower ED use at higher telehealth use.”
Additional research is needed, he notes, to answer the "Goldilocks question": What amount of telehealth optimizes its benefits while minimizing potential problems?
“The right dose of telehealth needs to balance (1) concerns by payers and policymakers that it will increase cost and cause unintended consequences (for example, misdiagnosis or duplicative care) and (2) the desire of its proponents who want to allow clinicians to use it as they see fit, with few restrictions,” writes Dr. Pines.
“Future research would ideally use more robust research design,” he suggested. “For example, randomized trials could test different doses of telehealth, or mixed-methods studies could help elucidate how telehealth may be changing clinical management or care-seeking behavior.”
Equitable reimbursement needed
Priya Nori, MD, associate professor of infectious diseases at Montefiore Health System and associate professor at the Albert Einstein College of Medicine, both in New York, said, “I agree with their conclusions and am reassured about telehealth as a durable form of health care delivery.”
Large, population-level studies such as this one might persuade legislators to require equitable reimbursement for in-person and virtual visits “so providers have comparable incentives to provide both types of care,” she said. “Although only primary care was addressed in the study, I believe that virtual care is here to stay and can be applied to primary care, subspecialty care, and other services, like antimicrobial stewardship, infection prevention, et cetera. We need to embrace it.”
A similar study should be conducted in the United States, along with additional research “to ensure that visits done by telephone have similar outcomes as those done by video, as not all communities have adequate internet access or video conferencing technology,” said Dr. Nori.
The study was supported by ICES and grants from Ontario Health, the Canadian Institutes of Health Research, and the Health Systems Research Program of Ontario MOH. Dr. Kiran, Dr. Pines, and Dr. Nori have disclosed no relevant financial relationships.
A version of this article originally appeared on Medscape.com.
Canadian family physicians’ increased use of virtual care during the first years of the pandemic was not associated with increased emergency department use among patients, a new analysis of data from Ontario suggests.
In a cross-sectional study that included almost 14,000 family physicians and almost 13 million patients in Ontario, an adjusted analysis indicated that patients with physicians who provided more than 20% of care virtually had lower rates of ED visits, compared with patients whose physicians provided the least virtual care.
“I was surprised to see that ED visit volumes in fall 2021 were below prepandemic levels,” study author Tara Kiran, MD, who practices family medicine at St. Michael’s Hospital of the University of Toronto, told this news organization.
“At that time, there was a lot in the news about how our EDs were overcrowded and an assumption that this related to higher visit volumes. But our data [suggest] there were other factors at play, including strains in staffing in the ED, hospital inpatient units, and in long-term care.” Dr. Kiran is also the Fidani chair in improvement and innovation and vice-chair of quality and innovation at the department of family and community medicine of the University of Toronto.
The study was published online in JAMA Network Open.
Embrace of telehealth
The investigators analyzed administrative data from Ontario for 13,820 family physicians (mean age, 50 years; 51.5% men) and 12,951,063 patients (mean age, 42.6 years; 51.8% women) under their care.
The family physicians had at least one primary care visit claim between Feb. 1 and Oct. 31, 2021. The researchers categorized the physicians by the percentage of total visits they delivered virtually (via telephone or video) during the study period, as follows: 0% (100% in person), greater than 0%-20%, greater than 20%-40%, greater than 40%-60%, greater than 60%-80%, greater than 80% to less than 100%, or 100%.
The percentage of virtual primary care visits peaked at 82% in the first 2 weeks of the pandemic and decreased to 49% by October 2021. ED visit rates decreased at the start of the pandemic and remained lower than in 2019 throughout the study period.
Most physicians provided between 40% and 80% of care virtually. A greater percentage of those who provided more than 80% of care virtually were aged 65 years or older, were women, and practiced in large cities.
Patient comorbidity and morbidity were similar across all categories of virtual care use. The mean number of ED visits was highest among patients whose physicians provided only in-person care (470.3 per 1,000 patients) and was lowest among those whose physicians provided greater than 0% to less than 100% of care virtually (242 per 1,000 patients).
After adjustment for patient characteristics, patients of physicians who provided more than 20% of care virtually had lower rates of ED visits, compared with patients of physicians who provided the least virtual care (for example, greater than 80% to less than 100% versus 0%-20% virtual visits in big cities; relative rate, 0.77). This pattern was consistent across all rurality of practice categories and after adjustment for 2019 ED visit rates.
The investigators observed a gradient in urban areas. Patients of physicians who provided the highest level of virtual care had the lowest ED visit rates.
Investigating virtual modalities
Some policymakers worried that inappropriate use of virtual care was leading to an increase in ED use. “Findings of this study refute this hypothesis,” the authors write. Increases in ED use seemed to coincide with decreases in COVID-19 cases, not with increases in virtual primary care visits.
Furthermore, at the population level, patients who were cared for by physicians who provided a high percentage of virtual care did not have a higher rate of ED visits, compared with those cared for by physicians who provided the lowest levels of virtual care.
During the pandemic, the switch to virtual care worked well for some of Dr. Kiran’s patients. It was more convenient, because they didn’t have to take time off work, travel to and from the clinic, find and pay for parking, or wait in the clinic before the appointment, she said.
But for others, “virtual care really didn’t work well,” she said. “This was particularly true for people who didn’t have a regular working phone, who didn’t have a private space to take calls, who weren’t fluent in English, and who were hard of hearing or had severe mental illness that resulted in paranoid thoughts.”
Clinicians also may have had different comfort levels and preferences regarding virtual visits, Dr. Kiran hypothesized. Some found it convenient and efficient, whereas others may have found it cumbersome and inefficient. “I personally find it harder to build relationships with patients when I use virtual care,” she said. “I experience more joy in work with in-person visits, but other clinicians may feel differently.”
Dr. Kiran and her colleagues are conducting a public engagement initiative called OurCare to understand public perspectives on the future of primary care. “As part of that work, we want to understand what virtual modalities are most important to the public and how the public thinks these should be integrated into primary care.”
Virtual care can support access, patient-centered care, and equity in primary care, Dr. Kiran added. “Ideally, it should be integrated into an existing relationship with a family physician and be a complement to in-person visits.”
The right dose?
Canadian family physicians’ increased use of virtual care during the first years of the pandemic was not associated with increased emergency department use among patients, a new analysis of data from Ontario suggests.
In a cross-sectional study that included almost 14,000 family physicians and almost 13 million patients in Ontario, an adjusted analysis indicated that patients with physicians who provided more than 20% of care virtually had lower rates of ED visits, compared with patients whose physicians provided the least virtual care.
“I was surprised to see that ED visit volumes in fall 2021 were below prepandemic levels,” study author Tara Kiran, MD, who practices family medicine at St. Michael’s Hospital of the University of Toronto, told this news organization.
“At that time, there was a lot in the news about how our EDs were overcrowded and an assumption that this related to higher visit volumes. But our data [suggest] there were other factors at play, including strains in staffing in the ED, hospital inpatient units, and in long-term care.” Dr. Kiran is also the Fidani chair in improvement and innovation and vice-chair of quality and innovation at the department of family and community medicine of the University of Toronto.
The study was published online in JAMA Network Open.
Embrace of telehealth
The investigators analyzed administrative data from Ontario for 13,820 family physicians (mean age, 50 years; 51.5% men) and 12,951,063 patients (mean age, 42.6 years; 51.8% women) under their care.
The family physicians had at least one primary care visit claim between Feb. 1 and Oct. 31, 2021. The researchers categorized the physicians by the percentage of total visits they delivered virtually (via telephone or video) during the study period, as follows: 0% (100% in person), greater than 0%-20%, greater than 20%-40%, greater than 40%-60%, greater than 60%-80%, greater than 80% to less than 100%, or 100%.
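As a toy illustration (not the study authors' code), the seven-way categorization described above can be sketched in Python. The band boundaries follow the text; the function and label names are my own:

```python
def virtual_care_category(pct_virtual: float) -> str:
    """Assign a physician to one of the study's seven virtual-care bands.

    pct_virtual: share of total visits delivered virtually, on a 0-100 scale.
    Band boundaries follow the article's description; labels are illustrative.
    """
    if pct_virtual == 0:
        return "0% (100% in person)"
    if pct_virtual <= 20:
        return ">0%-20%"
    if pct_virtual <= 40:
        return ">20%-40%"
    if pct_virtual <= 60:
        return ">40%-60%"
    if pct_virtual <= 80:
        return ">60%-80%"
    if pct_virtual < 100:
        return ">80%-<100%"
    return "100%"
```

For example, a physician who delivered 49% of visits virtually would fall in the ">40%-60%" band, the most common range reported in the study.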
The percentage of virtual primary care visits peaked at 82% in the first 2 weeks of the pandemic and decreased to 49% by October 2021. ED visit rates decreased at the start of the pandemic and remained lower than in 2019 throughout the study period.
Most physicians provided between 40% and 80% of care virtually. A greater percentage of those who provided more than 80% of care virtually were aged 65 years or older, were women, and practiced in large cities.
Patient comorbidity and morbidity were similar across all categories of virtual care use. The mean number of ED visits was highest among patients whose physicians provided only in-person care (470.3 per 1,000 patients) and was lowest among those whose physicians provided greater than 0% to less than 100% of care virtually (242 per 1,000 patients).
After adjustment for patient characteristics, patients of physicians who provided more than 20% of care virtually had lower rates of ED visits, compared with patients of physicians who provided the least virtual care (for example, greater than 80% to less than 100% versus 0%-20% virtual visits in big cities; relative rate, 0.77). This pattern was consistent across all rurality of practice categories and after adjustment for 2019 ED visit rates.
The investigators observed a gradient in urban areas. Patients of physicians who provided the highest level of virtual care had the lowest ED visit rates.
Investigating virtual modalities
Some policymakers worried that inappropriate use of virtual care was leading to an increase in ED use. “Findings of this study refute this hypothesis,” the authors write. Increases in ED use seemed to coincide with decreases in COVID-19 cases, not with increases in virtual primary care visits.
Furthermore, at the population level, patients who were cared for by physicians who provided a high percentage of virtual care did not have a higher rate of ED visits, compared with those cared for by physicians who provided the lowest levels of virtual care.
During the pandemic, the switch to virtual care worked well for some of Dr. Kiran’s patients. It was more convenient, because they didn’t have to take time off work, travel to and from the clinic, find and pay for parking, or wait in the clinic before the appointment, she said.
But for others, “virtual care really didn’t work well,” she said. “This was particularly true for people who didn’t have a regular working phone, who didn’t have a private space to take calls, who weren’t fluent in English, and who were hard of hearing or had severe mental illness that resulted in paranoid thoughts.”
Clinicians also may have had different comfort levels and preferences regarding virtual visits, Dr. Kiran hypothesized. Some found it convenient and efficient, whereas others may have found it cumbersome and inefficient. “I personally find it harder to build relationships with patients when I use virtual care,” she said. “I experience more joy in work with in-person visits, but other clinicians may feel differently.”
Dr. Kiran and her colleagues are conducting a public engagement initiative called OurCare to understand public perspectives on the future of primary care. “As part of that work, we want to understand what virtual modalities are most important to the public and how the public thinks these should be integrated into primary care.”
Virtual care can support access, patient-centered care, and equity in primary care, Dr. Kiran added. “Ideally, it should be integrated into an existing relationship with a family physician and be a complement to in-person visits.”
The right dose?
In an accompanying editorial, Jesse M. Pines, MD, chief of clinical innovation at U.S. Acute Care Solutions, Canton, Ohio, writes, “There is no convincing mechanism consistent with the data for the observed outcome of lower ED use at higher telehealth use.”
Additional research is needed, he notes, to answer the “Goldilocks question” – that is, what amount of telehealth optimizes its benefits while minimizing potential problems?
“The right dose of telehealth needs to balance (1) concerns by payers and policymakers that it will increase cost and cause unintended consequences (for example, misdiagnosis or duplicative care) and (2) the desire of its proponents who want to allow clinicians to use it as they see fit, with few restrictions,” writes Dr. Pines.
“Future research would ideally use more robust research design,” he suggested. “For example, randomized trials could test different doses of telehealth, or mixed-methods studies could help elucidate how telehealth may be changing clinical management or care-seeking behavior.”
Equitable reimbursement needed
Priya Nori, MD, associate professor of infectious diseases at Montefiore Health System and the Albert Einstein College of Medicine, both in New York, said, “I agree with their conclusions and am reassured about telehealth as a durable form of health care delivery.”
Large, population-level studies such as this one might persuade legislators to require equitable reimbursement for in-person and virtual visits “so providers have comparable incentives to provide both types of care,” she said. “Although only primary care was addressed in the study, I believe that virtual care is here to stay and can be applied to primary care, subspecialty care, and other services, like antimicrobial stewardship, infection prevention, et cetera. We need to embrace it.”
A similar study should be conducted in the United States, along with additional research “to ensure that visits done by telephone have similar outcomes as those done by video, as not all communities have adequate internet access or video conferencing technology,” said Dr. Nori.
The study was supported by ICES and grants from Ontario Health, the Canadian Institutes of Health Research, and the Health Systems Research Program of Ontario MOH. Dr. Kiran, Dr. Pines, and Dr. Nori have disclosed no relevant financial relationships.
A version of this article originally appeared on Medscape.com.
Novel strategy could improve heart transplant allocation
Prediction models that incorporate more than just treatment status could rank order heart transplant candidates by urgency more effectively than the current system, a modeling study suggests.
Since 2018, the U.S. heart transplant allocation system has ranked heart candidates according to six treatment-based “statuses” (up from three used previously), ignoring many objective patient characteristics, the authors write.
Their study showed no significant difference in survival between statuses four and six, and status five had lower survival than status four.
“We expected multivariable prediction models to outperform the six-status system when it comes to rank ordering patients by how likely they are to die on the wait list (medical urgency),” William F. Parker, MD, MS, PhD, of the University of Chicago, told this news organization.
“However, we were surprised to see that the statuses were out of order,” he said. “Status five patients are more urgent than status three or status four patients,” mainly because most are in renal failure and listed for multiorgan transplantation with a kidney.
Objective physiologic measurements, such as glomerular filtration rate (GFR), had high variable importance, offering a minimally invasive measurement with predictive power in assessing medical urgency. Therefore, including GFR and other variables such as extracorporeal membrane oxygenation (ECMO) could improve the accuracy of the allocation system in identifying the most medically urgent candidates, Dr. Parker and colleagues suggest.
The study was published online in JACC: Heart Failure.
‘Moderate ability’ to rank order
The investigators assessed the effectiveness of the standard six-status ranking system and several novel prediction models in identifying the most urgent heart transplant candidates. The primary outcome was death before receipt of a heart transplant.
The final data set contained 32,294 candidates (mean age, 53 years; 74%, men); 27,200 made up the prepolicy training set and 5,094 were included in the postpolicy test set.
The team evaluated the accuracy of the six-status system using Harrell’s C-index and log-rank tests of Kaplan-Meier estimated survival by status for candidates listed after the policy change (November 2018 to March 2020) in the Scientific Registry of Transplant Recipients data set.
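Harrell's C-index, used here to gauge how well the six-status ranking orders candidates, is the fraction of comparable patient pairs in which the patient assigned the higher risk in fact had the earlier observed event. A minimal stdlib-only sketch of the idea (not the investigators' implementation; all names are mine):

```python
def harrells_c(times, events, risk_scores):
    """Harrell's concordance index for right-censored survival data.

    times:       observed follow-up times
    events:      1 if the event (e.g., wait-list death) was observed, 0 if censored
    risk_scores: higher score means predicted higher risk (earlier event)

    A pair (i, j) is comparable when the patient with the shorter observed
    time actually had the event. The pair is concordant when that patient
    also has the higher risk score; score ties count as half.
    """
    concordant = 0.0
    comparable = 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # i must have the strictly shorter time AND an observed event
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / comparable

# A perfect ranking on fully observed data yields C = 1.0
print(harrells_c([2, 4, 6], [1, 1, 1], [3.0, 2.0, 1.0]))
```

A C-index of 0.5 corresponds to a ranking no better than chance, which is why the six-status system's "moderate" concordance left room for the multivariable models to improve on it.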
They then developed Cox proportional hazards models and random survival forest models using prepolicy data (2010-2017). Predictor variables included age, diagnosis, laboratory measurements, hemodynamics, and supportive treatment at the time of listing.
They found that the six-status ranking at listing has had “moderate ability” to rank order candidates.
As Dr. Parker indicated, statuses four and six had no significant difference in survival, and status five had lower survival than status four.
The investigators’ multivariable prediction models derived with prepolicy data ranked candidates correctly more often than the six-status rankings. Objective physiologic measurements, such as GFR and ECMO, were identified as having significant importance with regard to ranking by urgency.
“The novel prediction models we developed … could be implemented by the Organ Procurement and Transplantation Network (OPTN) as allocation policy and would be better than the status quo,” Dr. Parker said. “However, I think we could do even better using the newer data collected after 2018.”
Modifications underway
The OPTN Heart Transplantation Committee is currently developing a new framework for allocating deceased donor hearts, called Continuous Distribution.
“The six-tiered system works well, and it better stratifies the most medically urgent candidates than the previous allocation framework,” the leadership of the United Network for Organ Sharing Heart Transplantation Committee, including Chair Richard C. Daly, MD, Mayo Clinic; Vice-Chair Jondavid Menteer, MD, University of Southern California, Los Angeles; and former Chair Shelley Hall, MD, Baylor University Medical Center, told this news organization.
“That said, it is always appropriate to review and adjust variables that affect the medical urgency attribute for heart allocation.”
The new framework will change how patients are prioritized, they said. “Continuous distribution will consider all patient factors, including medical urgency, together to determine the order of an organ offer, and no single factor will decide an organ match.
“The goal is to increase fairness by moving to a points-based allocation framework that allows candidates to be compared using a single score composed of multiple factors.
“Furthermore,” they added, “continuous distribution provides a framework that will allow modifications of the criteria defining medical urgency (and other attributes of allocation) to a finer degree than the current policy. … Once continuous distribution is in place and the OPTN has policy monitoring data, the committee may consider and model different ways of defining medical urgency.”
Kiran K. Khush, MD, of Stanford (Calif.) University School of Medicine, coauthor of a related commentary, elaborated. “The composite allocation score (CAS) will consist of a ‘points-based system,’ in which candidates will be assigned points based on (1) medical urgency, (2) anticipated posttransplant survival, (3) candidate biology (e.g., special characteristics that may result in higher prioritization, such as blood type O and allosensitization), (4) access (e.g., prior living donor, pediatric patient), and (5) placement efficacy (travel, proximity).”
Candidates will be assigned points based on these categories, and will be rank ordered for each donor offer.
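The points-based rank ordering Dr. Khush describes can be sketched as a weighted sum over the five attributes. The weights below are placeholders of my own invention, not OPTN policy, and the attribute scores are assumed to be pre-scaled to a 0–1 range:

```python
# Hypothetical attribute weights -- illustrative only, NOT OPTN policy.
WEIGHTS = {
    "medical_urgency": 0.50,
    "posttransplant_survival": 0.20,
    "candidate_biology": 0.15,
    "access": 0.05,
    "placement_efficiency": 0.10,
}

def composite_allocation_score(attrs: dict) -> float:
    """Combine per-attribute points (each scaled 0-1) into one score."""
    return sum(WEIGHTS[k] * attrs[k] for k in WEIGHTS)

def rank_candidates(candidates: dict) -> list:
    """Rank-order candidate IDs by descending composite score for an offer."""
    return sorted(
        candidates,
        key=lambda c: composite_allocation_score(candidates[c]),
        reverse=True,
    )
```

Under this sketch, a highly urgent candidate can outrank one who scores better on every other attribute, which is the behavior a medical-urgency-weighted composite is meant to produce; the actual weighting is what the OPTN committee's modeling work would determine.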
Dr. Khush and colleagues propose that a multivariable model – such as the ones described in the study – would be the best way to assign points for medical urgency.
“This system will be more equitable than the current system,” Dr. Khush said, “because it will better prioritize the sickest candidates while improving access for patients who are currently at a disadvantage [for example, blood O, highly sensitized patients], and will also remove artificial geographic boundaries [for example, the current 500-mile rule for heart allocation].”
Going further
Jesse D. Schold, PhD, of the University of Colorado at Denver, Aurora, raises concerns about other aspects of the heart allocation system in another related commentary.
“One big issue with our data in transplantation … is that, while it is very comprehensive for capturing transplant candidates and recipients, there is no data collection for patients and processes of care for patients prior to wait list placement,” he told this news organization. This phase of care is subject to wide variation in practice, he said, “and is likely as important as any to patients – the ability to be referred, evaluated, and placed on a waiting list.”
Report cards that measure quality of care after wait list placement ignore key phases prior to wait list placement, he said. “This may have the unintended consequences of limiting access to care and to the waiting list for patients perceived to be at higher risk, or the use of higher-risk donors, despite their potential survival advantage.
“In contrast,” he said, “quality report cards that incentivize treatment for all patients who may benefit would likely have a greater beneficial impact on patients with end-organ disease.”
There is also significant risk of underlying differences in patient populations between centers, despite the use of multivariable models, he added. This heterogeneity “may not be reflected accurately in the report cards [which] have significant impact for regulatory review, private payer contracting, and center reputation.”
Some of these concerns may be addressed in the new OPTN Modernization Initiative, according to David Bowman, a public affairs specialist at the Health Resources and Services Administration. One of the goals of the initiative “is to ensure that the OPTN Board of Directors is high functioning, has greater independence, and represents the diversity of communities served by the OPTN,” he told this news organization. “Strengthened governance will lead to effective policy development and implementation, and enhanced transparency and accountability of the process.”
Addressing another concern about the system, Savitri Fedson, MD, of the Michael E. DeBakey VA Medical Center and Baylor College of Medicine, Houston, wonders in a related editorial whether organ donors and recipients should know more about each other, and if so, could that reverse the ongoing downward trend in organ acceptance?
Although some organizations are in favor of sharing more information, Dr. Fedson notes that “less information may have the greater benefit.” She writes, “We might realize that the simplest approach is often the best: a fulsome thank you for the donor’s gift that is willingly given to a stranger without expectation of payment, and on the recipient side, the knowledge that an organ is of good quality.
“The transplant patient can be comforted with the understanding that the risk of disease transmission, while not zero, is low, and that their survival following acceptance of an organ is better than languishing on a waiting list.”
The study received no commercial funding. Dr. Parker, Dr. Khush, Dr. Schold, and Dr. Fedson report no relevant financial relationships.
A version of this article first appeared on Medscape.com.
Prediction models that incorporate more than just treatment status could rank order heart transplant candidates by urgency more effectively than the current system, a modeling study suggests.
Since 2018, the U.S. heart transplant allocation system has ranked heart candidates according to six treatment-based “statuses” (up from three used previously), ignoring many objective patient characteristics, the authors write.
Their study showed no significant difference in survival between statuses four and six, and status five had lower survival than status four.
“We expected multivariable prediction models to outperform the six-status system when it comes to rank ordering patients by how likely they are to die on the wait list (medical urgency),” William F. Parker, MD, MS, PhD, of the University of Chicago, told this news organization.
“However, we were surprised to see that the statuses were out of order,” he said. “Status five patients are more urgent than status three or status four patients,” mainly because most are in renal failure and listed for multiorgan transplantation with a kidney.
Objective physiologic measurements, such as glomerular filtration rate (GFR), had high variable importance, offering a minimally invasive measurement with predictive power in assessing medical urgency. Therefore, including GFR and other variables such as extracorporeal membrane oxygenation (ECMO) could improve the accuracy of the allocation system in identifying the most medically urgent candidates, Dr. Parker and colleagues suggest.
The study was published online in JACC: Heart Failure.
‘Moderate ability’ to rank order
The investigators assessed the effectiveness of the standard six-status ranking system and several novel prediction models in identifying the most urgent heart transplant candidates. The primary outcome was death before receipt of a heart transplant.
The final data set contained 32,294 candidates (mean age, 53 years; 74%, men); 27,200 made up the prepolicy training set and 5,094 were included in the postpolicy test set.
The team evaluated the accuracy of the six-status system using Harrell’s C-index and log-rank tests of Kaplan-Meier estimated survival by status for candidates listed after the policy change (November 2018 to March 2020) in the Scientific Registry of Transplant Recipients data set.
They then developed Cox proportional hazards models and random survival forest models using prepolicy data (2010-2017). Predictor variables included age, diagnosis, laboratory measurements, hemodynamics, and supportive treatment at the time of listing.
They found that the six-status ranking at listing has had “moderate ability” to rank order candidates.
As Dr. Parker indicated, statuses four and six had no significant difference in survival, and status five had lower survival than status four.
The investigators’ multivariable prediction models derived with prepolicy data ranked candidates correctly more often than the six-status rankings. Objective physiologic measurements, such as GFR and ECMO, were identified as having significant importance with regard to ranking by urgency.
“The novel prediction models we developed … could be implemented by the Organ Procurement and Transplantation Network (OPTN) as allocation policy and would be better than the status quo,” Dr. Parker said. “However, I think we could do even better using the newer data collected after 2018.”
Modifications underway
The OPTN Heart Transplantation Committee is currently working on developing a new framework for allocating deceased donor hearts called Continuous Distribution.
“The six-tiered system works well, and it better stratifies the most medically urgent candidates than the previous allocation framework,” the leadership of the United Network for Organ Sharing Heart Transplantation Committee, including Chair Richard C. Daly, MD, Mayo Clinic; Vice-Chair Jondavid Menteer, MD, University of Southern California, Los Angeles; and former Chair Shelley Hall, MD, Baylor University Medical Center, told this news organization.
“That said, it is always appropriate to review and adjust variables that affect the medical urgency attribute for heart allocation.”
The new framework will change how patients are prioritized, they said. “Continuous distribution will consider all patient factors, including medical urgency, together to determine the order of an organ offer, and no single factor will decide an organ match.
“The goal is to increase fairness by moving to a points-based allocation framework that allows candidates to be compared using a single score composed of multiple factors.
“Furthermore,” they added, “continuous distribution provides a framework that will allow modifications of the criteria defining medical urgency (and other attributes of allocation) to a finer degree than the current policy. … Once continuous distribution is in place and the OPTN has policy monitoring data, the committee may consider and model different ways of defining medical urgency.”
Kiran K. Khush, MD, of Stanford (Calif.) University School of Medicine, coauthor of a related commentary, elaborated. “The composite allocation score (CAS) will consist of a ‘points-based system,’ in which candidates will be assigned points based on (1) medical urgency, (2) anticipated posttransplant survival, (3) candidate biology (e.g., special characteristics that may result in higher prioritization, such as blood type O and allosensitization), (4) access (e.g., prior living donor, pediatric patient), and (5) placement efficacy (travel, proximity).”
Candidates will be assigned points based on these categories, and will be rank ordered for each donor offer.
Dr. Khush and colleagues propose that a multivariable model – such as the ones described in the study – would be the best way to assign points for medical urgency.
“This system will be more equitable than the current system,” Dr. Khush said, “because it will better prioritize the sickest candidates while improving access for patients who are currently at a disadvantage [for example, blood O, highly sensitized patients], and will also remove artificial geographic boundaries [for example, the current 500-mile rule for heart allocation].”
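The points-based ranking Dr. Khush describes can be sketched as a weighted sum over attribute scores. The attribute names, weights, and point values below are hypothetical placeholders for illustration; the actual OPTN continuous-distribution formula has not been finalized and may differ substantially.

```python
# Illustrative sketch of a points-based composite allocation score (CAS).
# Weights and point values are hypothetical, not OPTN policy.

def composite_score(candidate, weights):
    """Weighted sum of attribute points (higher = higher priority)."""
    return sum(weights[attr] * candidate[attr] for attr in weights)

weights = {
    "medical_urgency": 0.50,
    "posttransplant_survival": 0.20,
    "candidate_biology": 0.15,    # e.g., blood type O, allosensitization
    "patient_access": 0.10,       # e.g., prior living donor, pediatric
    "placement_efficiency": 0.05, # travel, proximity
}

candidates = {
    "A": {"medical_urgency": 90, "posttransplant_survival": 60,
          "candidate_biology": 40, "patient_access": 0,
          "placement_efficiency": 80},
    "B": {"medical_urgency": 70, "posttransplant_survival": 85,
          "candidate_biology": 70, "patient_access": 50,
          "placement_efficiency": 60},
}

# Rank-order candidates for a donor offer by descending composite score;
# no single factor decides the match on its own.
ranking = sorted(candidates,
                 key=lambda c: composite_score(candidates[c], weights),
                 reverse=True)
print(ranking)  # B edges out A despite A's higher urgency score
```

The point of the sketch is that a candidate weaker on one attribute (here, medical urgency) can still rank first when other attributes compensate, which is how the single-score framework avoids letting any one factor dominate.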
Going further
Jesse D. Schold, PhD, of the University of Colorado at Denver, Aurora, raises concerns about other aspects of the heart allocation system in another related commentary.
“One big issue with our data in transplantation … is that, while it is very comprehensive for capturing transplant candidates and recipients, there is no data collection for patients and processes of care for patients prior to wait list placement,” he told this news organization. This phase of care is subject to wide variation in practice, he said, “and is likely as important as any to patients – the ability to be referred, evaluated, and placed on a waiting list.”
Report cards that measure quality of care after wait list placement ignore key phases prior to wait list placement, he said. “This may have the unintended consequences of limiting access to care and to the waiting list for patients perceived to be at higher risk, or the use of higher-risk donors, despite their potential survival advantage.
“In contrast,” he said, “quality report cards that incentivize treatment for all patients who may benefit would likely have a greater beneficial impact on patients with end-organ disease.”
There is also significant risk of underlying differences in patient populations between centers, despite the use of multivariable models, he added. This heterogeneity “may not be reflected accurately in the report cards [which] have significant impact for regulatory review, private payer contracting, and center reputation.”
Some of these concerns may be addressed in the new OPTN Modernization Initiative, according to David Bowman, a public affairs specialist at the Health Resources and Services Administration. One of the goals of the initiative “is to ensure that the OPTN Board of Directors is high functioning, has greater independence, and represents the diversity of communities served by the OPTN,” he told this news organization. “Strengthened governance will lead to effective policy development and implementation, and enhanced transparency and accountability of the process.”
Addressing another concern about the system, Savitri Fedson, MD, of the Michael E. DeBakey VA Medical Center and Baylor College of Medicine, Houston, wonders in a related editorial whether organ donors and recipients should know more about each other, and if so, could that reverse the ongoing downward trend in organ acceptance?
Although some organizations are in favor of sharing more information, Dr. Fedson notes that “less information may have the greater benefit.” She writes, “We might realize that the simplest approach is often the best: a fulsome thank you for the donor’s gift that is willingly given to a stranger without expectation of payment, and on the recipient side, the knowledge that an organ is of good quality.
“The transplant patient can be comforted with the understanding that the risk of disease transmission, while not zero, is low, and that their survival following acceptance of an organ is better than languishing on a waiting list.”
The study received no commercial funding. Dr. Parker, Dr. Khush, Dr. Schold, and Dr. Fedson report no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM JACC: HEART FAILURE
At-term birth timing may cut preeclampsia risk in half
Timed birth strategies include scheduled labor inductions and cesarean deliveries.
In this observational analysis of nearly 90,000 pregnancies, at-term preeclampsia occurred with similar frequency among women routinely screened during the first trimester and among at-risk women screened during the third trimester.
Overall, on average, at-risk women delivered at 40 weeks, with two-thirds experiencing spontaneous onset of labor. About one-fourth had cesarean deliveries.
“We anticipated that timed birth at 37 weeks could reduce the occurrence of more than half of preeclampsia, [but] this is not an intervention that could be recommended, as complications for the baby would be increased,” Laura A. Magee, MD, of King’s College London, told this news organization.
“However, we were delighted to see that a personalized approach to timed birth, based on an individual woman’s risk of preeclampsia, could prevent a similar number of cases of preeclampsia, with fewer women requiring timed birth and at later gestational ages, when newborn problems would be less frequent.”
Although not currently recommended to prevent at-term preeclampsia, “timed birth by labor induction is a very common timing of birth strategy,” she noted. “At least one-third of women currently undergo labor induction at term gestational age, and one in six choose to deliver by elective cesarean.”
The study was published online in the journal Hypertension.
Screening at 35-36 weeks superior
The investigators analyzed data from a nonintervention cohort study of singleton pregnancies delivering at ≥ 24 weeks, without major anomalies, at two U.K. hospitals.
At routine visits at 11-13 weeks’ gestation, 57,131 pregnancies were screened, and 1,138 term preeclampsia cases developed.
Most of these women were in their early 30s, self-identified as White, and had a BMI at the upper limits of normal. About 10% were smokers; fewer than 3% had a medical history of high blood pressure, type 2 diabetes, or autoimmune disease; and 3.9% reported a family history of preeclampsia.
At 35-36 weeks, in a different cohort, 29,035 pregnancies were screened and term preeclampsia developed in 619 women. Demographics and pregnancy characteristics were similar to those screened at 11-13 weeks, although the average BMI was higher – in the overweight range – and there were fewer Black women, although they still made up 10% of the screened population.
Patient-specific preeclampsia risks were determined by the United Kingdom National Institute for Health and Care Excellence (NICE) guidance, and the Fetal Medicine Foundation competing-risks model, available through an online calculator.
Timing of birth for term preeclampsia prevention was evaluated at 37, 38, 39, or 40 weeks, or according to preeclampsia risk as assessed by the competing-risks model at 35-36 weeks.
The primary outcomes were the proportion of term preeclampsia prevented, and number-needed-to-deliver to prevent one term preeclampsia case.
The investigators found that overall, the proportion of term preeclampsia prevented was highest, and number-needed-to-deliver lowest, for preeclampsia screening at 35-36 weeks rather than at 11-13 weeks.
For delivery at 37 weeks, fewer cases of preeclampsia were prevented with NICE criteria (28.8%) than with the competing-risks model (59.8%), and the number-needed-to-deliver was higher (16.4 vs. 6.9, respectively).
At 35-36 weeks, the risk-stratified approach had similar preeclampsia prevention (57.2%) and number-needed-to-deliver (8.4), but fewer women would be induced at 37 weeks (1.2% vs. 8.8%).
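The two reported metrics relate by simple arithmetic: number-needed-to-deliver (NND) is the count of timed births required to prevent one case of term preeclampsia. The cohort calculation below is a back-of-the-envelope sketch using the article's reported figures, not the authors' own analysis.

```python
# Back-of-the-envelope arithmetic behind the screening comparison.
# NND = timed births needed to prevent one case of term preeclampsia.

def births_required(total_cases, prop_prevented, nnd):
    """Return (timed births needed, cases prevented) for a strategy."""
    prevented = total_cases * prop_prevented
    return prevented * nnd, prevented

# 619 term preeclampsia cases occurred in the 35-36 week screening cohort.
for label, prop, nnd in [("NICE criteria", 0.288, 16.4),
                         ("competing-risks model", 0.598, 6.9)]:
    births, prevented = births_required(619, prop, nnd)
    print(f"{label}: ~{prevented:.0f} cases prevented "
          f"via ~{births:.0f} timed births")
```

Under these illustrative numbers, the competing-risks model prevents roughly twice as many cases (about 370 vs. 178) while requiring somewhat fewer total timed births, which is why the lower NND matters clinically.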
Although personalized timed birth at term may be an effective way to address at-term preeclampsia, “clinicians should wait for definitive clinical trial evidence,” Dr. Magee said.
‘Stay tuned’
Vesna D. Garovic, MD, PhD, Mayo Clinic, Rochester, Minn., and chair of the 2021 AHA Scientific Statement, “Hypertension in Pregnancy: Diagnosis, Blood Pressure Goals, and Pharmacotherapy,” agrees.
The new data “set the stage for adequately designed and powered studies that will provide ultimate response/evidence regarding the efficacy of this approach,” she told this news organization.
“Future studies need to address the safety of this approach,” she added, “as close to 10 timed/planned deliveries will be needed to prevent one case of preeclampsia.”
For now, she said, “While these preliminary data are promising, they are not sufficient to adopt timed birth in daily practice. Prospective studies that will provide sufficient evidence regarding the efficacy and safety of this approach are likely to follow. Stay tuned.”
Indeed, Dr. Magee noted that the Fetal Medicine Foundation is about to launch a randomized trial of a personalized “timing of birth” strategy at term based on the preeclampsia risk described in her group’s study vs. usual care at term – that is, “watchful waiting, and delivery should preeclampsia or another indication for birth develop.”
The study was supported by grants from the Fetal Medicine Foundation, United Kingdom, and various biotech companies provided reagents and relevant equipment free of charge. Dr. Magee and Dr. Garovic reported no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM HYPERTENSION
Vaginal microbiome does not affect infant gut microbiome
The findings suggest that practices such as vaginal seeding are ineffective.
A longitudinal, prospective cohort study of more than 600 pregnant Canadian women and their newborns showed significant differences in an infant’s stool composition by delivery mode at 10 days post partum, but the differences could not be explained by the mother’s vaginal microbiome, and they effectively disappeared by 3 months.
The findings were surprising, Scott Dos Santos, a PhD candidate at the University of Saskatchewan in Saskatoon, told this news organization. “The bacteria living in the maternal vagina are the first microbes that vaginally delivered infants are exposed to. … so it sounds intuitive that different kinds of vaginal microbiomes could end up influencing the development of a baby’s gut microbiome in different ways. But the maternal vaginal microbiome didn’t seem to have any role in predicting what the infant stool microbiome looked like.”
Therefore, women should not be concerned about cesarean delivery having an adverse effect on their baby’s gut microbiome, said Mr. Dos Santos. Moreover, “vaginal seeding is not safe or advised. Professional bodies, including the Society of Obstetricians and Gynecologists of Canada and the American College of Obstetricians and Gynecologists, strongly advise against this practice.”
The study was published online in Frontiers in Cellular and Infection Microbiology.
Independent communities
The investigators analyzed vaginal and stool microbiome profiles from 442 mother-infant dyads. The mothers were healthy, low-risk women who delivered at term. They were recruited into the Maternal Microbiome LEGACY Project from three hospitals in British Columbia.
The mean age of the mothers at delivery was 34.6 years, which is typical of the study hospitals’ delivery populations. Participants identified themselves as White (54.7%), Asian (21.2%), South Asian (8.3%), and of other ethnicities.
A nurse, midwife, or clinician collected maternal vaginal swabs of the posterior fornix and lateral vaginal wall at first presentation to the labor and delivery area. Neonatal meconium, which was defined as the first stool specimen collected within 72 hours of birth, and two infant stool samples were collected at follow-up visits at 10 days and 3 months post partum.
A principal component analysis of infant stool microbiomes showed no significant clustering of microbiome profiles at 10 days or 3 months by maternal community state types (that is, microbial species).
Correspondence analyses also showed no coclustering of maternal and infant clusters at either time. In addition, there were no differences in the distribution of maternal vaginal microbiome clusters among infant stool microbiome clusters, regardless of delivery mode.
Vaginal microbiome clusters were distributed across infant stool clusters in proportion to their frequency in the overall maternal population, indicating that the two communities were independent of each other.
Intrapartum antibiotic administration was identified as a confounder of infant stool microbiome differences and was associated with lower abundances of Escherichia coli, Bacteroides vulgatus, Bifidobacterium longum, and Parabacteroides distasonis.
“Our findings demonstrate that maternal vaginal microbiome composition at delivery does not affect infant stool microbiome composition and development, suggesting that practices to amend infant stool microbiome composition focus on factors other than maternal vaginal microbes,” the authors conclude.
More evidence needed
Commenting on the study, Emily H. Adhikari, MD, assistant professor of obstetrics and gynecology at UT Southwestern Medical Center in Dallas, and medical director of perinatal infectious diseases for the Parkland Health and Hospital System, said, “These findings contribute significantly more data to an understudied area of research into factors that affect the infant gut microbiome from the earliest hours of life. Prior studies have been small and often conflicting, and the authors reference recent larger studies, which corroborate their findings.”
The data regarding whether delivery mode or antibiotic-associated differences in infant microbiomes persist remain controversial, said Dr. Adhikari. “More evidence is needed involving a more ethnically diverse sampling of patients.” In addition, prospectively evaluating vaginal seeding in a rigorously designed clinical trial setting is “imperative to understand any potential benefit and certainly to understand the potential harms of the practice. To date, this does not exist.”
The study was funded by a Canadian Institutes of Health Research grant. Mr. Dos Santos and Dr. Adhikari have disclosed no relevant financial relationships.
A version of this article originally appeared on Medscape.com.
The findings suggest that practices such as vaginal seeding are ineffective.
A longitudinal, prospective cohort study of more than 600 pregnant Canadian women and their newborns showed significant differences in an infant’s stool composition by delivery mode at 10 days post partum, but the differences could not be explained by the mother’s vaginal microbiome, and they effectively disappeared by 3 months.
The findings were surprising, Scott Dos Santos, a PhD candidate at the University of Saskatchewan in Saskatoon, told this news organization. “The bacteria living in the maternal vagina are the first microbes that vaginally delivered infants are exposed to. … so it sounds intuitive that different kinds of vaginal microbiomes could end up influencing the development of a baby’s gut microbiome in different ways. But the maternal vaginal microbiome didn’t seem to have any role in predicting what the infant stool microbiome looked like.”
Therefore, women should not be concerned about cesarean delivery having an adverse effect on their baby’s gut microbiome, said Mr. Dos Santos. Moreover, “vaginal seeding is not safe or advised. Professional bodies, including the Society of Obstetricians and Gynecologists of Canada and the American College of Obstetricians and Gynecologists, strongly advise against this practice.”
The study was published online in Frontiers in Cellular and Infection Microbiology.
Independent communities
The investigators analyzed vaginal and stool microbiome profiles from 442 mother-infant dyads. The mothers were healthy, low-risk women who delivered at term. They were recruited into the Maternal Microbiome LEGACY Project from three hospitals in British Columbia.
The mean age of the mothers at delivery was 34.6 years, which is typical of the study hospitals’ delivery populations. Participants identified themselves as White (54.7%), Asian (21.2%), South Asian (8.3%), and of other ethnicities.
A nurse, midwife, or clinician collected maternal vaginal swabs of the posterior fornix and lateral vaginal wall at first presentation to the labor and delivery area. Neonatal meconium, which was defined as the first stool specimen collected within 72 hours of birth, and two infant stool samples were collected at follow-up visits at 10 days and 3 months post partum.
A principal component analysis of infant stool microbiomes showed no significant clustering of microbiome profiles at 10 days or 3 months by maternal community state types (that is, microbial species).
Correspondence analyses also showed no coclustering of maternal and infant clusters at either time. In addition, there were no differences in the distribution of maternal vaginal microbiome clusters among infant stool microbiome clusters, regardless of delivery mode.
Vaginal microbiome clusters were distributed across infant stool clusters in proportion to their frequency in the overall maternal population, indicating that the two communities were independent of each other.
Intrapartum antibiotic administration was identified as a confounder of infant stool microbiome differences and was associated with lower abundances of Escherichia coli, Bacteroides vulgatus, Bifidobacterium longum, and Parabacteroides distasonis.
“Our findings demonstrate that maternal vaginal microbiome composition at delivery does not affect infant stool microbiome composition and development, suggesting that practices to amend infant stool microbiome composition focus on factors other than maternal vaginal microbes,” the authors conclude.
More evidence needed
Commenting on the study, Emily H. Adhikari, MD, assistant professor of obstetrics and gynecology at UT Southwestern Medical Center in Dallas, and medical director of perinatal infectious diseases for the Parkland Health and Hospital System, said, “These findings contribute significantly more data to an understudied area of research into factors that affect the infant gut microbiome from the earliest hours of life. Prior studies have been small and often conflicting, and the authors reference recent larger studies, which corroborate their findings.”
The data regarding whether delivery mode or antibiotic-associated differences in infant microbiomes persist remain controversial, said Dr. Adhikari. “More evidence is needed involving a more ethnically diverse sampling of patients.” In addition, prospectively evaluating vaginal seeding in a rigorously designed clinical trial setting is “imperative to understand any potential benefit and certainly to understand the potential harms of the practice. To date, this does not exist.”
The study was funded by a Canadian Institutes of Health Research grant. Mr. Dos Santos and Dr. Adhikari have disclosed no relevant financial relationships.
A version of this article originally appeared on Medscape.com.
The findings suggest that practices such as vaginal seeding are ineffective.
A longitudinal, prospective cohort study of more than 600 pregnant Canadian women and their newborns showed significant differences in an infant’s stool composition by delivery mode at 10 days post partum, but the differences could not be explained by the mother’s vaginal microbiome, and they effectively disappeared by 3 months.
The findings were surprising, Scott Dos Santos, a PhD candidate at the University of Saskatchewan in Saskatoon, told this news organization. “The bacteria living in the maternal vagina are the first microbes that vaginally delivered infants are exposed to. … so it sounds intuitive that different kinds of vaginal microbiomes could end up influencing the development of a baby’s gut microbiome in different ways. But the maternal vaginal microbiome didn’t seem to have any role in predicting what the infant stool microbiome looked like.”
Therefore, women should not be concerned about cesarean delivery having an adverse effect on their baby’s gut microbiome, said Mr. Dos Santos. Moreover, “vaginal seeding is not safe or advised. Professional bodies, including the Society of Obstetricians and Gynecologists of Canada and the American College of Obstetricians and Gynecologists, strongly advise against this practice.”
The study was published online in Frontiers in Cellular and Infection Microbiology.
Independent communities
The investigators analyzed vaginal and stool microbiome profiles from 442 mother-infant dyads. The mothers were healthy, low-risk women who delivered at term. They were recruited into the Maternal Microbiome LEGACY Project from three hospitals in British Columbia.
The mean age of the mothers at delivery was 34.6 years, which is typical of the study hospitals’ delivery populations. Participants identified themselves as White (54.7%), Asian (21.2%), South Asian (8.3%), and of other ethnicities.
A nurse, midwife, or clinician collected maternal vaginal swabs of the posterior fornix and lateral vaginal wall at first presentation to the labor and delivery area. Neonatal meconium, which was defined as the first stool specimen collected within 72 hours of birth, and two infant stool samples were collected at follow-up visits at 10 days and 3 months post partum.
A principal component analysis of infant stool microbiomes showed no significant clustering of microbiome profiles at 10 days or 3 months by maternal community state types (that is, clusters defined by the dominant microbial species).
Correspondence analyses also showed no coclustering of maternal and infant clusters at either time. In addition, there were no differences in the distribution of maternal vaginal microbiome clusters among infant stool microbiome clusters, regardless of delivery mode.
Vaginal microbiome clusters were distributed across infant stool clusters in proportion to their frequency in the overall maternal population, indicating that the two communities were independent of each other.
Intrapartum antibiotic administration was identified as a confounder of infant stool microbiome differences and was associated with lower abundances of Escherichia coli, Bacteroides vulgatus, Bifidobacterium longum, and Parabacteroides distasonis.
“Our findings demonstrate that maternal vaginal microbiome composition at delivery does not affect infant stool microbiome composition and development, suggesting that practices to amend infant stool microbiome composition focus on factors other than maternal vaginal microbes,” the authors conclude.
More evidence needed
Commenting on the study, Emily H. Adhikari, MD, assistant professor of obstetrics and gynecology at UT Southwestern Medical Center in Dallas, and medical director of perinatal infectious diseases for the Parkland Health and Hospital System, said, “These findings contribute significantly more data to an understudied area of research into factors that affect the infant gut microbiome from the earliest hours of life. Prior studies have been small and often conflicting, and the authors reference recent larger studies, which corroborate their findings.”
The data regarding whether delivery mode or antibiotic-associated differences in infant microbiomes persist remain controversial, said Dr. Adhikari. “More evidence is needed involving a more ethnically diverse sampling of patients.” In addition, prospectively evaluating vaginal seeding in a rigorously designed clinical trial setting is “imperative to understand any potential benefit and certainly to understand the potential harms of the practice. To date, this does not exist.”
The study was funded by a Canadian Institutes of Health Research grant. Mr. Dos Santos and Dr. Adhikari have disclosed no relevant financial relationships.
A version of this article originally appeared on Medscape.com.
FROM FRONTIERS IN CELLULAR AND INFECTION MICROBIOLOGY
Spherical heart may predict cardiomyopathy, AFib
A round heart, or left ventricle sphericity, predicted cardiomyopathy and atrial fibrillation (AFib) in a deep learning analysis of MRI images from close to 39,000 participants in the UK Biobank, a new study shows.
An increase of 1 standard deviation in the sphericity index (short axis length/long axis length) was associated with a 47% increased incidence of cardiomyopathy and a 20% increased incidence of AFib, independent of clinical factors and traditional MRI measures.
Furthermore, a genetic analysis suggested a shared architecture between sphericity and nonischemic cardiomyopathy, pointing to NICM as a possible causal factor for left ventricle sphericity among individuals with normal LV size and function.
“Physicians have known the heart gets rounder after heart attacks and as we get older,” David Ouyang, MD, a cardiologist in the Smidt Heart Institute at Cedars-Sinai Medical Center, Los Angeles, and a researcher in the division of artificial intelligence in medicine, said in an interview. “We wanted to see if this sphericity is prognostic of future disease even in healthy individuals.”
Although it is too early to recommend heart shape assessment in healthy asymptomatic people, he said, “physicians should be extra careful and think about treatments when they notice a patient’s heart is particularly round.”
The study was published online March 29 in the journal Med.
Sphericity index key
The investigators hypothesized that there is variation in LV sphericity within the spectrum of normal LV chamber size and systolic function, and that such variation might be a marker of cardiac risk with genetic influences.
To test this hypothesis, they used automated deep-learning segmentation of cardiac MRI data to estimate and analyze the sphericity index in a cohort of 38,897 individuals participating in the UK Biobank.
After adjustment for age at MRI and sex, an increased sphericity index was associated with an increased risk for cardiomyopathy (hazard ratio [HR], 1.57), AFib (HR, 1.35), and heart failure (HR, 1.37).
No significant association was seen with cardiac arrest.
The team then stratified the cohort into quintiles and compared the top 20%, middle 60%, and bottom 20%. The relationship between the sphericity index and risk extended across the distribution; individuals with higher than median sphericity had increased disease incidence, and those with lower than median sphericity had decreased incidence.
Overall, a single standard deviation in the sphericity index was associated with increased risk of cardiomyopathy (HR, 1.47) and of AFib (HR, 1.20), independent of clinical factors and usual MRI measurements.
In a minimally adjusted model, the sphericity index was a predictor of incident cardiomyopathy, AFib, and heart failure.
Adjustment for clinical factors partially attenuated the heart failure association; additional adjustment for MRI measurements fully attenuated that association and partially attenuated the association with AFib.
However, in all adjusted models, the association with cardiomyopathy showed little attenuation.
Furthermore, the team identified four loci associated with sphericity at genomewide significance – PLN, ANGPT1, PDZRN3, and HLA DR/DQ – and Mendelian randomization supported NICM as a cause of LV sphericity.
Looking ahead
“While conventional imaging metrics have significant diagnostic and prognostic value, some of these measurements have been adopted out of convenience or tradition,” the authors noted. “By representing a specific multidimensional remodeling phenotype, sphericity has emerged as a distinct morphologic trait with features not adequately captured by conventional measurements.
“We expect that the search space of potential imaging measurements is vast, and we have only begun to scratch at the surface of disease associations.”
Indeed, Dr. Ouyang said his group is “trying to evaluate the sphericity in echocardiograms or heart ultrasounds, which are more common and cheaper than MRI.”
“The main caveat is translating the information directly to patient care,” Richard C. Becker, MD, director and physician-in-chief of the University of Cincinnati Heart, Lung, and Vascular Institute, said in an interview. “Near-term yield could include using the spherical calculation in routine MRI of the heart, and based on the findings, following patients more closely if there is an abnormal shape. Or performing an MRI and targeted gene testing if there is a family history of cardiomyopathy or [of] an abnormal shape of the heart.”
“Validation of the findings and large-scale evaluation of the genes identified, and how they interact with patient and environmental factors, will be very important,” he added.
Nevertheless, “the study was well done and may serve as a foundation for future research,” Dr. Becker said. “The investigators used several powerful tools, including MRI, genomics, and [artificial intelligence] to draw their conclusions. This is precisely the way that ‘big data’ should be used – in a complementary fashion.”
The study authors and Dr. Becker reported no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM MED
Commotio cordis underrecognized, undertreated outside of sports
Sudden cardiac arrest (SCA) due to commotio cordis occurs more frequently in non–sport-related settings than is commonly thought, resulting in lower rates of resuscitation and increased mortality, especially among young women, a new review suggests.
The condition is rare, caused by an often fatal arrhythmia secondary to a blunt, nonpenetrating impact over the precordium, without direct structural damage to the heart itself. Common causes in nonsport settings include assault, motor vehicle accidents (MVAs), and daily activities such as occupational accidents.
“We found a stark difference in mortality outcomes between non–sport-related commotio cordis compared to sport-related events,” at 88% vs. 66%, Han S. Lim, MBBS, PhD, of the University of Melbourne, and Austin Health, Heidelberg, Australia, told this news organization. “Rates of cardiopulmonary resuscitation (CPR) (27% vs. 97%) and defibrillation (17% vs. 81%) were considerably lower in the non–sport-related events.”
“Although still being male-predominant, of concern, we saw a higher proportion of females in non–sport-related commotio cordis due to assault, MVAs, and other activities,” he noted. Such events may occur “in secluded domestic settings, may not be witnessed, or may occur as intentional harm, whereby the witness could also be the perpetrator, reducing the likelihood of prompt diagnosis, CPR, and defibrillation administration.”
The study was published online in JACC: Clinical Electrophysiology.
Young women affected
Dr. Lim and colleagues searched the literature through 2021 for all cases of commotio cordis. Three hundred and thirty-four cases from among 53 citations were included in the analysis; of those, 121 (36%) occurred in non–sport-related settings, including assault (76%), MVAs (7%), and daily activities (16%). “Daily activities” comprised activities that were expected in a person’s day-to-day routine such as falls, play fighting (in children), and occupational accidents.
Non–sport-related cases primarily involved nonprojectile etiologies (95%), including bodily contact (79%), such as impacts from fists, feet, and knees; impacts with handlebars or steering wheels; and solid stick-like weapons and flat surfaces.
Sport-related cases involved a significantly higher proportion of projectiles (94% vs. 5%) and occurred across a range of sports, mostly at the competitive level (66%).
Both sport-related and non–sport-related commotio cordis affected a similar younger demographic (mean age, 19; mostly males). No statistically significant differences between the two groups were seen with regard to previous cardiac history or family history of cardiac disease, or in arrhythmias on electrocardiogram, biomarkers, or imaging findings.
However, in non–sport-related events, the proportion of females affected was significantly higher (13% vs. 2%), as was mortality (88% vs. 66%). Rates were lower for CPR (27% vs. 97%) and defibrillation use (17% vs. 81%), and resuscitation was more commonly delayed beyond 3 minutes (80% vs. 5%).
The finding that more than a third of reported cases were non–sport-related “is higher than previously reported, and included data from 15 different countries,” the authors noted.
Study limitations included the use of data only from published studies, inclusion of a case series limited to fatal cases, small sample sizes, and lack of consistent reporting of demographic data, mechanisms, investigation results, management, and outcomes.
Increased awareness ‘essential’
Dr. Lim and colleagues concluded that increased awareness of non–sport-related commotio cordis is “essential” for early recognition, resuscitation, and mortality reduction.
Jim Cheung, MD, chair of the American College of Cardiology’s electrophysiology section, said he “completely agrees.” Greater awareness among the general population could reduce barriers to CPR and automated external defibrillator (AED) use, he said, which in turn can lead to improved survival.
Furthermore, Dr. Cheung added, “This study underscores the importance of ensuring that non–cardiology-trained physicians such as emergency medicine physicians and trauma surgeons who might encounter patients with non–sports-related commotio cordis recognize the entity during the course of treatment.”
Because the review relied only on published cases, “it may not represent the true breadth of cases that are occurring in the real world,” he noted. “I suspect that cases that occur outside of sports-related activities, such as MVAs and assault, are more likely to be underreported and that the true proportion of non–sports-related commotio cordis may be significantly higher than 36%.” Increased reporting of cases as part of an international commotio cordis registry would help provide additional insights, he suggested.
“There is a common misperception that SCA only occurs among older patients and patients with known coronary artery disease or heart failure,” he said. “For us to move the needle on improving SCA survival, we will need to tackle the problem from multiple angles including increasing public awareness, training the public on CPR and AED use, and improving access to AEDs by addressing structural barriers.”
Dr. Cheung pointed to ongoing efforts by nonprofit, patient-driven organizations such as the SADS Foundation and Omar Carter Foundation, and professional societies such as the American College of Cardiology, the American Heart Association, and Heart Rhythm Society, to direct public awareness campaigns and legislative proposals to address this problem.
Similar efforts are underway among cardiac societies and SCA awareness groups in Australia, Dr. Lim said.
No funding or relevant financial relationships were disclosed.
A version of this article first appeared on Medscape.com.
“There is a common misperception that SCA only occurs among older patients and patients with known coronary artery disease or heart failure,” he said. “For us to move the needle on improving SCA survival, we will need to tackle the problem from multiple angles including increasing public awareness, training the public on CPR and AED use, and improving access to AEDs by addressing structural barriers.”
Dr. Cheung pointed to ongoing efforts by nonprofit, patient-driven organizations such as the SADS Foundation and Omar Carter Foundation, and professional societies such as the American College of Cardiology, the American Heart Association, and Heart Rhythm Society, to direct public awareness campaigns and legislative proposals to address this problem.
Similar efforts are underway among cardiac societies and SCA awareness groups in Australia, Dr. Lim said.
No funding or relevant financial relationships were disclosed.
A version of this article first appeared on Medscape.com.
FROM JACC: CLINICAL ELECTROPHYSIOLOGY
Even small changes in fitness tied to lower mortality risk
Even relatively small changes in cardiorespiratory fitness (CRF) are associated with “considerable” impact on clinical symptoms and mortality risk among individuals with and without cardiovascular disease, new observational data in United States veterans suggest.
“We had a few surprises,” Peter Kokkinos, PhD, Robert Wood Johnson Medical School, New Brunswick, N.J., and the VA Medical Center, Washington, told this news organization. “First, the mortality risk was greatly attenuated in those who were moderate- and high-fit at baseline, despite a decline in fitness over time. In fact, in those with no CVD, the risk was not significantly elevated even when CRF declined by at least one MET [metabolic equivalent of task] for the moderate-fit and two or more METs for the high-fit group.”
“Second,” he said, “Our findings suggest that the impact of CRF on human health is not ephemeral, but rather carries a certain protection over time. Third, the changes in CRF necessary to impact mortality risk are relatively small (> 1.0 METs). This has a substantial clinical and public health significance.”
The study was published online in the Journal of the American College of Cardiology.
CRF up, mortality risk down
Dr. Kokkinos and colleagues analyzed data from 93,060 U.S. veterans; of these, 95% were men (mean age, 61.4 years) and 5% were women (mean age, 57.1 years). Overall, 72% of participants were White; 19.8%, African American; 5.2%, Hispanic; 1.9%, Native American, Asian, or Hawaiian; and 1.2%, unknown.
Participants were assigned to age-specific fitness quartiles based on peak METs achieved on a baseline exercise treadmill test (ETT). Each CRF quartile was stratified based on CRF changes (increase, decrease, no change) on the final ETT, with at least two ETT assessments at least 1 year apart.
The mean follow-up was 5.8 years (663,522 person-years), during which 18,302 deaths (19.7%) occurred, for an average annual mortality rate of 27.6 events per 1,000 person-years.
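As a quick arithmetic check, the reported annualized rate follows directly from the deaths and person-years figures in the study; this short sketch simply reproduces that calculation:

```python
# Reproduce the cohort's average annual mortality rate from the
# figures reported in the study: 18,302 deaths over 663,522 person-years.
deaths = 18_302
person_years = 663_522

# Rate per 1,000 person-years = deaths / person-years * 1,000
rate_per_1000 = deaths / person_years * 1_000
print(f"{rate_per_1000:.1f} events per 1,000 person-years")  # prints "27.6 events per 1,000 person-years"
```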
CRF was unchanged in 25.1% of the cohort, increased in 29.3%, and decreased in 45.6%. The trend was similar for those with and without CVD.
Significant differences were seen in all variables across CRF categories. In general, body weight, body mass index, CVD risk factors, and overall disease burden were progressively more unfavorable for those in the lowest CRF categories.
Conversely, medication use was progressively higher among those in low CRF categories.
After adjustment, higher CRF was inversely related to mortality risk for the entire cohort, with and without CVD. Cumulative mortality rates across CRF categories declined progressively with increasing fitness.
CVD itself was a significant predictor of all-cause mortality (hazard ratio [HR], 1.11), as were age (HR, 1.07), body mass index (HR, 0.98), chronic kidney disease (HR, 1.85), smoking (HR, 1.57), type 2 diabetes (HR, 1.42), hypertension (HR, 1.39), and cancers (HR, 1.37).
Generally, changes in CRF of at least 1.0 MET were associated with inverse and proportionate changes in mortality risk, regardless of baseline CRF status. For example, they note, a CRF decline of > 2.0 METs was associated with a 74% increased mortality risk for low-fit individuals with CVD, and a 69% increase for those without CVD.
A second analysis was done after excluding patients whose CRF declined and who died within 2 years of their last ETT, to account for the possibility that higher mortality rates and CRF declines were consequences of underlying disease (reverse causality). The association between changes in CRF and mortality risk persisted and remained similar to that observed in the entire cohort.
The authors add, “It is noteworthy that CRF increased by at least 1 MET in approximately 29% of the participants in the current study and decreased in approximately 46% of participants. This finding underscores the need to promote physical activity to maintain or increase CRF levels in middle-aged and older individuals.”
“Our findings make a persuasive argument that CRF is a strong and independent determinant of all-cause mortality risk, independent of genetic factors,” Dr. Kokkinos said. “We know that CRF is determined to some degree by genetic factors. However, improvements in aerobic capacity or CRF over time are largely the outcomes of regular engagement in aerobic activities of adequate intensity and volume.”
“Conversely,” he said, “a decline in CRF is likely the result of sedentary behavior, the onset of a chronic condition, or aging.”
If genetics were the sole contributor to mortality risk, then changes in CRF would not influence mortality risk, he concluded.
CRF impact “woefully underestimated”
Barry A. Franklin, PhD, past chair of both the American Heart Association’s Council on Physical Activity and Metabolism and the National Advocacy Committee, said the study substantiates previous smaller studies and is a “seminal” work.
“CRF is woefully underestimated as an index of health outcomes and survival,” said Dr. Franklin, director of preventive cardiology and cardiac rehabilitation at Beaumont Health in Royal Oak, Mich. “Moderate to vigorous physical activity should be regularly promoted by the medical community.”
Dr. Franklin’s recent review, published in Mayo Clinic Proceedings, provides evidence for other exercise benefits that clinicians may not be aware of, he noted. These include:
- Each 1 MET increase in CRF is generally associated with approximately 16% reduction in mortality.
- At any given risk factor profile or coronary calcium score, unfit people have 2-3 times the mortality of their fit counterparts.
- Fitness is inversely related to annual health care costs (each 1 MET increase in CRF is associated with approximately 6% lower annual health care costs).
- Physically active people hospitalized with acute coronary syndromes have better short-term outcomes (likely because of a phenomenon called ‘exercise preconditioning’).
- Fit people who undergo elective or emergent surgical procedures have better outcomes.
- Regular physical activity is a common characteristic in population subsets who routinely live into their 90s and past 100.
Dr. Franklin had this advice for clinicians seeking to promote CRF increases of 1 MET or more among patients: “Sedentary people who embark on a walking program, who over time increase their walking speed to 3 mph or faster, invariably show at least a 1 MET increase in CRF during subsequent peak or symptom-limited treadmill testing.”
“Another general rule is that if an exercise program decreases heart rate at a given or fixed workload by about 10 beats per minute [bpm], the same treadmill workload that initially was accomplished at a heart rate of 120 bpm is now being accomplished at a heart rate of 110 bpm,” likely resulting in about a 1 MET increase in fitness.
“Accordingly,” he added, “a 20-bpm decrease would suggest a 2 MET increase in fitness!”
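Dr. Franklin's rule of thumb lends itself to a back-of-the-envelope calculation. The sketch below is illustrative only: the function name and the strictly linear ~10 bpm-per-MET scaling are assumptions drawn from the quote above, not a validated clinical formula.

```python
def estimated_met_gain(hr_before_bpm: float, hr_after_bpm: float) -> float:
    """Rough MET-gain estimate from the drop in heart rate at a fixed
    treadmill workload, using the ~10 bpm-per-1 MET rule of thumb
    quoted in the article. Illustrative only, not a clinical formula."""
    BPM_PER_MET = 10.0  # assumed linear scaling from the rule of thumb
    # A rise in heart rate at the same workload implies no fitness gain.
    return max(0.0, (hr_before_bpm - hr_after_bpm) / BPM_PER_MET)

# The examples from the article: 120 -> 110 bpm suggests ~1 MET,
# and a 20-bpm decrease suggests ~2 METs.
print(estimated_met_gain(120, 110))  # prints 1.0
print(estimated_met_gain(120, 100))  # prints 2.0
```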
In a related editorial, Leonard A. Kaminsky of Ball State University, Muncie, Ind., and colleagues write, “We agree with and believe the conclusion, reached by Kokkinos et al., bears repeating. We (again) call on both clinicians and public health professionals to adopt CRF as a key health indicator.”
“This should be done by coupling routine assessments of CRF with continued advocacy for promoting physical activity as an essential healthy lifestyle behavior,” they write.
No funding or relevant financial relationships were disclosed.
A version of this article first appeared on Medscape.com.
FROM THE JOURNAL OF THE AMERICAN COLLEGE OF CARDIOLOGY
Concussion burden tied to later hypertension in football players
Greater concussion burden during playing years is associated with higher odds of later-life hypertension among former professional football players, a new study suggests.
Among more than 4,000 participants, 37% had hypertension at a median of 24 years post career and reported a median concussion symptom score (CSS) of 23 on a scale of 0 to 130.
“We have long seen an incompletely explained link between football participation and later-life cardiovascular disease,” Aaron L. Baggish, MD, of Massachusetts General Hospital and Harvard Medical School, Boston, told this news organization.
“This study tested [whether] concussion burden during years of active play would be a determinant of later-life hypertension, the most common cause of cardiovascular disease, and indeed found this relationship to be a strong one.”
The study was published online in Circulation.
Link to cognitive decline?
Dr. Baggish and colleagues recruited former professional American-style football (ASF) players to participate in a survey administered by the Football Players Health Study at Harvard University.
Concussion burden was quantified with respect to the occurrence and severity of common concussion symptoms – e.g., headaches, nausea, dizziness, confusion, loss of consciousness (LOC), disorientation, and feeling unsteady on one’s feet – over years of active participation.
Prevalent hypertension was determined either by the participants’ previously receiving from a clinician a recommendation for medication for “high blood pressure” or by the participants’ taking such medication at the time of survey completion. Diabetes status was determined by the participants’ receiving a prior recommendation for or prescription for “diabetes or high blood sugar” medication.
Of 15,070 invited to participate in the study, 4,168 did so. The mean age of the participants was 51.8 years; 39.4% were Black; the mean body mass index was 31.3; and 33.9% were linemen. Participants played for a mean of 6.9 seasons and were surveyed at a median 24.1 years post ASF career completion. The median CSS was 23.
A total of 1,542 participants (37.3%) had hypertension, and 8.8% had diabetes.
After adjustment for established hypertension risk factors, including smoking, race, diabetes, age, and BMI, there was a graded association between CSS category and odds of later-life hypertension and between high CSS exposure and prevalent hypertension.
Results persisted when LOC, a single highly specific severe concussion symptom, was used in isolation as a surrogate for CSS, the investigators noted.
“These results suggest that repetitive early-life brain injury may have later-life implications for cardiovascular health,” they wrote. They also noted that hypertension has been shown to independently increase the risk of cognitive decline.
While premature cognitive decline among ASF players is generally attributed to chronic traumatic encephalopathy, “data from the current study raise the possibility that some element of cognitive decline among former ASF players may be attributable to hypertension,” which is potentially treatable.
“Future studies clarifying associations and causal pathways between brain injury, hypertension, and brain health are warranted,” they concluded.
Dr. Baggish added, “We hope that clinicians will now understand that head injury is an independent risk factor for high blood pressure and will screen vulnerable populations accordingly, as this may lead to better recognition of previously underdiagnosed hypertension with subsequent opportunities for intervention.”
Close monitoring
Commenting on the study, Jonathan Kim, MD, chair-elect of the American College of Cardiology’s Sports–Cardiology Section and chief of sports cardiology at Emory University in Atlanta, said, “They clearly show an independent association, which is not causality but is a new finding that requires more research. To me, it really emphasizes that cardiovascular risk is the most important health consequence that we should be worried about in retired NFL [National Football League] players.
“There are multifactorial reasons – not just repetitive head trauma – why this athletic population is at risk for the development of high blood pressure, even among college players,” he said.
Dr. Kim’s team has shown in studies conducted in collaboration with Dr. Baggish and others that collegiate football players who gain weight and develop increased systolic blood pressure are at risk of developing a “pathologic” cardiovascular phenotype.
Other research from this group showed links between nonsteroidal anti-inflammatory drug use among high school and collegiate ASF players and increased cardiovascular risk, as well as ASF-associated hypertension and ventricular-arterial coupling.
The suggestion that late-life hypertension could play a role in premature cognitive decline among ASF players “warrants further study,” Dr. Kim said, “because we do know that hypertension in the general population can be associated with cognitive decline. So that’s an important future direction.”
He concluded: “It’s a matter of focusing on cardiac prevention.” After their careers, players should be counseled on the importance of losing weight and adopting heart-healthy habits. In addition to some of the traditional concerns that might lead to closer follow-up of these patients, “having a lot of concussions in the history could potentially be another risk factor that should warrant close monitoring of blood pressure and, of course, treatment if necessary.”
The study was supported by Harvard Catalyst/the Harvard Clinical and Translational Science Center and the NFL Players Association. Dr. Baggish and several coauthors have received funding from the NFL Players Association.
A version of this article originally appeared on Medscape.com.
a new study suggests.
Among more than 4,000 participants, 37% had hypertension at a median of 24 years post career and reported a median concussion symptom score (CSS) of 23 on a scale of 0 to 130.
“We have long seen an incompletely explained link between football participation and later-life cardiovascular disease,” Aaron L. Baggish, MD, of Massachusetts General Hospital and Harvard Medical School, Boston, told this news organization.
“This study tested [whether] concussion burden during years of active play would be a determinant of later-life hypertension, the most common cause of cardiovascular disease, and indeed found this relationship to be a strong one.”
The study was published online in Circulation.
Link to cognitive decline?
Dr. Baggish and colleagues recruited former professional American-style football (ASF) players to participate in a survey administered by the Football Players Health Study at Harvard University.
Concussion burden was quantified with respect to the occurrence and severity of common concussion symptoms – e.g., headaches, nausea, dizziness, confusion, loss of consciousness (LOC), disorientation, and feeling unsteady on one’s feet – over years of active participation.
Prevalent hypertension was determined either by the participants’ previously receiving from a clinician a recommendation for medication for “high blood pressure” or by the participants’ taking such medication at the time of survey completion. Diabetes status was determined by the participants’ receiving a prior recommendation for or prescription for “diabetes or high blood sugar” medication.
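The survey-based case definitions above can be expressed as simple coding rules. The sketch below is illustrative only (not the study's actual code), with invented field names, but it follows the definitions described in the text:

```python
# Illustrative sketch of the study's survey-based case definitions.
# Field names and the example respondent are hypothetical.

def prevalent_hypertension(bp_med_recommended: bool, bp_med_current: bool) -> bool:
    """Hypertensive if a clinician ever recommended 'high blood pressure'
    medication OR the respondent was taking it at survey completion."""
    return bp_med_recommended or bp_med_current

def diabetes(dm_med_recommended_or_prescribed: bool) -> bool:
    """Diabetic if medication for 'diabetes or high blood sugar' was
    ever recommended or prescribed."""
    return dm_med_recommended_or_prescribed

respondent = {"bp_med_recommended": False, "bp_med_current": True,
              "dm_med_recommended_or_prescribed": False}
print(prevalent_hypertension(respondent["bp_med_recommended"],
                             respondent["bp_med_current"]))           # True
print(diabetes(respondent["dm_med_recommended_or_prescribed"]))       # False
```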
Of the 15,070 former players invited to participate in the study, 4,168 did so. The mean age of participants was 51.8 years; 39.4% were Black; the mean body mass index (BMI) was 31.3; and 33.9% were linemen. Participants played for a mean of 6.9 seasons and were surveyed at a median of 24.1 years after completing their ASF careers. The median CSS was 23.
A total of 1,542 participants (37.3%) had hypertension, and 8.8% had diabetes.
After adjustment for established hypertension risk factors, including smoking, race, diabetes, age, and BMI, there was a graded association between CSS category and odds of later-life hypertension and between high CSS exposure and prevalent hypertension.
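A "graded" (dose-response) association of this kind is often summarized as odds ratios for each exposure category versus the lowest. The sketch below shows the unadjusted version of that calculation on invented counts; the study itself used multivariable models adjusting for the risk factors listed above, and the quartile grouping here is assumed for illustration:

```python
# Hedged sketch: category-wise odds ratios versus a reference group.
# All counts are invented; the study's actual analysis was multivariable.

def odds_ratio(exp_cases: int, exp_noncases: int,
               ref_cases: int, ref_noncases: int) -> float:
    """Odds ratio of a 2x2 table: odds in the exposed category divided
    by odds in the reference category."""
    return (exp_cases / exp_noncases) / (ref_cases / ref_noncases)

# Hypothetical (hypertensive, non-hypertensive) counts per CSS quartile.
css_groups = {"Q1 (ref)": (200, 800), "Q2": (260, 740),
              "Q3": (320, 680), "Q4": (400, 600)}

ref_cases, ref_non = css_groups["Q1 (ref)"]
for group, (cases, non) in css_groups.items():
    # Rising ORs across quartiles would indicate a graded association.
    print(group, round(odds_ratio(cases, non, ref_cases, ref_non), 2))
```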
Results persisted when LOC, a single highly specific severe concussion symptom, was used in isolation as a surrogate for CSS, the investigators noted.
“These results suggest that repetitive early-life brain injury may have later-life implications for cardiovascular health,” they wrote. They also noted that hypertension has been shown to independently increase the risk of cognitive decline.
While premature cognitive decline among ASF players is generally attributed to chronic traumatic encephalopathy, “data from the current study raise the possibility that some element of cognitive decline among former ASF players may be attributable to hypertension,” which is potentially treatable.
“Future studies clarifying associations and causal pathways between brain injury, hypertension, and brain health are warranted,” they concluded.
Dr. Baggish added, “We hope that clinicians will now understand that head injury is an independent risk factor for high blood pressure and will screen vulnerable populations accordingly, as this may lead to better recognition of previously underdiagnosed hypertension with subsequent opportunities for intervention.”
Close monitoring
Commenting on the study, Jonathan Kim, MD, chair-elect of the American College of Cardiology’s Sports–Cardiology Section and chief of sports cardiology at Emory University in Atlanta, said, “They clearly show an independent association, which is not causality but is a new finding that requires more research. To me, it really emphasizes that cardiovascular risk is the most important health consequence that we should be worried about in retired NFL [National Football League] players.
“There are multifactorial reasons – not just repetitive head trauma – why this athletic population is at risk for the development of high blood pressure, even among college players,” he said.
Dr. Kim’s team has shown in studies conducted in collaboration with Dr. Baggish and others that collegiate football players who gain weight and develop increased systolic blood pressure are at risk of developing a “pathologic” cardiovascular phenotype.
Other research from this group showed links between nonsteroidal anti-inflammatory drug use among high school and collegiate ASF players and increased cardiovascular risk, as well as ASF-associated hypertension and ventricular-arterial coupling.
The suggestion that late-life hypertension could play a role in premature cognitive decline among ASF players “warrants further study,” Dr. Kim said, “because we do know that hypertension in the general population can be associated with cognitive decline. So that’s an important future direction.”
He concluded: “It’s a matter of focusing on cardiac prevention.” After their careers, players should be counseled on the importance of losing weight and adopting heart-healthy habits. In addition to some of the traditional concerns that might lead to closer follow-up of these patients, “having a lot of concussions in the history could potentially be another risk factor that should warrant close monitoring of blood pressure and, of course, treatment if necessary.”
The study was supported by Harvard Catalyst/the Harvard Clinical and Translational Science Center and the NFL Players Association. Dr. Baggish and several coauthors have received funding from the NFL Players Association.
A version of this article originally appeared on Medscape.com.
FROM CIRCULATION
Cardiac issues twice as likely with COVID plus high troponin
Hospitalized COVID-19 patients with high troponin levels are twice as likely to have cardiac abnormalities as patients with normal troponin levels, with or without COVID-19, a multicenter U.K. study suggests.
The causes were diverse, myocarditis prevalence was lower than previously reported, and myocardial scar emerged as an independent risk factor for adverse cardiovascular outcomes at 12 months.
“We know that multiorgan involvement in hospitalized patients with COVID-19 is common ... and may result in acute myocardial injury, detected by an increase in cardiac troponin concentrations,” John P. Greenwood, PhD, of the University of Leeds (England), told this news organization. “Elevated cardiac troponin is associated with a worse prognosis.”
“Multiple mechanisms of myocardial injury have been proposed and ... mitigation or prevention strategies likely depend on the underpinning mechanisms,” he said. “The sequelae of scar may predispose to late events.”
The study, published online in Circulation, also identified a new pattern of microinfarction on cardiac magnetic resonance (CMR) imaging, highlighting the pro-thrombotic nature of SARS-CoV-2, Dr. Greenwood said.
Injury patterns different
Three hundred and forty-two patients with COVID-19 and elevated troponin levels (COVID+/troponin+) across 25 centers were enrolled between June 2020 and March 2021 in COVID-HEART, deemed an “urgent public health study” in the United Kingdom. The aim was to characterize myocardial injury and its associations and sequelae in convalescent patients after hospitalization with COVID-19.
Enrollment took place during the Wuhan and Alpha waves of COVID-19: before vaccination and when dexamethasone and anticoagulant protocols were emerging. All participants underwent CMR at a median of 21 days after discharge.
Two prospective control groups also were recruited: 64 patients with COVID-19 and normal troponin levels (COVID+/troponin−) and 113 without COVID-19 or elevated troponin matched by age and cardiovascular comorbidities (COVID−/comorbidity+).
Overall, participants’ median age was 61 years and 69% were men. Common comorbidities included hypertension (47%), obesity (43%), and diabetes (25%).
The frequency of any heart abnormality – for example, left or right ventricular impairment, scar, or pericardial disease – was twice as great (61%) in COVID+/troponin+ cases as in controls (36% for COVID+/troponin− patients and 31% for COVID−/comorbidity+ patients).
Specifically, more cases than controls had ventricular impairment (17.2% vs. 3.1% and 7.1%) or scar (42% vs. 7% and 23%).
The myocardial injury pattern differed between cases and controls, with cases more likely to have infarction (13% vs. 2% and 7%) or microinfarction (9% vs. 0% and 1%).
However, there was no between-group difference in nonischemic scar (13% vs. 5% and 14%).
The prevalence of probable recent myocarditis was 6.7% in cases, compared with 1.7% in controls without COVID-19 – “much lower” than in previous studies, Dr. Greenwood noted.
During follow-up, four COVID+/troponin+ patients (1.2%) died, and 34 (10.2%) experienced a subsequent major adverse cardiovascular event (MACE), a rate similar to that in controls (6.1%).
Myocardial scar, but not previous COVID-19 infection or troponin level, was an independent predictor of MACE (odds ratio, 2.25).
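An odds ratio such as 2.25 can be translated into an absolute risk only relative to some baseline. The sketch below shows the standard conversion, using an invented baseline MACE probability for patients without scar (the 6.1% control rate is borrowed purely for illustration):

```python
# Hedged sketch: converting an odds ratio to an absolute probability,
# given a hypothetical baseline risk. The baseline value is assumed.

def risk_from_odds_ratio(baseline_prob: float, odds_ratio: float) -> float:
    """Scale baseline odds by the odds ratio, then convert back to a
    probability: p = odds / (1 + odds)."""
    baseline_odds = baseline_prob / (1 - baseline_prob)
    new_odds = baseline_odds * odds_ratio
    return new_odds / (1 + new_odds)

# If baseline MACE risk without scar were 6.1%, an OR of 2.25 would
# correspond to roughly a 12.8% risk with scar.
print(round(risk_from_odds_ratio(0.061, 2.25), 3))
```

Note that because an odds ratio is not a risk ratio, the implied risk increase here is about 2.1-fold, slightly below the odds ratio itself.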
“These findings suggest that macroangiopathic and microangiopathic thrombosis may be the key pathologic process for myocardial injury in COVID-19 survivors,” the authors conclude.
Dr. Greenwood added, “We are currently analyzing the 6-month follow-up CMR scans, the quality-of-life questionnaires, and the 6-minute walk tests. These will give us great understanding of how the heart repairs after acute myocardial injury associated with COVID-19. It will also allow us to assess the impact on patient quality of life and functional capacity.”
‘Tour de force’
James A. de Lemos, MD, co-chair of the American Heart Association’s COVID-19 CVD Registry Steering Committee and a professor of medicine at the University of Texas Southwestern Medical Center, Dallas, said, “This is a tour de force collaboration – obtaining this many MRIs across multiple centers in the pandemic is quite remarkable. The study highlights the multiple different processes that lead to cardiac injury in COVID patients, complements autopsy studies and prior smaller MRI studies, [and] also provides the best data on the rate of myocarditis to date among the subset of COVID patients with cardiac injury.”
Overall, he said, the findings “do support closer follow-up for patients who had COVID and elevated troponins. We need to see follow-up MRI results in this cohort, as well as longer term outcomes. We also need studies on newer, more benign variants that are likely to have lower rates of cardiac injury and even fewer MRI abnormalities.”
Matthias Stuber, PhD, and Aaron L. Baggish, MD, both of Lausanne University Hospital and University of Lausanne, Switzerland, noted in a related editorial, “We are also reminded that the clinical severity of COVID-19 is most often dictated by the presence of pre-existing comorbidity, with antecedent ischemic scar now added to the long list of bad actors. Although not the primary focus of the COVID-HEART study, the question of whether cardiac troponin levels should be checked routinely and universally during the index admission for COVID-19 remains unresolved,” they noted.
“In general, we are most effective as clinicians when we use tests to confirm or rule out the specific disease processes suspected by careful basic clinical assessment rather than in a shotgun manner among undifferentiated all-comers,” they conclude.
No commercial funding or relevant financial relationships were reported.
A version of this article originally appeared on Medscape.com.