The Journal of Clinical Outcomes Management® is an independent, peer-reviewed journal offering evidence-based, practical information for improving the quality, safety, and value of health care.
The end of the telemedicine era?
I started taking care of Jim, a 68-year-old man with metastatic renal cell carcinoma back in the fall of 2018. Jim lived far from our clinic in the rural western Sierra Mountains and had a hard time getting to Santa Monica, but needed ongoing pain and symptom management, as well as follow-up visits with oncology and discussions with our teams about preparing for the end of life.
Luckily for Jim, the Centers for Medicare & Medicaid Services had relaxed the rules around telehealth because of the public health emergency, and we were easily able to provide telemedicine visits throughout the pandemic, ensuring that Jim retained access to the care team that had managed his cancer for several years at that point. This would not have been possible without the use of telemedicine – at least not without great effort and expense by Jim to make frequent trips to our Santa Monica clinic.
So, you can imagine my apprehension when I received an email the other day from our billing department, informing billing providers like me that “telehealth visits are still covered through the end of the year.” While this initially seemed like reassuring news, it immediately raised the question – what happens at the end of the year? What will care look like for patients like Jim who live at a significant distance from their providers?
The end of the COVID-19 public health emergency on May 11 has prompted states to reevaluate the future of telehealth for Medicaid and Medicare recipients. Most states plan to make some telehealth services permanent, particularly in rural areas, while other telehealth services have been extended through Dec. 31, 2024, under the Consolidated Appropriations Act of 2023.
Still, we can now see very ill patients in their own homes without imposing an undue burden on them to come in for yet another office visit. Prior to the public health emergency, our embedded palliative care program would see patients only when they were in the oncology clinic, so as not to burden them with having to travel to yet another clinic. This made our palliative providers less efficient, since patients were being seen by multiple providers in the same space, which led to some time spent waiting around. It also frequently tied up our clinic exam rooms for long periods of time, delaying care for patients sitting in the waiting room.
Telehealth changed that virtually overnight. With the widespread availability of smartphones and tablets, patients could stay at home and speak more comfortably in their own surroundings – especially about the difficult topics we tend to dig into in palliative care – such as fears, suffering, grief, loss, legacy, regret, trauma, gratitude, dying – without the impersonal, aseptic environment of a clinic. We could visit with their family/caregivers, kids, and their pets. We could tour their living space and see how they were managing from a functional standpoint. We could get to know aspects of our patients’ lives that we’d never have seen in the clinic that could help us understand their goals and values better and help care for them more fully.
The benefit to the institution was also measurable. We could see our patients faster – the time from referral to consult dropped dramatically because patients could be scheduled for next-day virtual visits instead of having to wait for them to come back to an oncology visit. We could do quick symptom-focused visits that prior to telehealth would have been conducted by phone without the ability to perform at the very least an observational physical exam of the patient, which is important when prescribing medications to medically frail populations.
If telemedicine goes, how will it affect outpatient palliative care?
If that goes away, I do not know what will happen to outpatient palliative care. I can tell you we will be much less efficient in terms of when we see patients. There will probably be a higher clinic burden to patients, as well as higher financial toxicity to patients (parking in the structure attached to my office building is $22 per day). And what about the uncaptured costs associated with transportation for those whose illness prevents them from driving themselves? These range from Uber fares to the time cost for a patient’s family member to take off work and arrange for childcare in order to drive the patient to a clinic for a visit.
In February, I received emails from the Drug Enforcement Administration suggesting that it, too, may roll back providers’ ability to prescribe controlled substances to patients who are mainly receiving telehealth services. While I understand and fully support the need to curb inappropriate overprescribing of controlled medications, I am concerned about the unintended consequences for cancer patients who live at a remote distance from their oncologists and palliative care providers. I remain hopeful that the DEA will consider a carveout exception for patients who have cancer, are receiving palliative care services, or are deemed to be at the end of life, much as the chronic opioid guidelines developed by the Centers for Disease Control and Prevention have done.
Telemedicine in essential care
Back to Jim. Using telehealth and electronic prescribing, our oncology and palliative care programs were able to keep Jim comfortable and at home through the end of his life. He did not have to travel 3 hours each way to get care. He did not have to spend money on parking and gas, and his daughter did not have to take days off work and arrange for a babysitter in order to drive him to our clinic. We partnered with a local pharmacy that was willing to special order medications for Jim when his pain became worse and he required a long-acting opioid. We partnered with a local home health company that kept a close eye on Jim and let us know when he seemed to be declining further, prompting discussions about transitioning to hospice.
I’m proud of the fact that our group helped Jim stay in comfortable surroundings and out of the clinic and hospital over the last 6 months of his life, but that would never have happened without the safe and thoughtful use of telehealth by our team.
Ironically, because of a public health emergency, we were able to provide efficient and high-quality palliative care at the right time, to the right person, in the right place, satisfying CMS goals to provide better care for patients and whole populations at lower costs.
Ms. D’Ambruoso is a hospice and palliative care nurse practitioner for UCLA Health Cancer Care, Santa Monica, Calif.
Heart rate, cardiac phase influence perception of time
People’s perception of time is subjective and based not only on their emotional state but also on heartbeat and heart rate (HR), two new studies suggest.
Researchers studied young adults with an electrocardiogram (ECG), measuring electrical activity at millisecond resolution while participants listened to tones that varied in duration. Participants were asked to report whether certain tones were longer or shorter, in relation to others.
The researchers found that the momentary perception of time was not continuous but rather expanded or contracted with each heartbeat. When the heartbeat preceding a tone was shorter, participants regarded the tone as longer in duration; but when the preceding heartbeat was longer, the participants experienced the tone as shorter.
“Our findings suggest that there is a unique role that cardiac dynamics play in the momentary experience of time,” lead author Saeedah Sadeghi, MSc, a doctoral candidate in the department of psychology at Cornell University, Ithaca, N.Y., said in an interview.
The study was published online in Psychophysiology.
In a second study, published in the journal Current Biology, a separate team of researchers asked participants to judge whether a brief event – the presentation of a tone or an image – was shorter or longer than a reference duration. ECG was used to track systole and diastole when participants were presented with these events.
The researchers found that the durations were underestimated during systole and overestimated during diastole, suggesting that time seemed to “speed up” or “slow down,” based on cardiac contraction and relaxation. When participants rated the events as more arousing, their perceived durations contracted, even during diastole.
“In our new paper, we show that our heart shapes the perceived duration of events, so time passes quicker when the heart contracts but slower when the heart relaxes,” lead author Irena Arslanova, PhD, postdoctoral researcher in cognitive neuroscience, Royal Holloway University of London, told this news organization.
Temporal ‘wrinkles’
“Subjective time is malleable,” observed Ms. Sadeghi and colleagues in their report. “Rather than being a uniform dimension, perceived duration has ‘wrinkles,’ with certain intervals appearing to dilate or contract relative to objective time” – a phenomenon sometimes referred to as “distortion.”
“We have known that people aren’t always consistent in how they perceive time, and objective duration doesn’t always explain subjective perception of time,” Ms. Sadeghi said.
Although the potential role of the heart in the experience of time has been hypothesized, research into the heart-time connection has been limited, with previous studies focusing primarily on average cardiac measures over longer time scales of seconds to minutes.
The current study sought to investigate “the beat-by-beat fluctuations of the heart period on the experience of brief moments in time” because, compared with longer time scales, subsecond temporal perception “has different underlying mechanisms” and a subsecond stimulus can be a “small fraction of a heartbeat.”
To home in on this small fraction, the researchers studied 45 participants (aged 18-21), who listened to 210 tones ranging in duration from 80 ms (short) to 188 ms (long). The tones were linearly spaced at 18-ms increments (80, 98, 116, 134, 152, 170, 188).
Participants were asked to categorize each tone as “short” or “long.” All tones were randomly assigned to be synchronized either with the systolic or diastolic phase of the cardiac cycle (50% each). The tones were triggered by participants’ heartbeats.
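The stimulus design described above – seven linearly spaced durations, 210 tones, and a 50/50 random assignment to the systolic or diastolic phase – can be sketched in a few lines. This is an illustrative reconstruction under stated assumptions (the variable names and exact trial-balancing scheme are not from the published methods):

```python
import random

# Seven tone durations from 80 to 188 ms in 18-ms steps, as described.
durations_ms = list(range(80, 189, 18))
assert durations_ms == [80, 98, 116, 134, 152, 170, 188]

# 210 tones split evenly across 7 durations x 2 cardiac phases
# gives 15 repeats per condition (an assumed balancing scheme).
repeats = 210 // (len(durations_ms) * 2)
trials = [(d, phase)
          for d in durations_ms
          for phase in ("systole", "diastole")  # 50% each, as described
          for _ in range(repeats)]
random.shuffle(trials)  # randomized presentation order
assert len(trials) == 210
```

In the actual experiment, each tone's onset was triggered by the participant's own heartbeat rather than scheduled in advance.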
In addition, participants engaged in a heartbeat-counting activity, in which they were asked not to touch their pulse but to count their heartbeats by tuning in to their bodily sensations at intervals of 25, 35, and 45 seconds.
‘Classical’ response
“Participants exhibited an increased heart period after tone onset, which returned to baseline following an average canonical bell shape,” the authors reported.
The researchers performed regression analyses to determine how, on average, the heart rate before the tone was related to perceived duration or how the amount of change after the tone was related to perceived duration.
They found that when the heart rate was higher before the tone, participants tended to be more accurate in their time perception. When the heartbeat preceding a tone was shorter, participants experienced the tone as longer; conversely, when the heartbeat was longer, they experienced the duration of the identical sound as shorter.
When participants focused their attention on the sounds, their heart rate was affected such that their orienting responses actually changed their heart rate and, in turn, their temporal perception.
“The orienting response is classical,” Ms. Sadeghi said. “When you attend to something unpredictable or novel, the act of orienting attention decreases the HR.”
She explained that the heartbeats are “noise to the brain.” When people need to perceive external events, “a decrease in HR facilitates the intake of things from outside and facilitates sensory intake.”
A lower HR “makes it easier for the person to take in the tone and perceive it, so it feels as though they perceive more of the tone and the duration seems longer.”
It is unknown whether this is a causal relationship, she cautioned, “but it seems as though the decrease in HR somehow makes it easier to ‘get’ more of the tone, which then appears to have longer duration.”
Bidirectional relationship
“We know that experienced time can be distorted,” said Dr. Arslanova. “Time flies by when we’re busy or having fun but drags on when we’re bored or waiting for something, yet we still don’t know how the brain gives rise to such elastic experience of time.”
The brain controls the heart in response to the information the heart provides about the state of the body, she noted, “but we have begun to see more research showing that the heart–brain relationship is bidirectional.”
This means that the heart plays a role in shaping “how we process information and experience emotions.” In this analysis, Dr. Arslanova and colleagues “wanted to study whether the heart also shapes the experience of time.”
To do so, they conducted two experiments.
In the first, participants (n = 28) were presented with brief events during systole or during diastole. The events took the form of an emotionally neutral visual shape or auditory tone, shown for durations of 200 to 400 ms.
Participants were asked whether these events were of longer or shorter duration, compared with a reference duration.
The researchers found a significant main effect of cardiac phase (F(1,27) = 8.1, P = .01), with stimuli presented at diastole regarded, on average, as 7 ms longer than those presented at systole.
They also found a significant main effect of modality (F(1,27) = 5.7, P = .02), with tones judged, on average, as 13 ms longer than visual stimuli.
“This means that time ‘sped up’ during the heart’s contraction and ‘slowed down’ during the heart’s relaxation,” Dr. Arslanova said.
The effect of cardiac phase on duration perception was independent of changes in HR, the authors noted.
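A main effect like the ~7-ms phase difference above amounts to a within-subject paired contrast. The following sketch uses invented per-participant numbers purely to illustrate the shape of that comparison; these are not the study's data:

```python
# Hypothetical per-participant mean duration estimates (ms) for the same
# stimuli presented at systole vs. diastole. Values are illustrative only.
systole_ms = [298, 305, 301, 296, 303]
diastole_ms = [306, 311, 307, 304, 310]

# Paired within-subject differences: positive values mean the same event
# was judged longer when presented at diastole.
diffs = [d - s for s, d in zip(systole_ms, diastole_ms)]
mean_diff = sum(diffs) / len(diffs)
```

In the paper itself, this contrast was tested with a repeated-measures ANOVA (hence the reported F statistic) rather than a simple mean of differences.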
In the second experiment, participants performed a similar task, but this time, it involved the images of faces containing emotional expressions. The researchers again observed a similar pattern of time appearing to speed up during systole and slow down during diastole, with stimuli present at diastole regarded as being an average 9 ms longer than those presented at systole.
These opposing effects of systole and diastole on time perception were present only for low and average arousal ratings (b = 14.4 [SE 3.2], P < .001 and b = 9.2 [2.3], P < .001, respectively). However, this effect disappeared when arousal ratings increased (b = 4.1 [3.2], P = .21).
“Interestingly, when participants rated the events as more arousing, their perceived durations contracted, even during the heart’s relaxation,” Dr. Arslanova observed. “This means that in a nonaroused state, the two cardiac phases pull the experienced duration in opposite directions – time contracts, then expands.”
The findings “also predict that increasing HR would speed up passing time, making events seem shorter, because there will be a stronger influence from the heart’s contractions,” she said.
She described the relationship between time perception and emotion as complex, noting that the findings are important because they show “that the way we experience time cannot be examined in isolation from our body,” she said.
Converging evidence
Martin Wiener, PhD, assistant professor, George Mason University, Fairfax, Va., said both papers “provide converging evidence on the role of the heart in our perception of time.”
Together, “the results share that our sense of time – that is, our incoming sensory perception of the present ‘moment’ – is adjusted or ‘gated’ by both our HR and cardiac phase,” said Dr. Wiener, executive director of the Timing Research Forum.
The studies “provide a link between the body and the brain, in terms of our perception, and that we cannot study one without the context of the other,” said Dr. Wiener, who was not involved with the current study.
“All of this opens up a new avenue of research, and so it is very exciting to see,” Dr. Wiener stated.
No source of funding was listed for the study by Ms. Sadeghi and coauthors. They declared no relevant financial relationships.
Dr. Arslanova and coauthors declared no competing interests. Senior author Manos Tsakiris, PhD, receives funding from the European Research Council Consolidator Grant. Dr. Wiener declared no relevant financial relationships.
A version of this article first appeared on Medscape.com.
People’s perception of time is subjective and based not only on their emotional state but also on heartbeat and heart rate (HR), two new studies suggest.
Researchers studied young adults with an electrocardiogram (ECG), measuring electrical activity at millisecond resolution while participants listened to tones that varied in duration. Participants were asked to report whether certain tones were longer or shorter, in relation to others.
The researchers found that the momentary perception of time was not continuous but rather expanded or contracted with each heartbeat. When the heartbeat preceding a tone was shorter, participants regarded the tone as longer in duration; but when the preceding heartbeat was longer, the participants experienced the tone as shorter.
“Our findings suggest that there is a unique role that cardiac dynamics play in the momentary experience of time,” lead author Saeedah Sadeghi, MSc, a doctoral candidate in the department of psychology at Cornell University, Ithaca, N.Y., said in an interview.
The study was published online in Psychophysiology.
In a second study, published in the journal Current Biology, a separate team of researchers asked participants to judge whether a brief event – the presentation of a tone or an image – was shorter or longer than a reference duration. ECG was used to track systole and diastole when participants were presented with these events.
The researchers found that the durations were underestimated during systole and overestimated during diastole, suggesting that time seemed to “speed up” or “slow down,” based on cardiac contraction and relaxation. When participants rated the events as more arousing, their perceived durations contracted, even during diastole.
“In our new paper, we show that our heart shapes the perceived duration of events, so time passes quicker when the heart contracts but slower when the heart relaxes,” lead author Irena Arslanova, PhD, postdoctoral researcher in cognitive neuroscience, Royal Holloway University of London, told this news organization.
Temporal ‘wrinkles’
“Subjective time is malleable,” observed Ms. Sadeghi and colleagues in their report. “Rather than being a uniform dimension, perceived duration has ‘wrinkles,’ with certain intervals appearing to dilate or contract relative to objective time” – a phenomenon sometimes referred to as “distortion.”
“We have known that people aren’t always consistent in how they perceive time, and objective duration doesn’t always explain subjective perception of time,” Ms. Sadeghi said.
Although the potential role of the heart in the experience of time has been hypothesized, research into the heart-time connection has been limited, with previous studies focusing primarily on estimating average cardiac measures over longer time scales of seconds to minutes.
The current study sought to investigate “the beat-by-beat fluctuations of the heart period on the experience of brief moments in time” because, compared with longer time scales, subsecond temporal perception “has different underlying mechanisms” and a subsecond stimulus can be a “small fraction of a heartbeat.”
To home in on this small fraction, the researchers studied 45 participants (aged 18-21), who listened to 210 tones ranging in duration from 80 ms (short) to 188 ms (long). The tones were linearly spaced at 18-ms increments (80, 98, 116, 134, 152, 170, 188).
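The seven tone durations described above form a simple arithmetic sequence, which can be reconstructed as a quick sketch (illustrative only; the variable name is ours, not the study's):

```python
# Reconstruct the tone-duration grid from the Sadeghi et al. paradigm:
# seven durations linearly spaced at 18-ms increments, from 80 ms to 188 ms.
durations_ms = list(range(80, 189, 18))
print(durations_ms)  # [80, 98, 116, 134, 152, 170, 188]
```

This confirms the spacing reported in the study: each tone is 18 ms longer than the last, spanning 80 ms ("short") to 188 ms ("long").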
Participants were asked to categorize each tone as “short” or “long.” All tones were randomly assigned to be synchronized either with the systolic or diastolic phase of the cardiac cycle (50% each). The tones were triggered by participants’ heartbeats.
In addition, participants engaged in a heartbeat-counting activity, in which they were asked not to touch their pulse but to count their heartbeats by tuning in to their bodily sensations at intervals of 25, 35, and 45 seconds.
‘Classical’ response
“Participants exhibited an increased heart period after tone onset, which returned to baseline following an average canonical bell shape,” the authors reported.
The researchers performed regression analyses to determine how, on average, the heart rate before the tone was related to perceived duration or how the amount of change after the tone was related to perceived duration.
They found that when the heart rate was higher before the tone, participants tended to be more accurate in their time perception. When the heartbeat preceding a tone was shorter, participants experienced the tone as longer; conversely, when the heartbeat was longer, they experienced the duration of the identical sound as shorter.
When participants focused their attention on the sounds, their orienting responses changed their heart rate and, in turn, their temporal perception.
“The orienting response is classical,” Ms. Sadeghi said. “When you attend to something unpredictable or novel, the act of orienting attention decreases the HR.”
She explained that the heartbeats are “noise to the brain.” When people need to perceive external events, “a decrease in HR facilitates the intake of things from outside and facilitates sensory intake.”
A lower HR “makes it easier for the person to take in the tone and perceive it, so it feels as though they perceive more of the tone and the duration seems longer – similarly, when the HR decreases.”
It is unknown whether this is a causal relationship, she cautioned, “but it seems as though the decrease in HR somehow makes it easier to ‘get’ more of the tone, which then appears to have longer duration.”
Bidirectional relationship
“We know that experienced time can be distorted,” said Dr. Arslanova. “Time flies by when we’re busy or having fun but drags on when we’re bored or waiting for something, yet we still don’t know how the brain gives rise to such elastic experience of time.”
The brain controls the heart in response to the information the heart provides about the state of the body, she noted, “but we have begun to see more research showing that the heart–brain relationship is bidirectional.”
This means that the heart plays a role in shaping “how we process information and experience emotions.” In this analysis, Dr. Arslanova and colleagues “wanted to study whether the heart also shapes the experience of time.”
To do so, they conducted two experiments.
In the first, participants (n = 28) were presented with brief events during systole or during diastole. The events took the form of an emotionally neutral visual shape or auditory tone, shown for durations of 200 to 400 ms.
Participants were asked whether these events were of longer or shorter duration, compared with a reference duration.
The researchers found a significant main effect of cardiac phase (F(1,27) = 8.1, P = .01), with stimuli presented at diastole regarded, on average, as 7 ms longer than those presented at systole.
They also found a significant main effect of modality (F(1,27) = 5.7, P = .02), with tones judged, on average, as 13 ms longer than visual stimuli.
“This means that time ‘sped up’ during the heart’s contraction and ‘slowed down’ during the heart’s relaxation,” Dr. Arslanova said.
The effect of cardiac phase on duration perception was independent of changes in HR, the authors noted.
In the second experiment, participants performed a similar task, but this time with images of faces showing emotional expressions. The researchers again observed time appearing to speed up during systole and slow down during diastole, with stimuli presented at diastole regarded as an average of 9 ms longer than those presented at systole.
These opposing effects of systole and diastole on time perception were present only for low and average arousal ratings (b = 14.4 [SE 3.2], P < .001 and b = 9.2 [2.3], P < .001, respectively). However, this effect disappeared when arousal ratings increased (b = 4.1 [3.2], P = .21).
“Interestingly, when participants rated the events as more arousing, their perceived durations contracted, even during the heart’s relaxation,” Dr. Arslanova observed. “This means that in a nonaroused state, the two cardiac phases pull the experienced duration in opposite directions – time contracts, then expands.”
The findings “also predict that increasing HR would speed up passing time, making events seem shorter, because there will be a stronger influence from the heart’s contractions,” she said.
She described the relationship between time perception and emotion as complex, noting that the findings are important because they show "that the way we experience time cannot be examined in isolation from our body."
Converging evidence
Martin Wiener, PhD, assistant professor, George Mason University, Fairfax, Va., said both papers “provide converging evidence on the role of the heart in our perception of time.”
Together, “the results share that our sense of time – that is, our incoming sensory perception of the present ‘moment’ – is adjusted or ‘gated’ by both our HR and cardiac phase,” said Dr. Wiener, executive director of the Timing Research Forum.
The studies “provide a link between the body and the brain, in terms of our perception, and that we cannot study one without the context of the other,” said Dr. Wiener, who was not involved with the current study.
“All of this opens up a new avenue of research, and so it is very exciting to see,” Dr. Wiener stated.
No source of funding was listed for the study by Ms. Sadeghi and coauthors. They declared no relevant financial relationships.
Dr. Arslanova and coauthors declared no competing interests. Senior author Manos Tsakiris, PhD, receives funding from the European Research Council Consolidator Grant. Dr. Wiener declared no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM PSYCHOPHYSIOLOGY
Nasal COVID treatment shows early promise against multiple variants
A treatment delivered as a nasal spray shows early promise against multiple coronavirus variants if used within 4 hours after infection inside the nose, new research reveals.
Known as TriSb92 (brand name Covidin, from drugmaker Pandemblock Oy in Finland), the viral inhibitor also appears effective against all coronavirus variants of concern, neutralizing even the Omicron variants BA.5, XBB, and BQ.1.1 in laboratory and mice studies.
Unlike a COVID vaccine that boosts a person’s immune system as protection, the antiviral nasal spray works more directly by blocking the virus, acting as a “biological mask in the nasal cavity,” according to the biotechnology company set up to develop the treatment.
The product targets a stable site on the spike protein of the virus that is not known to mutate. This same site is shared among many variants of the COVID virus, so it could be effective against future variants as well, researchers note.
“In animal models, by directly inactivating the virus, TriSb92 offers immediate and robust protection” against coronavirus infection and severe COVID, said Anna R. Mäkelä, PhD, lead author of the study and a senior scientist in the department of virology at the University of Helsinki.
The study was published online in Nature Communications.
A potential first line of defense
Even in cases where the antiviral does not prevent coronavirus infection, the treatment could slow infection. This could happen by limiting how much virus could replicate early in the skin inside the nose and nasopharynx (the upper part of the throat), said Dr. Mäkelä, who is also CEO of Pandemblock Oy, the company set up to develop the product.
“TriSb92 could effectively tip the balance in favor of the [the person] and thereby help to reduce the risk of severe COVID-19 disease,” she said.
The antiviral also could offer an alternative to people who cannot or do not respond to a vaccine.
“Many elderly people as well as individuals who are immunodeficient for various reasons do not respond to vaccines and are in the need of other protective measures,” said Kalle Saksela, MD, PhD, senior author of the study and a virologist at the University of Helsinki.
Multiple doses needed?
TriSb92 is “one of multiple nasal spray approaches but unlikely to be as durable as effective nasal vaccines,” said Eric Topol, MD, a professor of molecular medicine and executive vice president of Scripps Research in La Jolla, Calif. Dr. Topol is also editor-in-chief of Medscape, WebMD’s sister site for medical professionals.
“The sprays generally require multiple doses per day, whereas a single dose of a nasal vaccine may protect for months,” he said.
“Both have the allure of being variant-proof,” Dr. Topol added.
Thinking small
Many laboratories are shifting from treatments using monoclonal antibodies to treatments using smaller antibody fragments called “nanobodies” because they are more cost-effective and are able to last longer in storage, Dr. Mäkelä and colleagues noted.
Several of these nanobodies have shown promise against viruses in cell culture or animal models, including as an intranasal preventive treatment for SARS-CoV-2.
One of these smaller antibodies, for example, is being developed from llamas; another comes from experiments with yeast to develop synthetic nanobodies; and in a third case, researchers isolated nanobodies from llamas and from mice and showed they could neutralize the SARS-CoV-2 virus.
These nanobodies and TriSb92 target a specific part of the coronavirus spike protein called the receptor-binding domain (RBD). The RBD is where the coronavirus attaches to cells in the body. These agents essentially trick the virus by changing the structure of the outside of cells, so they look like a virus has already fused to them. This way, the virus moves on.
Key findings
The researchers compared mice treated with TriSb92 before and after exposure to SARS-CoV-2. When given in advance, none of the treated mice had SARS-CoV-2 RNA in their lungs, while untreated mice in the comparison group had “abundant” levels.
Other evidence of viral infection showed similar differences between treated and untreated mice in the protective lining of cells called the epithelium inside the nose, nasal mucosa, and airways.
Similarly, when given 2 or 4 hours after SARS-CoV-2 had already infected the epithelium, TriSb92 was linked to a complete lack of the virus’s RNA in the lungs.
It was more effective against the virus, though, when given before infection rather than after, “perhaps due to the initial establishment of the infection,” the researchers note.
The company led by Dr. Mäkelä is now working to secure funding for clinical trials of TriSb92 in humans.
A version of this article first appeared on WebMD.com.
FROM NATURE COMMUNICATIONS
Song stuck in your head? What earworms reveal about health
If Miley Cyrus has planted “Flowers” in your head, rest assured you’re not alone.
An earworm – a bit of music you can’t shake from your brain – happens to almost everyone.
The culprit is typically a song you’ve heard repeatedly with a strong rhythm and melody (like Miley’s No. 1 hit this year).
It pops into your head and stays there, unbidden and often unwanted. As you fish for something new on Spotify, there’s always a chance that a catchy hook holds an earworm.
“A catchy tune or melody is the part of a song most likely to get stuck in a person’s head, often a bit from the chorus,” said Elizabeth H. Margulis, PhD, a professor at Princeton (N.J.) University and director of its music cognition lab. The phenomenon, which has been studied since 1885 (way before earbuds), goes by such names as stuck song syndrome, sticky music, musical imagery repetition, intrusive musical imagery, or the semi-official term, involuntary musical imagery, or INMI.
Research confirms how common it is. A 2020 study of American college students found that 97% had experienced an earworm in the past month, similar to findings from a larger Finnish survey done more than 10 years ago.
One in five people had experienced an earworm more than once a day, the study found. The typical length was 10-30 minutes, though 8.5% said theirs lasted more than 3 hours. Levels of “distress and interference” that earworms caused were mostly “mild to moderate.”
Some 86% said they tried to stop it – most frequently by distraction, like talking to a friend or listening to another song.
If music is important to you, your earworms are more likely to last longer and be harder to control, earlier research found. And women are thought to be more likely to have them.
“Very musical people may have more earworms because it’s easy for them to conjure up a certain tune,” said David Silbersweig, MD, chairman of the department of psychiatry and codirector of the Institute for the Neurosciences at Brigham and Women’s Hospital in Boston.
Moreover, people who lack “psychological flexibility” may find earworms more bothersome. The more they try to avoid or control intrusive thoughts (or songs), the more persistent those thoughts become.
“This is consistent with OCD (obsessive-compulsive disorder) research on the paradoxical effect of thought suppression,” the authors of the 2020 study wrote. In fact, people who report very annoying or stressful earworms are more likely to have obsessive-compulsive symptoms.
That makes them worth a closer look.
Digging for the source of earworms
Scientists trace earworms to the auditory cortex in the temporal lobe of the brain, which controls how you perceive music, as well as deep temporal lobe areas that are responsible for retrieving memories. Your amygdala and ventral striatum, parts of your brain that involve emotion, also tie into the making of an earworm.
MRI experiments found that “INMI is a common internal experience recruiting brain networks involved in perception, emotions, memory and spontaneous thoughts,” a 2015 paper in Consciousness and Cognition reported.
These brain networks work in tandem if you connect a song to an emotional memory – that’s when you’re more likely to experience it as an earworm. The “loop” of music you’ll hear in your head is usually a 20-second snippet.
Think of it as a “cognitive itch,” as researchers from the Netherlands put it. An earworm can be triggered by associating a song with a specific situation or emotion. Trying to suppress it just reminds you it’s there, “scratching” the itch and making it worse. “The more one tries to suppress the songs, the more their impetus increases, a mental process known as ironic process theory,” they wrote.
“It’s also worth pointing out that earworms don’t always occur right after a song ends,” said Michael K. Scullin, PhD, an associate professor of psychology and neuroscience at Baylor University in Waco, Tex. “Sometimes they only occur many hours later, and sometimes the earworm isn’t the song you were most recently listening to.”
These processes aren’t fully understood, he said, “but they likely represent memory consolidation mechanisms; that is, the brain trying to reactivate and stabilize musical memories.” Kind of like switching “radio stations” in your head.
When to worry
Earworms are most often harmless. “They’re part of a healthy brain,” said Dr. Silbersweig. But in rare cases, they indicate certain medical conditions. People with OCD, for example, have been shown to have earworms during times of stress. If this is the case, cognitive-behavioral therapy as well as some antidepressants may help.
Take an earworm seriously if it’s linked to other symptoms, said Elaine Jones, MD, a neurologist in Hilton Head, S.C., and a fellow of the American Academy of Neurology. Those symptoms could include “loss of consciousness or confusion, visual loss or changes, speech arrest, tremors of arms or legs,” she said.
“Most worrisome would be a seizure, but other causes could include a migraine aura. In a younger person, less than 20 years old, this kind of earworm could indicate a psychiatric condition like schizophrenia.” Drug toxicity or brain damage can also present with earworms.
Her bottom line: “If an earworm is persistent for more than 24 hours, or if it is associated with the other symptoms mentioned above, it would be important to reach out to your primary care doctor to ensure that nothing more serious is going on,” said Dr. Jones. With no other symptoms, “it is more likely to be just an earworm.”
Japanese research also indicates that an earworm that lasts for several hours in a day can be linked to depression. If a person has symptoms such as low mood, insomnia, and loss of appetite, along with earworms that last several hours a day, treatment may help.
There’s another category called “musical hallucinations” – where the person thinks they are actually hearing music, which could be a symptom of depression, although scientists don’t know for sure. The drug vortioxetine, which may help boost serotonin in the brain, has shown some promise in reducing earworms.
Some research has shown that diseases that damage the auditory pathway in the brain have a link to musical hallucinations.
How to stop a simple earworm
Here are six easy ways to make it stop:
- Mix up your playlist. “Listening to songs repeatedly does increase the likelihood that they’ll get stuck,” said Dr. Margulis.
- Take breaks from your tunes throughout the day. “Longer listening durations are more likely to lead to earworms,” Dr. Scullin said.
- Use your feet. Walk or tap along at a different tempo than the beat of your earworm. This will interrupt your memory of the tempo and can help chase away the earworm.
- Stick with that song. “Listen to a song all the way through,” said Dr. Silbersweig. If you only listen to snippets of a song, the so-called Zeigarnik effect can take hold. That’s the brain’s tendency to remember things that are interrupted more easily than completed things.
- Distract yourself. Lose yourself in a book, a movie, your work, or a hobby that requires concentration. “Redirecting attention to an absorbing task can be an effective way to dislodge an earworm,” said Dr. Margulis.
- Chew gum. Research shows that the action of chewing interferes with repetitive memories and stops your mind from “scanning” a song. Then enjoy the sound of silence!
A version of this article first appeared on WebMD.com.
High-dose prophylactic anticoagulation benefits patients with COVID-19 pneumonia
High-dose prophylactic anticoagulation or therapeutic anticoagulation reduced de novo thrombosis in patients with hypoxemic COVID-19 pneumonia, based on data from 334 adults.
Patients with hypoxemic COVID-19 pneumonia are at increased risk of thrombosis and anticoagulation-related bleeding; therefore, data to identify the lowest effective anticoagulant dose are needed, wrote Vincent Labbé, MD, of Sorbonne University, Paris, and colleagues.
Previous studies of different anticoagulation strategies for noncritically ill and critically ill patients with COVID-19 pneumonia have shown contrasting results, but some institutions recommend a high-dose regimen in the wake of data showing macrovascular thrombosis in patients with COVID-19 who were treated with standard anticoagulation, the authors wrote.
However, no previously published studies have compared the effectiveness of the three anticoagulation strategies: high-dose prophylactic anticoagulation (HD-PA), standard-dose prophylactic anticoagulation (SD-PA), and therapeutic anticoagulation (TA), they said.
In the open-label Anticoagulation COVID-19 (ANTICOVID) trial, published in JAMA Internal Medicine, the researchers identified consecutively hospitalized adults aged 18 years and older being treated for hypoxemic COVID-19 pneumonia in 23 centers in France between April 2021 and December 2021.
The patients were randomly assigned to SD-PA (116 patients), HD-PA (111 patients), and TA (112 patients) using low-molecular-weight heparin for 14 days, or until either hospital discharge or weaning from supplemental oxygen for 48 consecutive hours, whichever outcome occurred first. The HD-PA patients received two times the SD-PA dose. The mean age of the patients was 58.3 years, and approximately two-thirds were men; race and ethnicity data were not collected. Participants had no macrovascular thrombosis at the start of the study.
The primary outcomes were all-cause mortality and time to clinical improvement (defined as the time from randomization to a 2-point improvement on a 7-category respiratory function scale).
The secondary outcome was a combination of safety and efficacy at day 28 that included a composite of thrombosis (ischemic stroke, noncerebrovascular arterial thrombosis, deep venous thrombosis, pulmonary artery thrombosis, and central venous catheter–related deep venous thrombosis), major bleeding, or all-cause death.
For the primary outcome, results were similar among the groups; HD-PA had no significant benefit over SD-PA or TA. All-cause death rates for SD-PA, HD-PA, and TA patients were 14%, 12%, and 13%, respectively. The time to clinical improvement for the three groups was approximately 8 days, 9 days, and 8 days, respectively. Results for the primary outcome were consistent across all prespecified subgroups.
However, HD-PA was associated with a significant fourfold reduced risk of de novo thrombosis compared with SD-PA (5.5% vs. 20.2%) with no observed increase in major bleeding. TA was not associated with any significant improvement in primary or secondary outcomes compared with HD-PA or SD-PA.
The current study findings of no improvement in survival or disease resolution in patients with a higher anticoagulant dose reflect data from previous studies, the researchers wrote in their discussion. “Our study results together with those of previous RCTs support the premise that the role of microvascular thrombosis in worsening organ dysfunction may be narrower than estimated,” they said.
The findings were limited by several factors including the open-label design and the relatively small sample size, the lack of data on microvascular (vs. macrovascular) thrombosis at baseline, and the predominance of the Delta variant of COVID-19 among the study participants, which may have contributed to a lower mortality rate, the researchers noted.
However, given the significant reduction in de novo thrombosis, the results support the routine use of HD-PA in patients with severe hypoxemic COVID-19 pneumonia, they concluded.
Results inform current clinical practice
Over the course of the COVID-19 pandemic, “Patients hospitalized with COVID-19 manifested the highest risk for thromboembolic complications, especially patients in the intensive care setting,” and early reports suggested that standard prophylactic doses of anticoagulant therapy might be insufficient to prevent thrombotic events, Richard C. Becker, MD, of the University of Cincinnati, and Thomas L. Ortel, MD, of Duke University, Durham, N.C., wrote in an accompanying editorial.
“Although there have been several studies that have investigated the role of anticoagulant therapy in hospitalized patients with COVID-19, this is the first study that specifically compared a standard, prophylactic dose of low-molecular-weight heparin to a ‘high-dose’ prophylactic regimen and to a full therapeutic dose regimen,” Dr. Ortel said in an interview.
“Given the concerns about an increased thrombotic risk with prophylactic dose anticoagulation, and the potential bleeding risk associated with a full therapeutic dose of anticoagulation, this approach enabled the investigators to explore the efficacy and safety of an intermediate dose between these two extremes,” he said.
In the current study, high-dose prophylactic anticoagulation significantly reduced de novo thrombosis, “a finding that was not observed in other studies investigating anticoagulant therapy in hospitalized patients with severe COVID-19,” Dr. Ortel told this news organization. “Much initial concern about progression of disease in patients hospitalized with severe COVID-19 focused on the role of microvascular thrombosis, which appears to be less important in this process, or, alternatively, less responsive to anticoagulant therapy.”
The clinical takeaway from the study, Dr. Ortel said, is the decreased risk for venous thromboembolism with a high-dose prophylactic anticoagulation strategy compared with a standard-dose prophylactic regimen for patients hospitalized with hypoxemic COVID-19 pneumonia, “leading to an improved net clinical outcome.”
Looking ahead, “Additional research is needed to determine whether a higher dose of prophylactic anticoagulation would be beneficial for patients hospitalized with COVID-19 pneumonia who are not in an intensive care unit setting,” Dr. Ortel said. Studies are needed to determine whether therapeutic interventions are equally beneficial in patients with different coronavirus variants, since most patients in the current study were infected with the Delta variant, he added.
The study was supported by LEO Pharma. Dr. Labbé disclosed grants from LEO Pharma during the study and fees from AOP Health unrelated to the current study.
Dr. Becker disclosed personal fees from Novartis Data Safety Monitoring Board, Ionis Data Safety Monitoring Board, and Basking Biosciences Scientific Advisory Board unrelated to the current study. Dr. Ortel disclosed grants from the National Institutes of Health, Instrumentation Laboratory, Stago, and Siemens; contract fees from the Centers for Disease Control and Prevention; and honoraria from UpToDate unrelated to the current study.
A version of this article originally appeared on Medscape.com.
In the current study, , a finding that was not observed in other studies investigating anticoagulant therapy in hospitalized patients with severe COVID-19,” Dr. Ortel told this news organization. “Much initial concern about progression of disease in patients hospitalized with severe COVID-19 focused on the role of microvascular thrombosis, which appears to be less important in this process, or, alternatively, less responsive to anticoagulant therapy.”
The clinical takeaway from the study, Dr. Ortel said, is the decreased risk for venous thromboembolism with a high-dose prophylactic anticoagulation strategy compared with a standard-dose prophylactic regimen for patients hospitalized with hypoxemic COVID-19 pneumonia, “leading to an improved net clinical outcome.”
Looking ahead, “Additional research is needed to determine whether a higher dose of prophylactic anticoagulation would be beneficial for patients hospitalized with COVID-19 pneumonia who are not in an intensive care unit setting,” Dr. Ortel said. Studies are needed to determine whether therapeutic interventions are equally beneficial in patients with different coronavirus variants, since most patients in the current study were infected with the Delta variant, he added.
The study was supported by LEO Pharma. Dr. Labbé disclosed grants from LEO Pharma during the study and fees from AOP Health unrelated to the current study.
Dr. Becker disclosed personal fees from Novartis Data Safety Monitoring Board, Ionis Data Safety Monitoring Board, and Basking Biosciences Scientific Advisory Board unrelated to the current study. Dr. Ortel disclosed grants from the National Institutes of Health, Instrumentation Laboratory, Stago, and Siemens; contract fees from the Centers for Disease Control and Prevention; and honoraria from UpToDate unrelated to the current study.
A version of this article originally appeared on Medscape.com.
High-dose prophylactic anticoagulation or therapeutic anticoagulation reduced de novo thrombosis in patients with hypoxemic COVID-19 pneumonia, based on data from 334 adults.
Patients with hypoxemic COVID-19 pneumonia are at increased risk of both thrombosis and anticoagulation-related bleeding; therefore, data to identify the lowest effective anticoagulant dose are needed, wrote Vincent Labbé, MD, of Sorbonne University, Paris, and colleagues.
Previous studies of different anticoagulation strategies for noncritically ill and critically ill patients with COVID-19 pneumonia have shown contrasting results, but some institutions recommend a high-dose regimen in the wake of data showing macrovascular thrombosis in patients with COVID-19 who were treated with standard anticoagulation, the authors wrote.
However, no previously published studies have compared the effectiveness of the three anticoagulation strategies: high-dose prophylactic anticoagulation (HD-PA), standard-dose prophylactic anticoagulation (SD-PA), and therapeutic anticoagulation (TA), they said.
In the open-label Anticoagulation COVID-19 (ANTICOVID) trial, published in JAMA Internal Medicine, the researchers identified consecutively hospitalized adults aged 18 years and older being treated for hypoxemic COVID-19 pneumonia in 23 centers in France between April 2021 and December 2021.
The patients were randomly assigned to SD-PA (116 patients), HD-PA (111 patients), and TA (112 patients) using low-molecular-weight heparin for 14 days, or until either hospital discharge or weaning from supplemental oxygen for 48 consecutive hours, whichever outcome occurred first. The HD-PA patients received two times the SD-PA dose. The mean age of the patients was 58.3 years, and approximately two-thirds were men; race and ethnicity data were not collected. Participants had no macrovascular thrombosis at the start of the study.
The primary outcomes were all-cause mortality and time to clinical improvement (defined as the time from randomization to a 2-point improvement on a 7-category respiratory function scale).
The secondary outcome was a combination of safety and efficacy at day 28 that included a composite of thrombosis (ischemic stroke, noncerebrovascular arterial thrombosis, deep venous thrombosis, pulmonary artery thrombosis, and central venous catheter–related deep venous thrombosis), major bleeding, or all-cause death.
For the primary outcome, results were similar among the groups; HD-PA had no significant benefit over SD-PA or TA. All-cause death rates for SD-PA, HD-PA, and TA patients were 14%, 12%, and 13%, respectively. The time to clinical improvement for the three groups was approximately 8 days, 9 days, and 8 days, respectively. Results for the primary outcome were consistent across all prespecified subgroups.
However, HD-PA was associated with a significant fourfold reduced risk of de novo thrombosis compared with SD-PA (5.5% vs. 20.2%) with no observed increase in major bleeding. TA was not associated with any significant improvement in primary or secondary outcomes compared with HD-PA or SD-PA.
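As a quick sanity check, the reported event rates do translate into the stated fourfold reduction. A minimal sketch of that arithmetic follows; the rates are from the article, while derived quantities such as the number needed to treat are illustrative and not reported by the study.

```python
# Event rates for de novo thrombosis reported in the article.
sd_pa_rate = 0.202  # 20.2% with standard-dose prophylactic anticoagulation
hd_pa_rate = 0.055  # 5.5% with high-dose prophylactic anticoagulation

risk_ratio = sd_pa_rate / hd_pa_rate  # relative risk, SD-PA vs. HD-PA
arr = sd_pa_rate - hd_pa_rate         # absolute risk reduction
nnt = 1 / arr                         # number needed to treat (illustrative)

print(f"risk ratio: {risk_ratio:.1f}")        # ~3.7, consistent with "fourfold"
print(f"absolute risk reduction: {arr:.1%}")  # ~14.7 percentage points
print(f"NNT: {nnt:.0f}")
```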
The current study findings of no improvement in survival or disease resolution with a higher anticoagulant dose reflect data from previous studies, the researchers wrote in their discussion. “Our study results together with those of previous RCTs support the premise that the role of microvascular thrombosis in worsening organ dysfunction may be narrower than estimated,” they said.
The findings were limited by several factors including the open-label design and the relatively small sample size, the lack of data on microvascular (vs. macrovascular) thrombosis at baseline, and the predominance of the Delta variant of COVID-19 among the study participants, which may have contributed to a lower mortality rate, the researchers noted.
However, given the significant reduction in de novo thrombosis, the results support the routine use of HD-PA in patients with severe hypoxemic COVID-19 pneumonia, they concluded.
Results inform current clinical practice
Over the course of the COVID-19 pandemic, “Patients hospitalized with COVID-19 manifested the highest risk for thromboembolic complications, especially patients in the intensive care setting,” and early reports suggested that standard prophylactic doses of anticoagulant therapy might be insufficient to prevent thrombotic events, Richard C. Becker, MD, of the University of Cincinnati, and Thomas L. Ortel, MD, of Duke University, Durham, N.C., wrote in an accompanying editorial.
“Although there have been several studies that have investigated the role of anticoagulant therapy in hospitalized patients with COVID-19, this is the first study that specifically compared a standard, prophylactic dose of low-molecular-weight heparin to a ‘high-dose’ prophylactic regimen and to a full therapeutic dose regimen,” Dr. Ortel said in an interview.
“Given the concerns about an increased thrombotic risk with prophylactic dose anticoagulation, and the potential bleeding risk associated with a full therapeutic dose of anticoagulation, this approach enabled the investigators to explore the efficacy and safety of an intermediate dose between these two extremes,” he said.
In the current study, the reduced risk for de novo thrombosis with the high-dose prophylactic regimen was “a finding that was not observed in other studies investigating anticoagulant therapy in hospitalized patients with severe COVID-19,” Dr. Ortel told this news organization. “Much initial concern about progression of disease in patients hospitalized with severe COVID-19 focused on the role of microvascular thrombosis, which appears to be less important in this process, or, alternatively, less responsive to anticoagulant therapy.”
The clinical takeaway from the study, Dr. Ortel said, is the decreased risk for venous thromboembolism with a high-dose prophylactic anticoagulation strategy compared with a standard-dose prophylactic regimen for patients hospitalized with hypoxemic COVID-19 pneumonia, “leading to an improved net clinical outcome.”
Looking ahead, “Additional research is needed to determine whether a higher dose of prophylactic anticoagulation would be beneficial for patients hospitalized with COVID-19 pneumonia who are not in an intensive care unit setting,” Dr. Ortel said. Studies are needed to determine whether therapeutic interventions are equally beneficial in patients with different coronavirus variants, since most patients in the current study were infected with the Delta variant, he added.
The study was supported by LEO Pharma. Dr. Labbé disclosed grants from LEO Pharma during the study and fees from AOP Health unrelated to the current study.
Dr. Becker disclosed personal fees from Novartis Data Safety Monitoring Board, Ionis Data Safety Monitoring Board, and Basking Biosciences Scientific Advisory Board unrelated to the current study. Dr. Ortel disclosed grants from the National Institutes of Health, Instrumentation Laboratory, Stago, and Siemens; contract fees from the Centers for Disease Control and Prevention; and honoraria from UpToDate unrelated to the current study.
A version of this article originally appeared on Medscape.com.
New antiobesity drugs will benefit many. Is that bad?
In a recent New England Journal of Medicine op-ed, some economists opined that their coverage would be disastrous for Medicare.
Among their concerns? The drugs need to be taken long term (just like drugs for any other chronic condition). The new drugs are more expensive than the old drugs (just like new drugs for any other chronic condition). Lots of people will want to take them (just like highly effective drugs for any other chronic condition that has a significant quality-of-life or clinical impact). The U.K. recommended that they be covered only for 2 years (unlike drugs for any other chronic condition). And the Institute for Clinical and Economic Review (ICER), on which they lean heavily, decided that $13,618 annually was too expensive for a medication that leads to sustained 15%-20% weight losses and their consequent benefits.
As a clinician working with patients who sustain those levels of weight loss, I find that conclusion confusing. Whether by way of lifestyle alone, or more often by way of lifestyle efforts plus medication or lifestyle efforts plus surgery, the benefits reported and seen with 15%-20% weight losses are almost uniformly huge. Patients are regularly seen discontinuing or reducing the dosage of multiple medications as a result of improvements to multiple weight-responsive comorbidities, and they also report objective benefits to mood, sleep, mobility, pain, and energy. Losing that much weight changes lives. Not to mention the impact that that degree of loss has on the primary prevention of so many diseases, including plausible reductions in many common cancers – reductions that have been shown to occur after surgery-related weight losses and for which there’s no plausible reason to imagine that they wouldn’t occur with pharmaceutical-related losses.
Are those discussions found in the NEJM op-ed or in the ICER report? Well, yes, sort of. However, in the NEJM op-ed, the word “prevention” isn’t used once, and unlike with oral hypoglycemics or antihypertensives, the authors state that with antiobesity medications, additional research is needed to determine whether medication-induced changes to A1c, blood pressure, and waist circumference would have clinical benefits: “Antiobesity medications have been shown to improve the surrogate end points of weight, glycated hemoglobin levels, systolic blood pressure, and waist circumference. Long-term studies are needed, however, to clarify how medication-induced changes in these surrogate markers translate to health outcomes.”
Primary prevention is mentioned in the ICER review, but in the “limitations” section where the authors explain that they didn’t include it in their modeling: “The long-term benefits of preventing other comorbidities including cancer, chronic kidney disease, osteoarthritis, and sleep apnea were not explicitly modeled in the base case.”
And they pretended that the impact on existing weight-responsive comorbidities mostly didn’t exist, too: “To limit the complexity of the cost-effectiveness model and to prevent double-counting of treatment benefits, we limited the long-term effects of treatments for weight management to cardiovascular risk and delays in the onset and/or diagnosis of diabetes mellitus.”
As far as cardiovascular disease (CVD) benefits go, you might have thought that it would be a slam dunk on that basis alone, at least according to a simple back-of-the-envelope math exercise presented at a recent American College of Cardiology conference. That analysis applied the semaglutide treatment-group weight changes from the STEP 1 trial to estimate the population impact on weight and obesity in 30- to 74-year-olds without prior CVD, estimating 10-year CVD risks with the BMI-based Framingham CVD risk scores. By its accounting, semaglutide treatment in eligible American patients has the potential to prevent over 1.6 million CVD events over 10 years.
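The shape of that back-of-the-envelope exercise can be sketched as below. Every input is a hypothetical placeholder chosen only to land near the reported order of magnitude; none of these figures come from the ACC presentation or the STEP 1 trial.

```python
# Hypothetical population-impact arithmetic. The eligible-population size and
# the before/after 10-year Framingham-style risks below are placeholders, not
# numbers from the ACC presentation or the STEP 1 trial.
eligible_adults = 90_000_000  # hypothetical treatment-eligible US adults
baseline_10y_risk = 0.080     # hypothetical mean 10-year CVD risk, untreated
treated_10y_risk = 0.062      # hypothetical risk after sustained weight loss

events_prevented = eligible_adults * (baseline_10y_risk - treated_10y_risk)
print(f"CVD events prevented over 10 years: {events_prevented:,.0f}")
# On the order of the ~1.6 million events cited above.
```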
Finally, even putting aside the exceedingly narrow base case that ICER itself acknowledges, what lifestyle-alone studies could ICER possibly be comparing with drug efficacy? And what does “alone” mean? Does “alone” mean with a months- or years-long interprofessional behavioral program? Does “alone” mean by way of diet books? Does “alone” mean by way of simply “moving more and eating less”? I’m not aware of robust studies demonstrating any long-term meaningful, predictable, reproducible, durable weight loss outcomes for any lifestyle-only approach, intensive or otherwise.
It’s difficult for me to imagine a situation in which any drug other than an antiobesity drug would be found to have too many benefits to include in a cost-effectiveness analysis, but where you’d be comfortable running that analysis anyhow, coming out against recommending the drug, and fearmongering about its use.
But then again, systemic weight bias is a hell of a drug.
Dr. Freedhoff is associate professor, department of family medicine, University of Ottawa, and medical director, Bariatric Medical Institute, Ottawa. He disclosed ties with Constant Health and Novo Nordisk, and has shared opinions via Weighty Matters and social media.
A version of this article originally appeared on Medscape.com.
Subclinical CAD by CT predicts MI risk, with or without stenoses
About half of middle-aged adults in the community without cardiovascular (CV) symptoms have coronary atherosclerosis by CT angiography (CTA) that puts them at substantial risk for myocardial infarction (MI), suggests a prospective cohort study.
The 10% of participants who had subclinical disease considered obstructive at CTA showed a ninefold increased risk for MI over several years. Obstructive disease seemed to elevate risk more than subclinical disease that wasn’t obstructive but still considered extensive within the coronary arteries.
The findings, based on a Copenhagen General Population Study cohort, are new for CTA but consistent with research based on coronary artery calcium (CAC) scores and other ways to assess CV risk, say researchers.
Although all participants underwent CTA, such imaging isn’t used in the general population for atherosclerosis screening. But the findings may have implications for “opportunistic screening” for subclinical coronary disease at CTA conducted for other reasons, notes the study’s report, published online in the Annals of Internal Medicine.
“Identification of luminal obstructive or extensive subclinical coronary atherosclerosis” could potentially provide “clinically relevant, incremental risk assessment” in nonischemic patients who undergo cardiac CT or electrocardiogram-gated chest CT before procedures such as arrhythmia ablation or valve repair, it states.
Such patients found with subclinical coronary atherosclerosis might potentially “benefit from referral to intensified cardiovascular primary prevention therapy,” write the authors, led by Andreas Fuchs, MD, PhD, Copenhagen University Hospital-Rigshospitalet.
The group acknowledges the findings may not entirely apply to a non-Danish population.
A screening role for CTA?
Whether CTA has a role to play in adults without symptoms “is a big, open question in the field right now,” observed Ron Blankstein, MD, who was not involved in the current analysis, for this news organization.
Most population studies of CV risk prediction, such as MESA, have looked at CAC scores, not CTA, and have shown that “the more plaque individuals have, the higher the risk.” The current findings are similar but novel in coming from coronary CTA in a large asymptomatic community population, said Dr. Blankstein, who is director of cardiac CT at Brigham and Women’s Hospital, Boston.
“It’s possible that patients who have obstructive plaque in general tend to have a larger amount of plaque as well,” he said. So, while the study suggests that “the more plaque individuals have, the worse their overall risk,” it also shows that the risk “is enhanced even more if they have obstructive disease.”
The Danish cohort analysis “provides a unique opportunity to study the contemporary natural history of coronary artery disease in the absence of intervention,” notes an accompanying editorial.
For example, both patients and clinicians were blinded to CTA results, and CV preventive therapies weren’t common, observe Michael McDermott, MBChB, and David E. Newby, DM, PhD, of the BHF Centre for Cardiovascular Science, University of Edinburgh.
The analysis suggests that subclinical coronary disease that is obstructive predicts MI risk more strongly than extensive coronary disease, they note, and may be present in two-thirds of MI patients. “This contrasts with symptomatic populations, where nonobstructive disease accounts for most future myocardial infarctions, presumably from plaque rupture.”
It also points to “strong associations between nonobstructive extensive disease and adverse plaque characteristics,” write Dr. McDermott and Dr. Newby. “This underscores the major importance of plaque burden” for the prediction of coronary events.
Graded risk
The analysis included 9,533 persons aged 40 years and older with available CTA assessments and no known ischemic heart disease or symptoms.
Obstructive disease, defined as presence of a luminal stenosis of at least 50%, was seen in 10% and nonobstructive disease in 36% of the total cohort, the report states.
Disease occupying more than one-third of the coronary tree was considered extensive, and disease occupying less than one-third was considered nonextensive; these were seen in 10.5% and 35.8% of the cohort, respectively.
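The two thresholds described above can be expressed as a simple classifier. This is an illustrative sketch only; the function and the 15-segment coronary model are assumptions for the example, not the study's actual scoring method.

```python
# Illustrative sketch of the study's two CTA labels as described in the text:
# "obstructive" = any luminal stenosis of at least 50%; "extensive" = plaque
# in more than one-third of the coronary tree. The 15-segment model is an
# assumption for this sketch, not stated by the article.
def classify_cta(max_stenosis_pct: float, segments_with_plaque: int,
                 total_segments: int = 15) -> dict:
    return {
        "obstructive": max_stenosis_pct >= 50,
        "extensive": segments_with_plaque > total_segments / 3,
    }

print(classify_cta(max_stenosis_pct=60, segments_with_plaque=3))
# obstructive, nonextensive
print(classify_cta(max_stenosis_pct=30, segments_with_plaque=7))
# nonobstructive, extensive
```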
There were 71 MIs and 193 deaths over a median of 3.5 years. The adjusted relative risk for MI, compared with those without coronary atherosclerosis, was:
- 7.65 (95% confidence interval, 3.53-16.57) overall in patients with extensive disease.
- 8.28 (95% CI, 3.75-18.32) in those with obstructive but nonextensive disease.
- 9.19 (95% CI, 4.49-18.82) overall in those with obstructive disease.
- 12.48 (95% CI, 5.50-28.12) in those with both obstructive and extensive disease.
The adjusted RR for the composite of death or MI was also elevated in persons with extensive disease:
- 2.70 (95% CI, 1.72-4.25) in those with extensive but nonobstructive disease.
- 3.15 (95% CI, 2.05-4.83) in those with extensive and obstructive disease.
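For readers wanting to relate the point estimates above to their confidence intervals: on the log scale, a 95% CI is the estimate plus or minus 1.96 standard errors. The sketch below back-calculates the standard error from the reported 9.19 (4.49-18.82) figure; this is illustrative arithmetic only, not the study's adjusted regression model.

```python
import math

# Reported adjusted RR for MI with obstructive disease, with its 95% CI.
rr, ci_lo, ci_hi = 9.19, 4.49, 18.82

# Back-calculate the standard error of log(RR) from the CI width,
# then reconstruct the interval as exp(log(RR) +/- 1.96 * SE).
se = (math.log(ci_hi) - math.log(ci_lo)) / (2 * 1.96)
lo = math.exp(math.log(rr) - 1.96 * se)
hi = math.exp(math.log(rr) + 1.96 * se)
print(f"reconstructed 95% CI: {lo:.2f}-{hi:.2f}")  # close to 4.49-18.82
```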
“It’s one thing to show that the more plaque, the higher the risk,” Dr. Blankstein said. But “does the information ultimately lead to better outcomes? Do patients have fewer MIs or fewer deaths?” Several ongoing randomized trials are exploring these questions.
They include DANE-HEART (Computed Tomography Coronary Angiography for Primary Prevention), projected to enroll about 6,000 participants from the Copenhagen General Population Study cohort who have at least one CV risk factor, and SCOT-HEART 2 (second Computed Tomography Coronary Angiography for the Prevention of Myocardial Infarction), enrolling a similar cohort in Scotland.
The study was supported by grants from AP Møller og Hustru Chastine Mc-Kinney Møllers Fond, the Research Council of Rigshospitalet, and Danish Heart Foundation. Dr. Fuchs reports no relevant financial relationships. Disclosures for the other authors can be found here. Dr. Blankstein recently disclosed serving as a consultant to Amgen, Caristo Diagnostics, Novartis, and Silence Therapeutics. Disclosures for Dr. McDermott and Dr. Newby, who are SCOT-HEART 2 investigators, can be found here.
A version of this article originally appeared on Medscape.com.
About half of middle-aged adults in the community without cardiovascular (CV) symptoms have coronary atherosclerosis by CT angiography (CTA) that puts them at substantial risk for myocardial infarction (MI), suggests a prospective cohort study.
The 10% of participants who had subclinical disease considered obstructive at CTA showed a ninefold increased risk for MI over several years. Obstructive disease seemed to elevate risk more than subclinical disease that wasn’t obstructive but still considered extensive within the coronary arteries.
The findings, based on a Copenhagen General Population Study cohort, are new for CTA but consistent with research based on coronary artery calcium (CAC) scores and other ways to assess CV risk, say researchers.
Although all participants underwent CTA, such imaging isn’t used in the general population for atherosclerosis screening. But the findings may have implications for “opportunistic screening” for subclinical coronary disease at CTA conducted for other reasons, notes the study’s report, published online in the Annals of Internal Medicine.
“Identification of luminal obstructive or extensive subclinical coronary atherosclerosis” could potentially provide “clinically relevant, incremental risk assessment” in nonischemic patients who undergo cardiac CT or electrocardiogram-gated chest CT before procedures such as arrhythmia ablation or valve repair, it states.
Such patients found with subclinical coronary atherosclerosis might potentially “benefit from referral to intensified cardiovascular primary prevention therapy,” write the authors, led by Andreas Fuchs, MD, PhD, Copenhagen University Hospital-Rigshospitalet.
The group acknowledges the findings may not entirely apply to a non-Danish population.
A screening role for CTA?
Whether CTA has a role to play in adults without symptoms “is a big, open question in the field right now,” observed Ron Blankstein, MD, who was not associated with the current analysis, for this news organization.
Most population studies of CV risk prediction, such as MESA, have looked at CAC scores, not CTA, and have shown that “the more plaque individuals have, the higher the risk.” The current findings are similar but novel in coming from coronary CTA in a large asymptomatic community population, said Dr. Blankstein, who is director of cardiac CT at Brigham and Women’s Hospital, Boston.
“It’s possible that patients who have obstructive plaque in general tend to have a larger amount of plaque as well,” he said. So, while the study suggests that “the more plaque individuals have, the worse their overall risk,” it also shows that the risk “is enhanced even more if they have obstructive disease.”
The Danish cohort analysis “provides a unique opportunity to study the contemporary natural history of coronary artery disease in the absence of intervention,” notes an accompanying editorial.
For example, both patients and clinicians were blinded to CTA results, and CV preventive therapies weren’t common, observe Michael McDermott, MBChB, and David E. Newby, DM, PhD, of the BHF Centre for Cardiovascular Science, University of Edinburgh.
The analysis suggests that subclinical coronary disease that is obstructive predicts MI risk more strongly than extensive coronary disease, they note, and may be present in two-thirds of MI patients. “This contrasts with symptomatic populations, where nonobstructive disease accounts for most future myocardial infarctions, presumably from plaque rupture.”
It also points to “strong associations between nonobstructive extensive disease and adverse plaque characteristics,” write Dr. McDermott and Dr. Newby. “This underscores the major importance of plaque burden” for the prediction of coronary events.
Graded risk
The analysis included 9,533 persons aged 40 years or older, without known ischemic heart disease or symptoms, who had available CTA assessments.
Obstructive disease, defined as presence of a luminal stenosis of at least 50%, was seen in 10% and nonobstructive disease in 36% of the total cohort, the report states.
Disease occupying more than one-third of the coronary tree was considered extensive and disease occupying less than one-third nonextensive; these were seen in 10.5% and 35.8% of the cohort, respectively.
There were 71 MIs and 193 deaths over a median of 3.5 years. The adjusted relative risk for MI, compared with those without coronary atherosclerosis, was:
- 7.65 (95% confidence interval, 3.53-16.57) overall in patients with extensive disease.
- 8.28 (95% CI, 3.75-18.32) in those with obstructive but nonextensive disease.
- 9.19 (95% CI, 4.49-18.82) overall in those with obstructive disease.
- 12.48 (95% CI, 5.50-28.12) in those with both obstructive and extensive disease.
The adjusted RR for the composite of death or MI was also elevated in persons with extensive disease:
- 2.70 (95% CI, 1.72-4.25) in those with extensive but nonobstructive disease.
- 3.15 (95% CI, 2.05-4.83) in those with extensive and obstructive disease.
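The adjusted estimates above come from multivariable regression models, but the arithmetic behind an unadjusted relative risk and its 95% confidence interval is straightforward. A minimal sketch in Python, using invented 2×2 counts rather than the study's data:

```python
import math

def relative_risk(events_exp, n_exp, events_unexp, n_unexp, z=1.96):
    """Unadjusted relative risk with a Wald-type CI on the log scale."""
    rr = (events_exp / n_exp) / (events_unexp / n_unexp)
    # Standard error of log(RR) for a 2x2 table
    se = math.sqrt(1 / events_exp - 1 / n_exp + 1 / events_unexp - 1 / n_unexp)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Invented counts, not the study's data:
rr, lo, hi = relative_risk(20, 1000, 20, 8000)
print(f"RR {rr:.2f} (95% CI, {lo:.2f}-{hi:.2f})")
```

The published figures additionally adjust for covariates, which requires a regression model (for example, Poisson or log-binomial) rather than this closed-form calculation.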
“It’s one thing to show that the more plaque, the higher the risk,” Dr. Blankstein said. But “does the information ultimately lead to better outcomes? Do patients have fewer MIs or fewer deaths?” Several ongoing randomized trials are exploring these questions.
They include DANE-HEART (Computed Tomography Coronary Angiography for Primary Prevention), projected to enroll about 6,000 participants from the Copenhagen General Population Study cohort who have at least one CV risk factor, and SCOT-HEART 2 (second Computed Tomography Coronary Angiography for the Prevention of Myocardial Infarction), enrolling a similar cohort in Scotland.
The study was supported by grants from AP Møller og Hustru Chastine Mc-Kinney Møllers Fond, the Research Council of Rigshospitalet, and Danish Heart Foundation. Dr. Fuchs reports no relevant financial relationships. Disclosures for the other authors can be found here. Dr. Blankstein recently disclosed serving as a consultant to Amgen, Caristo Diagnostics, Novartis, and Silence Therapeutics. Disclosures for Dr. McDermott and Dr. Newby, who are SCOT-HEART 2 investigators, can be found here.
A version of this article originally appeared on Medscape.com.
‘Excess’ deaths surging, but why?
This transcript has been edited for clarity.
“Excess deaths.” You’ve heard the phrase countless times by now. It is one of the myriad previously esoteric epidemiology terms that the pandemic brought squarely into the zeitgeist.
As a sort of standard candle of the performance of a state or a region or a country in terms of health care, it has a lot of utility – if for nothing more than Monday-morning quarterbacking. But this week, I want to dig in on the concept a bit because, according to a new study, the excess death gap between the United States and Western Europe has never been higher.
You might imagine that the best way to figure this out is for some group of intelligent people to review each death and decide, somehow, whether it was expected or not. But aside from being impractical, this would end up being somewhat subjective. That older person who died from pneumonia – was that an expected death? Could it have been avoided?
Rather, the calculation of excess mortality relies on large numbers and statistical inference to compare an expected number of deaths with those that are observed.
The difference is excess mortality, even if you can never be sure whether any particular death was expected or not.
As always, however, the devil is in the details. What data do you use to define the expected number of deaths?
There are options here. Probably the most straightforward analysis uses past data from the country of interest. You look at annual deaths over some historical period of time and compare those numbers with the rates today. Two issues need to be accounted for here: population growth (a larger population will have more deaths, so you need to scale the historical rates to the current population size) and demographic shifts (an older or more male population will have more deaths, so you need to adjust for that as well).
But provided you take care of those factors, you can estimate fairly well how many deaths you can expect to see in any given period of time.
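As a rough illustration of this baseline approach (all numbers invented, and the demographic adjustment omitted for brevity), the calculation reduces to scaling a historical death rate to today's population:

```python
# Historical-baseline sketch: scale a past death *rate* to the current
# population, then compare observed deaths with that expectation.
# All numbers are invented for illustration.
historical_deaths = 2_850_000
historical_population = 325_000_000
current_population = 332_000_000

baseline_rate = historical_deaths / historical_population
expected_deaths = baseline_rate * current_population

observed_deaths = 3_460_000
excess = observed_deaths - expected_deaths
print(f"Expected: {expected_deaths:,.0f}  Excess: {excess:,.0f}")
```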
Still, you should see right away that excess mortality is a relative concept. If you think that, just perhaps, the United States has some systematic failure to deliver care that has been stable and persistent over time, you wouldn’t capture that failing in an excess mortality calculation that uses U.S. historical data as the baseline.
The best way to get around that is to use data from other countries, and that’s just what this article – a rare single-author piece by Patrick Heuveline – does, calculating excess deaths in the United States by standardizing our mortality rates to the five largest Western European countries: the United Kingdom, France, Germany, Italy, and Spain.
Controlling for the differences in the demographics of that European population, here is the expected number of deaths in the United States over the past 5 years.
Note that there is a small uptick in expected deaths in 2020, reflecting the pandemic, which returns to baseline levels by 2021. This is because that’s what happened in Europe; by 2021, the excess mortality due to COVID-19 was quite low.
Here are the actual deaths in the US during that time.
Highlighted here in green, then, is the excess mortality over time in the United States.
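The cross-country standardization described above can be sketched the same way: apply reference (European) age-specific death rates to the U.S. age structure to get the deaths expected if the United States had European mortality. The age bands and rates below are invented for illustration:

```python
# Direct standardization with a reference population's rates.
# All figures are invented, not the study's data.
us_population_by_age = {"0-39": 170e6, "40-64": 105e6, "65+": 57e6}
european_rates = {"0-39": 0.0005, "40-64": 0.004, "65+": 0.04}  # deaths/person-year

expected = sum(us_population_by_age[age] * european_rates[age]
               for age in us_population_by_age)
observed = 3_400_000  # invented
excess = observed - expected
print(f"Expected {expected:,.0f}, excess {excess:,.0f} ({excess / observed:.0%} of deaths)")
```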
There are some fascinating and concerning findings here.
First of all, you can see that even before the pandemic, the United States has an excess mortality problem. This is not entirely a surprise; we’ve known that so-called “deaths of despair,” those due to alcohol abuse, drug overdoses, and suicide, are at an all-time high and tend to affect a “prime of life” population that would not otherwise be expected to die. In fact, fully 50% of the excess deaths in the United States occur in those between ages 15 and 64.
Excess deaths are also a concerning percentage of total deaths. In 2017, 17% of total deaths in the United States could be considered “excess.” By 2021, that number had more than doubled to 35%. Nearly 900,000 individuals in the United States died in 2021 who perhaps didn’t need to.
The obvious culprit to blame here is COVID, but COVID-associated excess deaths only explain about 50% of the excess we see in 2021. The rest reflect something even more concerning: a worsening of the failures of the past, perhaps exacerbated by the pandemic but not due to the virus itself.
Of course, we started this discussion acknowledging that the calculation of excess mortality is exquisitely dependent on how you model the expected number of deaths, and I’m sure some will take issue with the use of European numbers when applied to Americans. After all, Europe has, by and large, a robust public health service, socialized medicine, and healthcare that does not run the risk of bankrupting its citizens. How can we compare our outcomes to a place like that?
How indeed.
F. Perry Wilson, MD, MSCE, is an associate professor of medicine and director of Yale University’s Clinical and Translational Research Accelerator in New Haven, Conn. He reported no relevant conflicts of interest.
A version of this article originally appeared on Medscape.com.
A new way to gauge suicide risk?
Researchers found that social determinants of health (SDOH) are risk factors for suicide among U.S. veterans and that natural language processing (NLP) can be leveraged to extract SDOH information from unstructured data in the electronic health record (EHR).
“Since SDOH is overwhelmingly described in EHR notes, the importance of NLP-extracted SDOH can be very significant, meaning that NLP can be used as an effective method for epidemiological and public health study,” senior investigator Hong Yu, PhD, from Miner School of Information and Computer Sciences, University of Massachusetts Lowell, told this news organization.
Although the study was conducted among U.S. veterans, the results likely hold for the general population as well.
“The NLP methods are generalizable. The SDOH categories are generalizable. There may be some variations in terms of the strength of associations in NLP-extracted SDOH and suicide death, but the overall findings are generalizable,” Dr. Yu said.
The study was published online in JAMA Network Open.
Improved risk assessment
SDOH, which include factors such as socioeconomic status, access to healthy food, education, housing, and physical environment, are strong predictors of suicidal behaviors.
Several studies have identified a range of common risk factors for suicide using International Classification of Diseases (ICD) codes and other “structured” data from the EHR. However, the use of unstructured EHR data from clinician notes has received little attention in investigating potential associations between suicide and SDOH.
Using the large Veterans Health Administration EHR system, the researchers determined associations between veterans’ death by suicide and recent SDOH, identified using both structured data (ICD-10 codes and Veterans Health Administration stop codes) and unstructured data (NLP-processed clinical notes).
Participants included 8,821 veterans who died by suicide and 35,284 matched controls. The cohort was mostly male (96%) and White (79%). The mean age was 58 years.
The NLP-extracted SDOH were social isolation, job or financial insecurity, housing instability, legal problems, violence, barriers to care, transition of care, and food insecurity.
All of these unstructured clinical notes on SDOH were significantly associated with increased risk for death by suicide.
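The study's NLP system is built on deep learning; as a toy illustration of the underlying task only, a simple keyword matcher can flag the same SDOH categories in free text. The category names follow the article, but the patterns and example note are invented:

```python
import re

# Invented patterns for illustration; the study used deep-learning NLP,
# not keyword matching.
SDOH_PATTERNS = {
    "social isolation": r"\b(lives alone|no social support|isolated)\b",
    "housing instability": r"\b(homeless|eviction|unstable housing)\b",
    "legal problems": r"\b(arrest(ed)?|incarcerat\w+|legal issues?)\b",
    "food insecurity": r"\b(food insecur\w+|skips? meals)\b",
}

def extract_sdoh(note: str) -> set:
    """Return the SDOH categories mentioned in a free-text clinical note."""
    note = note.lower()
    return {cat for cat, pat in SDOH_PATTERNS.items() if re.search(pat, note)}

note = "Veteran lives alone, recently arrested, reports unstable housing."
print(extract_sdoh(note))
```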
Legal problems had the largest estimated effect size, more than twice the risk of those with no exposure (adjusted odds ratio 2.62; 95% confidence interval, 2.38-2.89), followed by violence (aOR, 2.34; 95% CI, 2.17-2.52) and social isolation (aOR, 1.94; 95% CI, 1.83-2.06).
Similarly, all of the structured SDOH – social or family problems, employment or financial problems, housing instability, legal problems, violence, and nonspecific psychosocial needs – also showed significant associations with increased risk for suicide death, once again, with legal problems linked to the highest risk (aOR, 2.63; 95% CI, 2.37-2.91).
When combining the structured and NLP-extracted unstructured data, the top three risk factors for death by suicide were legal problems (aOR, 2.66; 95% CI 2.46-2.89), violence (aOR, 2.12; 95% CI, 1.98-2.27), and nonspecific psychosocial needs (aOR, 2.07; 95% CI, 1.92-2.23).
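The reported aORs come from adjusted models, but an unadjusted odds ratio from a 2×2 table is easy to reproduce. A sketch with an invented exposure split (only the case and control totals match the study):

```python
import math

def odds_ratio(cases_exp, cases_unexp, controls_exp, controls_unexp, z=1.96):
    """Unadjusted odds ratio with a Wald 95% CI on the log scale."""
    or_ = (cases_exp * controls_unexp) / (cases_unexp * controls_exp)
    se = math.sqrt(1 / cases_exp + 1 / cases_unexp
                   + 1 / controls_exp + 1 / controls_unexp)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Invented split of the 8,821 cases and 35,284 controls by a
# hypothetical exposure (documented legal problems):
or_, lo, hi = odds_ratio(400, 8421, 700, 34584)
print(f"OR {or_:.2f} (95% CI, {lo:.2f}-{hi:.2f})")
```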
“To our knowledge, this is the first large-scale study to implement and use an NLP system to extract SDOH information from unstructured EHR data,” the researchers write.
“We strongly believe that analyzing all available SDOH information, including those contained in clinical notes, can help develop a better system for risk assessment and suicide prevention. However, more studies are required to investigate ways of seamlessly incorporating SDOHs into existing health care systems,” they conclude.
Dr. Yu said it’s also important to note that their NLP system is built upon “the most advanced deep-learning technologies and therefore is more generalizable than most existing work that mainly used rule-based approaches or traditional machine learning for identifying social determinants of health.”
In an accompanying editorial, Ishanu Chattopadhyay, PhD, of the University of Chicago, said this suggests that unstructured clinical notes “may efficiently identify at-risk individuals even when structured data on the relevant variables are missing or incomplete.”
This work may provide “the foundation for addressing the key hurdles in enacting efficient universal assessment for suicide risk among the veterans and perhaps in the general population,” Dr. Chattopadhyay added.
This research was funded by a grant from the National Institute of Mental Health. The study authors and editorialist report no relevant financial relationships.
A version of this article originally appeared on Medscape.com.
Researchers have found that social determinants of health (SDOH) are risk factors for suicide among U.S. veterans and that natural language processing (NLP) can be leveraged to extract SDOH information from unstructured data in the electronic health record (EHR).
“Since SDOH is overwhelmingly described in EHR notes, the importance of NLP-extracted SDOH can be very significant, meaning that NLP can be used as an effective method for epidemiological and public health study,” senior investigator Hong Yu, PhD, from Miner School of Information and Computer Sciences, University of Massachusetts Lowell, told this news organization.
Although the study was conducted among U.S. veterans, the results likely hold for the general population as well.
“The NLP methods are generalizable. The SDOH categories are generalizable. There may be some variations in terms of the strength of associations in NLP-extracted SDOH and suicide death, but the overall findings are generalizable,” Dr. Yu said.
The study was published online in JAMA Network Open.
Improved risk assessment
SDOH, which include factors such as socioeconomic status, access to healthy food, education, housing, and physical environment, are strong predictors of suicidal behaviors.
Several studies have identified a range of common risk factors for suicide using International Classification of Diseases (ICD) codes and other “structured” data from the EHR. However, the use of unstructured EHR data from clinician notes has received little attention in investigating potential associations between suicide and SDOH.
Using the large Veterans Health Administration EHR system, the researchers determined associations between veterans’ death by suicide and recent SDOH, identified using both structured data (ICD-10 codes and Veterans Health Administration stop codes) and unstructured data (NLP-processed clinical notes).
Participants included 8,821 veterans who died by suicide and 35,284 matched controls. The cohort was mostly male (96%) and White (79%), with a mean age of 58 years.
The NLP-extracted SDOH were social isolation, job or financial insecurity, housing instability, legal problems, violence, barriers to care, transition of care, and food insecurity.
All of these SDOH extracted from unstructured clinical notes were significantly associated with increased risk for death by suicide.
Legal problems had the largest estimated effect size, more than twice the risk of those with no exposure (adjusted odds ratio [aOR], 2.62; 95% confidence interval [CI], 2.38-2.89), followed by violence (aOR, 2.34; 95% CI, 2.17-2.52) and social isolation (aOR, 1.94; 95% CI, 1.83-2.06).
Similarly, all of the structured SDOH – social or family problems, employment or financial problems, housing instability, legal problems, violence, and nonspecific psychosocial needs – also showed significant associations with increased risk for suicide death, once again, with legal problems linked to the highest risk (aOR, 2.63; 95% CI, 2.37-2.91).
When combining the structured and NLP-extracted unstructured data, the top three risk factors for death by suicide were legal problems (aOR, 2.66; 95% CI, 2.46-2.89), violence (aOR, 2.12; 95% CI, 1.98-2.27), and nonspecific psychosocial needs (aOR, 2.07; 95% CI, 1.92-2.23).
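For readers less familiar with the statistic being reported: the study presents adjusted odds ratios from a matched case-control design. As a purely illustrative sketch, and not the authors' analysis (their aORs come from a regression model that adjusts for covariates, which this does not reproduce), an unadjusted odds ratio with a Wald 95% confidence interval can be computed from a simple 2x2 exposure-by-outcome table:

```python
import math

def odds_ratio_ci(exposed_cases, exposed_controls,
                  unexposed_cases, unexposed_controls, z=1.96):
    """Unadjusted odds ratio and Wald 95% CI from a 2x2 table.

    Rows: exposed vs. unexposed; columns: cases vs. controls.
    """
    # Cross-product ratio: (a*d) / (b*c)
    or_ = (exposed_cases * unexposed_controls) / (exposed_controls * unexposed_cases)
    # Standard error of log(OR) for the Wald interval
    se = math.sqrt(1 / exposed_cases + 1 / exposed_controls
                   + 1 / unexposed_cases + 1 / unexposed_controls)
    lower = math.exp(math.log(or_) - z * se)
    upper = math.exp(math.log(or_) + z * se)
    return or_, lower, upper

# Hypothetical counts for illustration only (not study data):
or_, lo, hi = odds_ratio_ci(10, 20, 5, 40)
```

An interval that excludes 1.0, as in all of the associations reported above, indicates a statistically significant association at the 5% level.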
“To our knowledge, this is the first large-scale study to implement and use an NLP system to extract SDOH information from unstructured EHR data,” the researchers write.
“We strongly believe that analyzing all available SDOH information, including those contained in clinical notes, can help develop a better system for risk assessment and suicide prevention. However, more studies are required to investigate ways of seamlessly incorporating SDOHs into existing health care systems,” they conclude.
Dr. Yu said it’s also important to note that their NLP system is built upon “the most advanced deep-learning technologies and therefore is more generalizable than most existing work that mainly used rule-based approaches or traditional machine learning for identifying social determinants of health.”
In an accompanying editorial, Ishanu Chattopadhyay, PhD, of the University of Chicago, said this suggests that unstructured clinical notes “may efficiently identify at-risk individuals even when structured data on the relevant variables are missing or incomplete.”
This work may provide “the foundation for addressing the key hurdles in enacting efficient universal assessment for suicide risk among the veterans and perhaps in the general population,” Dr. Chattopadhyay added.
This research was funded by a grant from the National Institute of Mental Health. The study authors and editorialist report no relevant financial relationships.
A version of this article originally appeared on Medscape.com.
FROM JAMA NETWORK OPEN