Focus on Nutrient Density Instead of Limiting Certain Foods
The word “malnutrition” probably brings to mind images of very thin patients with catabolic illness. But it really just means “poor nutrition,” which can — and often does — apply to patients with overweight or obesity.
That’s because malnutrition doesn’t occur simply because of a lack of calories, but rather because there is a gap in the nutrition the body requires and the nutrition it receives.
Each day, clinicians see patients with chronic conditions related to malnutrition. That list includes diabetes and hypertension, which can be promoted by excess intake of certain nutrients (carbohydrates and sodium) or inadequate intake of others (fiber, protein, potassium, magnesium, and calcium).
Diet Education Is Vital in Chronic Disease Management
Diet education is without a doubt a core pillar of chronic disease management. Nutrition therapy is recommended in treatment guidelines for the management of some of the most commonly seen chronic conditions, such as hypertension, diabetes, and kidney disease. But in one study, only 58% of physicians, nurses, and other health professionals surveyed had received formal nutrition education, and only 40% were confident in their ability to provide nutrition education to patients.
As a registered dietitian, I welcome referrals for both prevention and management of chronic diseases with open arms. But medical nutrition therapy with a registered dietitian may not be realistic for all patients owing to financial, geographic, or other constraints. So, their best option may be the few minutes that a physician or physician extender has to spare at the end of their appointment.
But time constraints may result in clinicians turning to short, easy-to-remember messages such as “Don’t eat anything white” or “Only shop the edges of the grocery store.” Although catchy, this type of advice can inadvertently encourage patients to skip over foods that are actually very nutrient dense. For example, white foods such as onions, turnips, mushrooms, cauliflower, and even popcorn are low in calories and high in nutritional value. The center aisles of the grocery store may harbor high-carbohydrate breakfast cereals and potato chips, but they are also home to legumes, nuts, and canned and frozen fruits and vegetables.
What may be more effective is educating the patient on the importance of focusing on the nutrient density of foods, rather than simply limiting certain food groups or colors.
How to Work Nutrient Density into the Conversation
Nutrient density refers to the proportion of nutrients to calories in a food item: essentially, a food’s qualitative nutritional value. It provides more depth than simply referring to foods as being high or low in calories, healthy or unhealthy, or good or bad.
Educating patients about nutrient density and encouraging a focus on foods that are low in calories and high in vitamins and minerals can help address micronutrient deficiencies, which may be more common than previously thought and linked to the chronic diseases that we see daily. It is worth noting that some foods that are not low in calories are still nutrient dense. Avocados, liver, and nuts come to mind as foods that are high in calories, but they have additional nutrients such as fiber, potassium, antioxidants, vitamin A, iron, and selenium that can still make them an excellent choice if they are part of a well-balanced diet.
I fear that we often underestimate our patients. We worry that not providing them with a list of acceptable foods will set them up for failure. But, in my experience, that list of “good” and “bad” foods may be useful for a week or so but will eventually become lost on the fridge under children’s artwork and save-the-dates.
Patients know that potato chips offer little more than fat, carbs, and salt and that they’re a poor choice for long-term health. What they might not know is that cocktail peanuts can also satisfy the craving for a salty snack, with more than four times the protein, twice the fiber, and just over half of the sodium found in the same serving size of regular salted potato chips. Peanuts have the added bonus of being high in heart-healthy monounsaturated fatty acids.
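The chips-versus-peanuts comparison above can be made concrete with a short calculation. The per-serving figures below are approximate, illustrative values chosen to match the ratios cited in the text; actual numbers vary by brand and product, so treat this as a sketch rather than a nutrition reference.

```python
# Approximate per-28 g (1 oz) serving figures; illustrative only, not brand-specific.
snacks = {
    "salted potato chips": {"protein_g": 1.7, "fiber_g": 1.2, "sodium_mg": 170},
    "cocktail peanuts":    {"protein_g": 7.0, "fiber_g": 2.4, "sodium_mg": 90},
}

chips = snacks["salted potato chips"]
peanuts = snacks["cocktail peanuts"]

# Compare the two snacks nutrient by nutrient.
print(f"Protein ratio: {peanuts['protein_g'] / chips['protein_g']:.1f}x")   # → Protein ratio: 4.1x
print(f"Fiber ratio:   {peanuts['fiber_g'] / chips['fiber_g']:.1f}x")       # → Fiber ratio:   2.0x
print(f"Sodium ratio:  {peanuts['sodium_mg'] / chips['sodium_mg']:.2f}x")   # → Sodium ratio:  0.53x
```

With these figures, peanuts deliver more than four times the protein and twice the fiber at just over half the sodium, the same swap described above.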
The best thing that clinicians can do with just a few minutes of time for diet education is to talk to patients about the nutrient density of whole foods and caution patients against highly processed foods, because processing can decrease nutritional content. Our most effective option is to explain why a varied diet with focus on fruits, vegetables, lean protein, nuts, legumes, and healthy fats is beneficial for cardiovascular and metabolic health. After that, all that is left is to trust the patient to make the right choices for their health.
Brandy Winfree Root, a renal dietitian in private practice in Mary Esther, Florida, has disclosed no relevant financial relationships.
A version of this article appeared on Medscape.com.
New Approaches to Research Beyond Massive Clinical Trials
This transcript has been edited for clarity.
I want to briefly present a fascinating effort, one that needs to be applauded and applauded again, and then we need to scratch our collective heads and ask, why did we do it and what did we learn?
I’m referring to a report recently published in Annals of Internal Medicine, “Long-Term Effect of Randomization to Calcium and Vitamin D Supplementation on Health in Older Women: Postintervention Follow-up of a Randomized Clinical Trial.” The title of this report does not do it justice. This was a massive effort — one could, I believe, even use the term Herculean — to ask an important question that was asked more than 20 years ago.
This was the Women’s Health Initiative, a national effort designed to answer these questions. The study looked at 36,282 postmenopausal women who, at the time of agreeing to be randomized in this trial, had no history of breast or colorectal cancer. This was a 7-year randomized intervention effort, and 40 centers across the United States participated, obviously funded by the government. Randomization was one-to-one to placebo or 1000 mg calcium and 400 international units of vitamin D3 daily.
They looked at the incidence of colorectal cancer, breast cancer, and total cancer, and importantly as an endpoint, total cardiovascular disease and hip fractures. They didn’t comment on hip fractures in this particular analysis. Obviously, hip fractures relate to this question of osteoporosis in postmenopausal women.
Here’s the bottom line: With a median follow-up now of 22.3 years — that’s not 2 years, but 22.3 years — there was a 7% decrease in cancer mortality in the population that received the calcium and vitamin D3. This is nothing to snicker at, and nothing at which to say, “Wow. That’s not important.”
However, in this analysis involving several tens of thousands of women, there was a 6% increase in cardiovascular disease mortality noted and reported. Overall, there was no effect on all-cause mortality of this intervention, with a hazard ratio — you rarely see this — of 1.00.
There is much that can be said, but I will summarize my comments very briefly. Criticize this if you want. It’s not inappropriate to criticize, but what was the individual impact of the calcium vs vitamin D? If they had only used one vs the other, or used both but in separate arms of the trial, and you could have separated what might have caused the decrease in cancer mortality and not the increased cardiovascular disease… This was designed more than 20 years ago. That’s one point.
The second is, how many more tens of thousands of patients would they have had to add to do this, and at what cost? This was a massive study, a national study, and a simple study in terms of the intervention. It was low risk except if you look at the long-term outcome. You can only imagine how much it would cost to do that study today — not the cost of the calcium, the vitamin D3, but the cost of doing the trial that was concluded to have no impact.
From a societal perspective, this was an important question to answer, certainly then. What did we learn and at what cost? The bottom line is that we have to figure out a way of answering these kinds of questions.
Perhaps now they should be from real-world data, looking at electronic medical records or at a variety of other population-based data so that we can get the answer — not in 20 years but in perhaps 2 months, because we’ve looked at the data using artificial intelligence to help us to answer these questions; and maybe not 36,000 patients but 360,000 individuals looked at over this period of time.
Again, I’m proposing an alternative solution because the questions that were asked 20 years ago remain important today. This cannot be the way that we, in the future, try to answer them, certainly from the perspective of cost and also the perspective of time to get the answers.
Let me conclude by, again, applauding these researchers because of the quality of the work they started out doing and ended up doing and reporting. Also, I think we’ve learned that we have to come up with alternative ways to answer what were important questions then and are important questions today.
Dr. Markman, Professor of Medical Oncology and Therapeutics Research, City of Hope Comprehensive Cancer Center; President, Medicine & Science, City of Hope Atlanta, Chicago, Phoenix, disclosed ties with GlaxoSmithKline and AstraZeneca.
A version of this article first appeared on Medscape.com.
Exercise or Inactivity?
The answer one gets often depends on how one crafts the question. For example, Jeffrey D. Johnson, PhD, a professor of communications at Portland State University in Oregon, has found that if patients are asked, “Is there something else you would like to address today?” 80% had their unmet questions addressed. However, if the question was worded, “Is there anything else ...?” very few had their unmet concerns addressed.
I recently encountered two studies that provide another striking example of how differently structured questions aimed at the same topic can yield dramatically different results. In this case, both studies used one database, the UK Biobank cohort study, which contains “de-identified genetic, lifestyle, and health information” collected from a half million adults in the UK. A subgroup of nearly 90,000 who had undergone a weeklong activity measurement with a wrist accelerometer was the focus of both groups of investigators, who asked the same broad question: “What is the relationship between physical activity and disease?”
The first study I found has already received some publicity in the lay press and dealt with those individuals who, for a variety of reasons, pack all of their exercise into just a few days, usually the weekend, aka weekend warriors. The investigators found that when compared with generally inactive individuals those who were able to achieve activity volumes that met current guidelines were at lower risk for more than 200 diseases, particularly those that were cardiac based. I guess that shouldn’t surprise us. The finding that has received most of the publicity to date in the lay press was that “Associations were similar whether the activity followed a weekend warrior pattern or was spread out evenly through the week.”
The second study, using the same database, found that individuals who spent more than 10.6 hours per day sitting had a 60% increased risk of heart failure and cardiovascular-related death. And, here’s the real news: that risk remained even in people who were otherwise physically active.
I suspect these two groups of investigators, both associated with Harvard-related institutions, knew of each other’s work and would agree that their findings are not incompatible. However, it is interesting that, when presented with the same database, one group chose to focus its attention on the exercise end of the spectrum while the other looked at the effect of inactivity.
I have always tried to include a “healthy” amount of exercise in my day. However, more recently my professional interest has been drawn to the increasing number of studies I read that deal with the risks of inactivity and sedentarism. For example, just in the last 2 years I have written about a study in children that showed that sedentary time is responsible for 70% of the total increase in cholesterol as children advance into young adulthood. Another study in adults found that every 2-hour increase in sedentary behavior was associated with a 12% decrease in the patient’s likelihood of achieving healthy aging.
If I were asked to place relative values on these two studies, I would say that the study highlighting the risk of prolonged sitting is potentially far more relevant to the population at large, which is for the most part sedentary. While I have no data to support my contention, I see the weekend warrior population as a niche group.
So what are the take-home messages from these two studies? One is for the weekend warrior: “You can take some comfort in the results that support your exercise schedule, but don’t feel too comfortable about it if most of the week you are sitting at a desk.”
For the rest of us — It’s beginning to feel like we should be including accelerometers in our regular diagnostic and therapeutic weaponry. Sending home patients with a Holter cardiac monitor has become commonplace. We should be sending more folks home with accelerometers or asking the more affluent to share the data from their smart watches. “You’ve been bragging about your “steps. Show me your sitting time.”
Dr. Wilkoff practiced primary care pediatrics in Brunswick, Maine, for nearly 40 years. He has authored several books on behavioral pediatrics, including “How to Say No to Your Toddler.” Other than a Littman stethoscope he accepted as a first-year medical student in 1966, Dr. Wilkoff reports having nothing to disclose. Email him at [email protected].
The answer one gets often depends on how one crafts the question. For example, Jeffrey D. Johnson, PhD, a professor of communications at Portland State University in Oregon, has found that when patients are asked “Is there something else you would like to address today?” 80% had their unmet questions addressed. However, when the question was worded “Is there anything else ...?” very few had their unmet concerns addressed.
I recently encountered two studies that provide another striking example of how differently structured questions aimed at the same topic can yield dramatically different results. In this case, both studies used one database, the UK Biobank cohort study, which contains “de-identified genetic, lifestyle, and health information” collected from half a million adults in the UK. A subgroup of nearly 90,000 participants who had undergone a weeklong activity measurement using a wrist accelerometer was the focus of both groups of investigators, who asked the same broad question: “What is the relationship between physical activity and disease?”
The first study, which has already received some publicity in the lay press, dealt with those individuals who, for a variety of reasons, pack all of their exercise into just a few days, usually the weekend, aka weekend warriors. The investigators found that, when compared with generally inactive individuals, those who achieved activity volumes that met current guidelines were at lower risk for more than 200 diseases, particularly cardiac conditions. I guess that shouldn’t surprise us. The finding that has received the most publicity to date in the lay press was that “Associations were similar whether the activity followed a weekend warrior pattern or was spread out evenly through the week.”
The second study, using the same database, found that those individuals who spent more than 10.6 hours per day sitting had a 60% increased risk of heart failure and cardiovascular-related death. And, here’s the real news: that risk remained even in people who were otherwise physically active.
I suspect these two groups of investigators, both associated with Harvard-related institutions, knew of each other’s work and would agree that their findings are not incompatible. However, it is interesting that, when presented with the same database, one group chose to focus its attention on the exercise end of the spectrum while the other looked at the effect of inactivity.
I have always tried to include a “healthy” amount of exercise in my day. However, more recently my professional interest has been drawn to the increasing number of studies I read that deal with the risks of inactivity and sedentarism. For example, just in the last 2 years I have written about a study in children that showed that sedentary time is responsible for 70% of the total increase in cholesterol as children advance into young adulthood. Another study in adults found that every 2-hour increase in sedentary behavior was associated with a 12% decrease in the patient’s likelihood of achieving healthy aging.
If I were asked to place relative values on these two studies, I would say that the study highlighting the risk of prolonged sitting is potentially far more relevant to the population at large, which is for the most part sedentary. Of course, while I have no data to support my contention, I see the weekend warrior population as a niche group.
So what are the take-home messages from these two studies? One is for the weekend warrior: “You can take some comfort in the results that support your exercise schedule, but don’t feel too comfortable about it if most of the week you are sitting at a desk.”
For the rest of us, it’s beginning to feel like we should be including accelerometers in our regular diagnostic and therapeutic weaponry. Sending patients home with a Holter cardiac monitor has become commonplace. We should be sending more folks home with accelerometers or asking the more affluent to share the data from their smart watches. “You’ve been bragging about your steps. Show me your sitting time.”
Dr. Wilkoff practiced primary care pediatrics in Brunswick, Maine, for nearly 40 years. He has authored several books on behavioral pediatrics, including “How to Say No to Your Toddler.” Other than a Littman stethoscope he accepted as a first-year medical student in 1966, Dr. Wilkoff reports having nothing to disclose. Email him at [email protected].
Six Updates on Stroke Management
This video transcript has been edited for clarity.
Dear colleagues, I am Christoph Diener, from the Faculty of Medicine at the University Duisburg-Essen in Germany. In this video, I would like to cover six publications on stroke, which were published this fall.
The Best Thrombolytic?
Let me start with systemic thrombolysis. We now have two thrombolytic agents available. One is the well-known alteplase, and newly approved for the treatment of stroke is tenecteplase. The ATTEST-2 study in the United Kingdom, published in The Lancet Neurology, compared tenecteplase 0.25 mg/kg body weight as a bolus with alteplase 0.9 mg/kg body weight as an infusion over 60 minutes in the 4.5-hour time window in 1777 patients with ischemic stroke.
There was no significant difference between the two thrombolytics for the primary endpoint of modified Rankin Scale score after 90 days. There was also no difference with respect to mortality, intracranial bleeding, or extracranial bleeding.
We finally have 11 randomized controlled trials that compared tenecteplase and alteplase in acute ischemic stroke. A meta-analysis of these randomized trials was published in Neurology. The analysis included 3700 patients treated with tenecteplase and 3700 patients treated with alteplase. For the primary endpoint, excellent functional outcome defined as modified Rankin Scale score 0-1 after 90 days, there was a significant benefit for tenecteplase (relative risk, 1.05), but the absolute difference was very small, at 3%. There was no difference in mortality or bleeding complications.
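To make concrete how a relative risk of 1.05 can correspond to an absolute difference of only about 3 percentage points, here is a small illustrative calculation. The event rates below are hypothetical numbers chosen to be consistent with the reported summary figures, not data from the trials themselves:

```python
# Illustrative arithmetic only: hypothetical event rates consistent with
# the reported summary (RR ~1.05, absolute difference ~3 percentage points).
alteplase_rate = 0.60     # assumed share with mRS 0-1 at 90 days (hypothetical)
tenecteplase_rate = 0.63  # assumed share with mRS 0-1 at 90 days (hypothetical)

# Relative risk is the ratio of event rates; the absolute risk
# difference is simply their arithmetic difference.
relative_risk = tenecteplase_rate / alteplase_rate
absolute_difference = tenecteplase_rate - alteplase_rate

print(f"RR  = {relative_risk:.2f}")      # RR  = 1.05
print(f"ARD = {absolute_difference:.0%}")  # ARD = 3%
```

A modest relative benefit on a common outcome can thus be real yet small in absolute terms, which is why the author treats the two agents as clinically interchangeable here.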
In conclusion, I think both substances are great. They are effective. Tenecteplase is most probably the drug which should be used in people who have to transfer from a primary stroke center to a dedicated stroke center that provides thrombectomy. Otherwise, I think it’s a choice of the physician as to which thrombolytic agent to use.
Mobile Stroke Units
A highly debated topic is mobile stroke units. These stroke units have a CT scanner and laboratory on board, and this makes it possible to perform thrombolysis on the way to the hospital. A retrospective, observational study collected data between 2018 and 2023, and included 19,400 patients with acute stroke, of whom 1237, or 6.4%, were treated in a mobile stroke unit. This study was published in JAMA Neurology.
The modified Rankin Scale score at the time of discharge was better in patients treated with a mobile stroke unit, but the absolute benefit was only 0.03 points on the modified Rankin Scale. The question is whether this is cost-effective, and whether we can really do it at a time when there is a dramatic shortage of physicians and nursing staff in the hospital.
DOAC Reversal Agents
Oral anticoagulation, as you know, is usually considered a contraindication for systemic thrombolysis. Idarucizumab, a monoclonal antibody, was developed to reverse the biological activity of dabigatran and then allow systemic thrombolysis.
A recent publication in Neurology analyzed 13 cohort studies with 553 stroke patients on dabigatran who received idarucizumab prior to systemic thrombolysis, and the rate of intracranial hemorrhage was 4%. This means it’s obviously possible to perform thrombolysis when the activity of dabigatran is neutralized by idarucizumab.
Unfortunately, to date, we have no data on whether this can also be done with andexanet alfa in people who are treated with a factor Xa inhibitor such as apixaban, rivaroxaban, or edoxaban.
Anticoagulation in ESUS
My next topic is ESUS, or embolic stroke of undetermined source. We have four large randomized trials and three smaller trials comparing antiplatelet therapy with DOACs in patients with ESUS. A meta-analysis of these seven randomized controlled studies, with altogether 14,800 patients with ESUS, was published in Neurology.
The comparison between antiplatelet therapy and anticoagulants showed no difference in recurrent ischemic stroke, nor in major subgroups. This means that people with ESUS should receive antiplatelet therapy, most probably aspirin.
Anticoagulation Post–Ischemic Stroke With AF
My final topic is the optimal time to start anticoagulation in people with atrial fibrillation who suffer an ischemic stroke. The OPTIMAS study, published in The Lancet, randomized 3650 patients who were anticoagulated with DOACs early (which means less than 4 days) or delayed (between 7 and 14 days). There was no difference in the primary endpoint, which was recurrent ischemic stroke, intracranial hemorrhage, or systemic embolism at 90 days.
The conclusion is that, in most cases, we can probably initiate anticoagulation in people with ischemic stroke and atrial fibrillation within the first 4 days.
Dear colleagues, this is an exciting time for the stroke field. I presented six new studies that have impact, I think, on the management of patients with ischemic stroke.
Dr. Diener is a professor in the Department of Neurology, Stroke Center-Headache Center, University Duisburg-Essen in Germany. He reported conflicts of interest with Abbott, AbbVie, Boehringer Ingelheim, Lundbeck, Novartis, Orion Pharma, Teva, WebMD, and The German Research Council. He also serves on the editorial boards of Cephalalgia, Lancet Neurology, and Drugs.
A version of this article first appeared on Medscape.com.
New ‘Touchless’ Blood Pressure Screening Tech: How It Works
When a patient signs on to a telehealth portal, there’s little more a provider can do than ask questions. But a new artificial intelligence (AI) technology could allow providers to get feedback about the patient’s blood pressure and diabetes risk just from a video call or a smartphone app.
Researchers at the University of Tokyo in Japan are using AI to determine whether people might have high blood pressure or diabetes based on video data collected with a special sensor.
The technology relies on photoplethysmography (PPG), which measures changes in blood volume by detecting the amount of light absorbed by blood just below the skin.
Wearable devices like Apple Watches and Fitbits also use PPG technologies to detect heart rate and atrial fibrillation.
“If we could detect and accurately measure your blood pressure, heart rate, and oxygen saturation non-invasively, that would be fantastic,” said Eugene Yang, MD, professor of medicine in the division of cardiology at the University of Washington School of Medicine in Seattle, who was not involved in the study.
How Does PPG Work — and Is This New Tech Accurate?
Using PPG, “you’re detecting these small, little blood vessels that sit underneath the surface of your skin,” explained Yang.
“Since both hypertension and diabetes are diseases that damage blood vessels, we thought these diseases might affect blood flow and pulse wave transit times,” said Ryoko Uchida, a project researcher in the cardiology department at the University of Tokyo and one of the leaders of the study.
PPG devices primarily use green light to detect blood flow, as hemoglobin, the oxygen-carrying molecule in blood, absorbs green light most effectively, Yang said. “So, if you extract and remove all the other channels of light and only focus on the green channel, then that’s when you’ll be able to potentially see blood flow and pulsatile blood flow activity,” he noted.
The University of Tokyo researchers used remote or contactless PPG, which requires a short video recording of someone’s face and palms, as the person holds as still as possible. A special sensor collects the video and detects only certain wavelengths of light. Then the researchers developed an AI algorithm to extract data from participants’ skin, such as changes in pulse transit time — the time it takes for the pulse to travel from the palm to the face.
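The signal-processing idea described here (recovering a pulse waveform from the green channel of video and estimating pulse transit time between two skin regions) can be sketched as follows. This is a minimal illustration under simplifying assumptions, not the authors' actual algorithm: plain RGB video, hand-picked skin regions, and a cross-correlation lag estimator are all assumptions for the sake of the example.

```python
import numpy as np

def ppg_signal(frames, region):
    """Mean green-channel intensity per frame over a skin region.

    frames: array of shape (n_frames, height, width, 3), RGB video.
    region: (row_slice, col_slice) selecting a patch of skin.
    """
    rows, cols = region
    # Channel index 1 is green, which carries the strongest pulsatile
    # component because hemoglobin absorbs green light most effectively.
    return frames[:, rows, cols, 1].mean(axis=(1, 2))

def pulse_transit_time(face_sig, palm_sig, fps):
    """Estimate how long (in seconds) the face pulse lags the palm pulse,
    using the peak of the cross-correlation of the mean-removed signals."""
    f = face_sig - face_sig.mean()
    p = palm_sig - palm_sig.mean()
    corr = np.correlate(f, p, mode="full")
    # For 'full' mode, index (len(p) - 1) corresponds to zero lag;
    # a positive lag means the face signal is delayed relative to the palm.
    lag = np.argmax(corr) - (len(p) - 1)
    return lag / fps
```

A real system would additionally need face/palm tracking, band-pass filtering around plausible heart rates, and motion-artifact rejection; the point here is only how a pulse waveform and a transit-time estimate can fall out of ordinary video.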
To correlate the video algorithm to blood pressure and diabetes risk, the researchers measured participants’ blood pressure with a continuous sphygmomanometer (an automatic blood pressure cuff) at the same time as they collected the video. They also did a blood A1c test to detect diabetes.
So far, they’ve tested their video algorithm on 215 people. The algorithm applied to a 30-second video was 86% accurate in detecting if blood pressure was above normal, and a 5-second video was 81% accurate in detecting higher blood pressure.
Compared with using hemoglobin A1c blood test results to screen for diabetes, the video algorithm was 75% accurate in identifying people who had subtle blood changes that correlated to diabetes.
“Most of this focus has been on wearable devices, patches, rings, wrist devices,” Yang said. “The facial video stuff is great because you can imagine that there are other ways of applying it.”
Yang, who is also doing research on facial video processing, pointed out it could be helpful not only in telehealth visits, but also for patients in the hospital with highly contagious diseases who need to be in isolation, or just for people using their smartphones.
“People are tied to their smartphones, so you could imagine that that would be great as a way for people to have awareness about their blood pressure or their diabetes status,” Yang noted.
More Work to Do
The study has a few caveats. The special sensor used in this study isn’t yet integrated into smartphone cameras or other common video recording devices, but Uchida is hopeful that it could someday be mass-produced and inexpensively added to them.
Also, the study was done in a Japanese population, and it may be easier to capture changes in blood flow in lighter skin, Uchida noted. Pulse oximeters, which use the same technology, tend to overestimate blood oxygen in people with darker skin tones.
“It is necessary to test whether the same results are obtained in a variety of subjects other than Japanese and Asians,” Uchida said, in addition to validating the tool with more participants.
The study has also not yet undergone peer review.
And Yang pointed out that this new AI technology provides more of a screening tool to predict who is at high risk for high blood pressure or diabetes, rather than precise measurements for either disease.
There are already some devices that claim to measure blood pressure using PPG technology, like blood pressure monitoring watches. But Yang warns that these kinds of devices aren’t validated, meaning we don’t really know how well they work.
One difficulty in getting any kind of PPG blood pressure monitoring device to market is that the organizations involved in setting medical device standards (like the International Organization for Standardization) don’t yet have a validation standard for this technology, Yang said, so there’s really no way to consistently verify the technology’s accuracy.
“I am optimistic that we are capable of figuring out how to validate these things. I just think we have so many things we have to iron out before that happens,” Yang explained, noting that it will be at least 3 years before a remote blood pressure monitoring system is widely available.
A version of this article first appeared on Medscape.com.
When a patient signs on to a telehealth portal, there’s little more a provider can do than ask questions. But a new artificial intelligence (AI) technology could allow providers to get feedback about the patient’s blood pressure and diabetes risk just from a video call or a smartphone app.
Researchers at the University of Tokyo in Japan are using AI to determine whether people might have high blood pressure or diabetes based on video data collected with a special sensor.
The technology relies on photoplethysmography (PPG), which measures changes in blood volume by detecting the amount of light absorbed by blood just below the skin.
Wearable devices like Apple Watches and Fitbits also use PPG technologies to detect heart rate and atrial fibrillation.
“If we could detect and accurately measure your blood pressure, heart rate, and oxygen saturation non-invasively, that would be fantastic,” said Eugene Yang, MD, professor of medicine in the division of cardiology at the University of Washington School of Medicine in Seattle, who was not involved in the study.
How Does PPG Work — and Is This New Tech Accurate?
Using PPG, “you’re detecting these small, little blood vessels that sit underneath the surface of your skin,” explained Yang.
“Since both hypertension and diabetes are diseases that damage blood vessels, we thought these diseases might affect blood flow and pulse wave transit times,” said Ryoko Uchida, a project researcher in the cardiology department at the University of Tokyo and one of the leaders of the study.
PPG devices primarily use green light to detect blood flow, as hemoglobin, the oxygen-carrying molecule in blood, absorbs green light most effectively, Yang said. “So, if you extract and remove all the other channels of light and only focus on the green channel, then that’s when you’ll be able to potentially see blood flow and pulsatile blood flow activity,” he noted.
The University of Tokyo researchers used remote or contactless PPG, which requires a short video recording of someone’s face and palms, as the person holds as still as possible. A special sensor collects the video and detects only certain wavelengths of light. Then the researchers developed an AI algorithm to extract data from participants’ skin, such as changes in pulse transit time — the time it takes for the pulse to travel from the palm to the face.
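The processing the researchers describe, isolating one light channel over patches of skin and then comparing the pulse waveforms from two sites, can be sketched in a few lines. This is an illustrative reconstruction, not the study's code: the helper names, the fixed bounding box, and the cross-correlation lag estimate are all assumptions, and a real remote-PPG system would add face tracking, band-pass filtering, and motion rejection.

```python
import numpy as np

def green_channel_signal(frames, region):
    """Mean green-channel intensity per frame over a skin patch.

    frames: array of shape (n_frames, height, width, 3), RGB order.
    region: (y0, y1, x0, x1) bounding box of the skin patch.
    Hypothetical helper; real pipelines track the face and palm
    and normalize for illumination changes.
    """
    y0, y1, x0, x1 = region
    return frames[:, y0:y1, x0:x1, 1].mean(axis=(1, 2))

def pulse_transit_time(palm_sig, face_sig, fps):
    """Estimate pulse transit time (seconds) between two PPG signals
    as the lag that maximizes their cross-correlation."""
    palm = palm_sig - palm_sig.mean()
    face = face_sig - face_sig.mean()
    corr = np.correlate(face, palm, mode="full")
    lag = corr.argmax() - (len(palm) - 1)  # frames by which face lags palm
    return lag / fps
```

With synthetic signals (a 1.2 Hz pulse at 30 frames per second, the facial waveform delayed by 0.1 s), the cross-correlation recovers the 3-frame lag and returns 0.1 s.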
To correlate the video algorithm to blood pressure and diabetes risk, the researchers measured participants’ blood pressure with a continuous sphygmomanometer (an automatic blood pressure cuff) at the same time as they collected the video. They also ran a blood A1c test to detect diabetes.
So far, they’ve tested their video algorithm on 215 people. Applied to a 30-second video, the algorithm was 86% accurate in detecting whether blood pressure was above normal; applied to a 5-second video, it was 81% accurate.
Compared with hemoglobin A1c blood test results as the screening reference, the video algorithm was 75% accurate in identifying people with subtle blood changes that correlated with diabetes.
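Accuracy alone can be misleading for a screening tool, because it depends on how common the condition is in the test group. A quick way to keep such numbers honest is to report sensitivity and specificity alongside accuracy. A minimal sketch, using made-up labels rather than the study's data:

```python
import numpy as np

def screening_metrics(pred, truth):
    """Accuracy, sensitivity, and specificity for a binary screen.

    pred, truth: boolean arrays (True = flagged / condition present).
    Illustrative only; in the study, ground truth came from cuff
    blood pressure readings and hemoglobin A1c tests.
    """
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    tp = np.sum(pred & truth)    # correctly flagged
    tn = np.sum(~pred & ~truth)  # correctly cleared
    fp = np.sum(pred & ~truth)   # false alarms
    fn = np.sum(~pred & truth)   # missed cases
    return {
        "accuracy": (tp + tn) / len(truth),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }
```

A tool can post a high accuracy while missing most true cases if the condition is rare, which is why validation standards for devices like these matter.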
“Most of this focus has been on wearable devices, patches, rings, wrist devices,” Yang said. “The facial video stuff is great because you can imagine that there are other ways of applying it.”
Yang, who is also doing research on facial video processing, pointed out that it could be helpful not only in telehealth visits but also for patients in the hospital with highly contagious diseases who need to be in isolation, or simply for people using their smartphones.
“People are tied to their smartphones, so you could imagine that that would be great as a way for people to have awareness about their blood pressure or their diabetes status,” Yang noted.
More Work to Do
The study has a few caveats. The special sensor used in this study isn’t yet integrated into smartphone cameras or other common video recording devices. But Uchida is hopeful that it could someday be mass-produced and added inexpensively.
Also, the study was done in a Japanese population, and changes in blood flow may be easier to capture in lighter skin, Uchida noted. Pulse oximeters, which use the same technology, tend to overestimate blood oxygen in people with darker skin tones.
“It is necessary to test whether the same results are obtained in a variety of subjects other than Japanese and Asians,” Uchida said, in addition to validating the tool with more participants.
The study has also not yet undergone peer review.
And Yang pointed out that this new AI technology provides more of a screening tool to predict who is at high risk for high blood pressure or diabetes, rather than precise measurements for either disease.
There are already some devices that claim to measure blood pressure using PPG technology, like blood pressure monitoring watches. But Yang warns that these kinds of devices aren’t validated, meaning we don’t really know how well they work.
One difficulty in getting any kind of PPG blood pressure monitoring device to market is that the organizations that set medical device standards (like the International Organization for Standardization) don’t yet have a validation standard for this technology, Yang said, so there’s really no way to consistently verify the technology’s accuracy.
“I am optimistic that we are capable of figuring out how to validate these things. I just think we have so many things we have to iron out before that happens,” Yang explained, noting that it will be at least 3 years before a remote blood pressure monitoring system is widely available.
A version of this article first appeared on Medscape.com.
What To Do With Lipoprotein(a)?
Case: A 45-year-old woman comes to clinic and requests lipoprotein(a) [Lp(a)] testing. She has a family history of early coronary disease (mother at age 50, sister at age 48) and has hypertension with home blood pressure readings of 130-140/70-75. She had a lipid panel checked last year, which showed a total cholesterol of 210 mg/dL, LDL of 145 mg/dL, HDL of 45 mg/dL, and triglycerides of 100 mg/dL. She does not smoke and is currently taking irbesartan, chlorthalidone, sertraline, a multivitamin, and vitamin D.
What do you recommend?
There has been a great deal of media attention on testing for Lp(a). Many of my patients are requesting testing, although many of them do not need it. This patient is an exception; I think Lp(a) testing would help inform her medical care. She has a family history of early coronary disease in her mother and sister, but her own lipid profile is not worrisome.
Her 10-year cardiovascular disease risk is 2%. The cardiac risk calculator does not incorporate family history, so I think this is a situation where testing for Lp(a) (as well as apolipoprotein B) can be helpful. If her Lp(a) is elevated, that helps reassess her risk, and the information would be useful in targeting aggressive interventions for other cardiovascular risk factors, including optimal blood pressure control. In her case, that means pushing for a goal systolic blood pressure below 120 mm Hg and making sure she is doing regular exercise and eating a heart-healthy diet. The current consensus statement on Lp(a) recommends that patients with elevated levels receive aggressive lifestyle and cardiovascular risk management.1
Currently, there are no medical treatments available for high Lp(a) for primary prevention. Apheresis has been approved by the US Food and Drug Administration (FDA) for patients with familial hyperlipidemia who have LDL ≥ 100 mg/dL, Lp(a) ≥ 60 mg/dL, and coronary or other artery disease.
PCSK9 inhibitors have shown a reduction in major cardiovascular events in patients who have established coronary artery disease and high Lp(a) levels, albeit with limited data. Unlike statins, which increase Lp(a) levels, PCSK9 inhibitors reduce Lp(a) levels.2 There are promising early results in a phase 2 trial of the oral drug muvalaplin lowering Lp(a) levels by up to 85% for the highest dose, but there are no peer-reviewed articles confirming these results and no outcome trials at this time.
In patients who are already recognized as high risk, especially those with established coronary artery disease, measuring Lp(a) levels offers little benefit. These patients should already be receiving aggressive medical therapy to reach blood pressure targets if hypertensive, maximal lifestyle modifications, and statin therapy.
If these patients need more therapy because of continued coronary events, despite maximal conventional medical therapy, then adding a PCSK9 inhibitor would be appropriate whether or not a patient has a high Lp(a) level. Once Lp(a) targeted therapies are available and show clinical benefit, then the role of Lp(a) measurement and treatment in this population will be clearer.
Pearl: Most patients do not need Lp(a) testing. There are no FDA-approved treatments for high Lp(a) levels.
Dr. Paauw is professor of medicine in the division of general internal medicine at the University of Washington, Seattle, and he serves as third-year medical student clerkship director at the University of Washington. Contact Dr. Paauw at [email protected].
References
1. Kronenberg F et al. Lipoprotein(a) in atherosclerotic cardiovascular disease and aortic stenosis: A European Atherosclerosis Society consensus statement. Eur Heart J. 2022;43:3925-46.
2. Ruscica M et al. Lipoprotein(a) and PCSK9 inhibition: Clinical evidence. Eur Heart J Suppl. 2020;22(Suppl L):L53-L56.
Pharmacist Advocates for Early Adoption of Quadruple Therapy in HFrEF Treatment
SAN DIEGO — An Air Force pharmacist urged colleagues in the military to advocate for the gold standard of quadruple therapy in patients with heart failure with reduced ejection fraction (HFrEF). “When possible, initiate and optimize quadruple therapy before discharge; don’t leave it for a primary care manager (PCM) to handle,” said Maj. Elizabeth Tesch, PharmD, of Maxwell Air Force Base, Montgomery, Ala., in a presentation here at the Joint Federal Pharmacy Seminar. Tesch also cautioned colleagues about the proper use of IV inotropes and vasodilators in congestive heart failure and warned of the dangers of polypharmacy.
“It’s just as important to use medications that provide a mortality benefit in these patients as it is to remove things that are either harmful or lack trial benefit data,” Tesch said.
In patients with acute heart failure and systolic blood pressure < 90 mmHg, guidelines recommend using both an inotrope and a vasopressor. “There tends to be better data about 2 of them together vs just cranking up a vasoconstrictor, which we sometimes tend to do when a patient’s blood pressure is bottoming out,” Tesch explained. “But in these patients specifically, that tends to lead to increased afterload, difficulty with cardiac output, and then increased risk of ischemia. So it tends to be better to use both.”
Ideally, Tesch said, patients stabilize within a couple of days. In cases of HFrEF, this is when quadruple therapy can enter the picture.
Quadruple therapy consists of the “4 pillars”: a sodium-glucose co-transporter 2 inhibitor (SGLT2i), a β blocker, a mineralocorticoid receptor antagonist (MRA), and either an angiotensin receptor neprilysin inhibitor (ARNI), an angiotensin‐converting enzyme inhibitor (ACEi) or an angiotensin receptor blocker (ARB).
Tesch noted that the need for titration varies by drug. β blockers typically need the most up-titration, often in several steps, followed by ARNIs. MRAs may require only one titration step or none at all, and SGLT2 inhibitors do not require titration.
“[Clinicians] are most comfortable giving ACE inhibitors, ARBs, and β blockers to patients,” she said. But new research suggests that ACEi/β blocker/ARB therapy carries a 10.3% higher mortality risk (absolute risk difference) than quadruple therapy. Additionally, a 2022 systematic review linked quadruple therapy to a gain of 5 years of life (ranging from 2.5 to 7.5 years) for 70-year-old patients compared with no therapy.
“I don't know how many times I've had a conversation along the lines of, ‘Hey, can we go ahead and start an SGLT2 on this patient?’ only to hear, ‘We'll give that to the PCM [primary care manager]. That sounds like a PCM thing. You just want to get them out of here, it’s a PCM problem.’”
But quick initiation of treatment is crucial. “We're seeing very real mortality benefit data very quickly in these patients,” Tesch said.
As for polypharmacy, Tesch highlighted the importance of reducing medication load when possible. “If they have nothing else wrong, these patients will walk out the door on quadruple therapy and perhaps a diuretic, but they probably have a lot more going on,” she said. “All of us in this room are fully aware of what polypharmacy can do to these patients: increased drug interactions, side effects, higher cost, and decreased patient compliance. This is a problem for the heart failure population that really translates into readmissions and increased mortality. We’ve got to be able to peel off things that are either harmful or not helping.”
Statins, for example, have questionable benefit in HFrEF without coronary artery disease or hyperlipidemia, she said. Oral iron and vitamin D supplementation also have uncertain benefits in the HFrEF population.
Tesch highlighted a pair of reports – one from 2024 and the other from 2022 – that flagged certain therapies as potentially harmful or of questionable benefit in heart failure, including the antidepressant citalopram (Celexa), the hypertension/urinary retention drug doxazosin (Cardura), and DPP-4 inhibitors (a class of diabetes drugs such as saxagliptin [Onglyza]).
Tesch has no disclosures.
SAN DIEGO — An Air Force pharmacist urged colleagues in the military to advocate for the gold standard of quadruple therapy in patients with heart failure with reduced ejection fraction (HFrEF). “When possible, initiate and optimize quadruple therapy before discharge; don’t leave it for a primary care manager (PCM) to handle,” said Maj. Elizabeth Tesch, PharmD, of Maxwell Air Force Base, Montgomery, Ala., in a presentation here at the Joint Federal Pharmacy Seminar. Tesch also cautioned colleagues about the proper use of IV inotropes and vasodilators in congestive heart failure and warned of the dangers of polypharmacy.
“It’s just as important to use medications that provide a mortality benefit in these patients as it is to remove things that are either harmful or lack trial benefit data,” Tesch said.
In patients with acute heart failure and systolic blood pressure < 90 mmHg, guidelines recommend using both an inotrope and a vasopressor. “There tends to be better data about 2 of them together vs just cranking up a vasoconstrictor, which we tend to sometimes to do when a patient’s blood pressure is bottoming out,” Tesch explained. “But in these patients specifically, that tends to lead to increased afterload, difficulty with cardiac output, and then increased risk of ischemia. So it tends to be better to use both.”
Ideally, Tesch said, patients stabilize within a couple days. In cases of HFrEF, this is when quadruple therapy can enter the picture.
Quadruple therapy consists of the “4 pillars”: a sodium-glucose co-transporter 2 inhibitor (SGLT2i), a β blocker, a mineralocorticoid receptor antagonist (MRA), and either an angiotensin receptor neprilysin inhibitor (ARNI), an angiotensin‐converting enzyme inhibitor (ACEi) or an angiotensin receptor blocker (ARB).
Tesch noted that the need for titration varies by drug. β blockers typically will need the most up-titration, often in several steps, followed by ARNIs. MRAs may require only one titration or even not at all, and SGLT2 inhibitors do not require titration.
“[Clinicians] are most comfortable giving ACE inhibitors, ARBs, and β blockers to patients, she said. But new research suggests there is a 10.3% jump in mortality risk (absolute risk difference) compared to ACEi/ β blocker/ARB therapy. Additionally, a 2022 systematic review linked quadruple therapy to a gain of 5 years of life (ranging from 2.5 to7.5 years) for 70-year-old patients compared to no therapy.
“I don't know how many times I've had a conversation along the lines of, ‘Hey, can we go ahead and start an SGLT2 on this patient?’ only to hear, ‘We'll give that to the PCM [primary care manager]. That sounds like a PCM thing. You just want to get them out of here, it’s a PCM problem.’”
But quick initiation of treatment is crucial. “We're seeing very real mortality benefit data very quickly in these patients,” Tesch said.
As for polypharmacy, Tesch highlighted the importance of reducing mediation load when possible. “If they have nothing else wrong, these patients will walk out the door on quadruple therapy and perhaps a diuretic, but they probably have a lot more going on,” she said. “All of us in this room are fully aware of what polypharmacy can do to these patients: increased drug interactions, side effects, higher cost, and decreased patient compliance. This is a problem for the heart failure population that really translates into readmissions and increased mortality. We've got to be able to peel off things that are either harmful or not helping.”
Statins, for example, have questionable benefit in HFrEF without coronary artery disease or hyperlipidemia, she said. Oral iron and vitamin D supplementation also have uncertain benefits in the HFrEF population.
Tesch highlighted a pair of reports – one from 2024 and the other from 2022 – that recommended certain therapies in heart failure, including the antidepressant citalopram (Celexa), the hypertension/urinary retention drug doxazosin (Cardura), and DPP-4 inhibitors (eg, diabetes/weight-loss drugs such as liraglutide [Saxenda]).
Tesch has no disclosures.
SAN DIEGO — An Air Force pharmacist urged colleagues in the military to advocate for the gold standard of quadruple therapy in patients with heart failure with reduced ejection fraction (HFrEF). “When possible, initiate and optimize quadruple therapy before discharge; don’t leave it for a primary care manager (PCM) to handle,” said Maj. Elizabeth Tesch, PharmD, of Maxwell Air Force Base, Montgomery, Ala., in a presentation here at the Joint Federal Pharmacy Seminar. Tesch also cautioned colleagues about the proper use of IV inotropes and vasodilators in congestive heart failure and warned of the dangers of polypharmacy.
“It’s just as important to use medications that provide a mortality benefit in these patients as it is to remove things that are either harmful or lack trial benefit data,” Tesch said.
In patients with acute heart failure and systolic blood pressure < 90 mmHg, guidelines recommend using both an inotrope and a vasopressor. “There tends to be better data about 2 of them together vs just cranking up a vasoconstrictor, which we tend to sometimes to do when a patient’s blood pressure is bottoming out,” Tesch explained. “But in these patients specifically, that tends to lead to increased afterload, difficulty with cardiac output, and then increased risk of ischemia. So it tends to be better to use both.”
Ideally, Tesch said, patients stabilize within a couple days. In cases of HFrEF, this is when quadruple therapy can enter the picture.
Quadruple therapy consists of the “4 pillars”: a sodium-glucose co-transporter 2 inhibitor (SGLT2i), a β blocker, a mineralocorticoid receptor antagonist (MRA), and either an angiotensin receptor neprilysin inhibitor (ARNI), an angiotensin‐converting enzyme inhibitor (ACEi) or an angiotensin receptor blocker (ARB).
Tesch noted that the need for titration varies by drug. β blockers typically need the most up-titration, often in several steps, followed by ARNIs. MRAs may require only a single titration step or none at all, and SGLT2 inhibitors do not require titration.
“[Clinicians] are most comfortable giving ACE inhibitors, ARBs, and β blockers to patients,” she said. But new research suggests a 10.3% difference in mortality risk (absolute risk difference) between quadruple therapy and ACEi/β-blocker/ARB therapy. Additionally, a 2022 systematic review linked quadruple therapy to a gain of 5 years of life (ranging from 2.5 to 7.5 years) for 70-year-old patients compared with no therapy.
“I don't know how many times I've had a conversation along the lines of, ‘Hey, can we go ahead and start an SGLT2 on this patient?’ only to hear, ‘We'll give that to the PCM [primary care manager]. That sounds like a PCM thing. You just want to get them out of here, it’s a PCM problem.’”
But quick initiation of treatment is crucial. “We're seeing very real mortality benefit data very quickly in these patients,” Tesch said.
As for polypharmacy, Tesch highlighted the importance of reducing medication load when possible. “If they have nothing else wrong, these patients will walk out the door on quadruple therapy and perhaps a diuretic, but they probably have a lot more going on,” she said. “All of us in this room are fully aware of what polypharmacy can do to these patients: increased drug interactions, side effects, higher cost, and decreased patient compliance. This is a problem for the heart failure population that really translates into readmissions and increased mortality. We've got to be able to peel off things that are either harmful or not helping.”
Statins, for example, have questionable benefit in HFrEF without coronary artery disease or hyperlipidemia, she said. Oral iron and vitamin D supplementation also have uncertain benefits in the HFrEF population.
Tesch highlighted a pair of reports – one from 2024 and the other from 2022 – that recommended certain therapies in heart failure, including the antidepressant citalopram (Celexa), the hypertension/urinary retention drug doxazosin (Cardura), and DPP-4 inhibitors (eg, diabetes/weight-loss drugs such as liraglutide [Saxenda]).
Tesch has no disclosures.
US Alcohol-Related Deaths Double Over 2 Decades, With Notable Age and Gender Disparities
TOPLINE:
US alcohol-related mortality rates increased from 10.7 to 21.6 per 100,000 between 1999 and 2020, with the largest rise of 3.8-fold observed in adults aged 25-34 years. Women experienced a 2.5-fold increase, while the Midwest region showed a similar rise in mortality rates.
METHODOLOGY:
- Analysis utilized the US Centers for Disease Control and Prevention Wide-Ranging Online Data for Epidemiologic Research to examine alcohol-related mortality trends from 1999 to 2020.
- Researchers analyzed data from a total US population of 180,408,769 people aged 25 to 85+ years in 1999 and 226,635,013 people in 2020.
- International Classification of Diseases, Tenth Revision, codes were used to identify deaths with alcohol attribution, including mental and behavioral disorders, alcoholic organ damage, and alcohol-related poisoning.
TAKEAWAY:
- Overall mortality rates increased from 10.7 (95% CI, 10.6-10.8) per 100,000 in 1999 to 21.6 (95% CI, 21.4-21.8) per 100,000 in 2020, representing a significant twofold increase.
- Adults aged 55-64 years demonstrated both the steepest increase and highest absolute rates in both 1999 and 2020.
- American Indian and Alaska Native individuals experienced the steepest increase and highest absolute rates among all racial groups.
- The West region maintained the highest absolute rates in both 1999 and 2020, despite the Midwest showing the largest increase.
IN PRACTICE:
“Individuals who consume large amounts of alcohol tend to have the highest risks of total mortality as well as deaths from cardiovascular disease. Cardiovascular disease deaths are predominantly due to myocardial infarction and stroke. To mitigate these risks, health providers may wish to implement screening for alcohol use in primary care and other healthcare settings. By providing brief interventions and referrals to treatment, healthcare providers would be able to achieve the early identification of individuals at risk of alcohol-related harm and offer them the support and resources they need to reduce their alcohol consumption,” wrote the authors of the study.
SOURCE:
The study was led by Alexandra Matarazzo, BS, Charles E. Schmidt College of Medicine, Florida Atlantic University, Boca Raton. It was published online in The American Journal of Medicine.
LIMITATIONS:
According to the authors, the cross-sectional nature of the data limits the study to descriptive analysis only, making it suitable for hypothesis generation but not hypothesis testing. While the validity and generalizability within the United States are secure because of the use of complete population data, potential bias and uncontrolled confounding may exist because of different population mixes between the two time points.
DISCLOSURES:
The authors reported no relevant conflicts of interest. One coauthor disclosed serving as an independent scientist in an advisory role to investigators and sponsors as Chair of Data Monitoring Committees for Amgen and UBC, to the Food and Drug Administration, and to Up to Date. Additional disclosures are noted in the original article.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
Deprescribe Low-Value Meds to Reduce Polypharmacy Harms
VANCOUVER, BRITISH COLUMBIA — While polypharmacy is inevitable for patients with multiple chronic diseases, not all medications improve patient-oriented outcomes, members of the Patients, Experience, Evidence, Research (PEER) team, a group of Canadian primary care professionals who develop evidence-based guidelines, told attendees at the Family Medicine Forum (FMF) 2024.
In a thought-provoking presentation called “Axe the Rx: Deprescribing Chronic Medications with PEER,” the panelists gave examples of medications that may be safely stopped or tapered, particularly for older adults “whose pill bag is heavier than their lunch bag.”
Curbing Cardiovascular Drugs
The 2021 Canadian Cardiovascular Society Guidelines for the Management of Dyslipidemia for the Prevention of Cardiovascular Disease in Adults call for reaching an LDL-C < 1.8 mmol/L in secondary cardiovascular prevention by potentially adding on medical therapies such as proprotein convertase subtilisin/kexin type 9 inhibitors or ezetimibe or both if that target is not reached with the maximal dosage of a statin.
But family physicians do not need to follow this guidance for their patients who have had a myocardial infarction, said Ontario family physician Jennifer Young, MD, a physician advisor in the Canadian College of Family Physicians’ Knowledge Experts and Tools Program.
Treating to below 1.8 mmol/L “means lab testing for the patients,” Young told this news organization. “It means increasing doses [of a statin] to try and get to that level.” If the patient is already on the highest dose of a statin, it means adding other medications that lower cholesterol.
“If that was translating into better outcomes like [preventing] death and another heart attack, then all of that extra effort would be worth it,” said Young. “But we don’t have evidence that it actually does have a benefit for outcomes like death and repeated heart attacks,” compared with putting them on a high dose of a potent statin.
Tapering Opioids
Before placing patients on an opioid taper, clinicians should first assess them for opioid use disorder (OUD), said Jessica Kirkwood, MD, assistant professor of family medicine at the University of Alberta in Edmonton, Canada. She suggested using the Prescription Opioid Misuse Index questionnaire to do so.
Clinicians should be much more careful in initiating a taper with patients with OUD, said Kirkwood. They must ensure that these patients are motivated to discontinue their opioids. “We’re losing 21 Canadians a day to the opioid crisis. We all know that cutting someone off their opioids and potentially having them seek opioids elsewhere through illicit means can be fatal.”
In addition, clinicians should spend more time counseling patients with OUD than those without, Kirkwood continued. They must explain to these patients how they are being tapered (eg, the intervals and doses) and highlight the benefits of a taper, such as reduced constipation. Opioid agonist therapy (such as methadone or buprenorphine) can be considered in these patients.
Some research has pointed to the importance of patient motivation as a factor in the success of opioid tapers, noted Kirkwood.
Deprescribing Benzodiazepines
Benzodiazepine receptor agonists, too, often can be deprescribed. These drugs should not be prescribed to promote sleep on a long-term basis. Yet clinicians commonly encounter patients who have been taking them for more than a year, said pharmacist Betsy Thomas, assistant adjunct professor of family medicine at the University of Alberta.
The medications “are usually fairly effective for the first couple of weeks to about a month, and then the benefits start to decrease, and we start to see more harms,” she said.
Some of the harms that have been associated with continued use of benzodiazepine receptor agonists include delayed reaction time and impaired cognition, which can affect the ability to drive, the risk for falls, and the risk for hip fractures, she noted. Some research suggests that these drugs are not an option for treating insomnia in patients aged 65 years or older.
Clinicians should encourage tapering the use of benzodiazepine receptor agonists to minimize dependence and transition patients to nonpharmacologic approaches such as cognitive behavioral therapy to manage insomnia, she said. A recent study demonstrated the efficacy of the intervention, and Thomas suggested that family physicians visit the mysleepwell.ca website for more information.
Young, Kirkwood, and Thomas reported no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM FMF 2024
Aliens, Ian McShane, and Heart Disease Risk
This transcript has been edited for clarity.
I was really struggling to think of a good analogy to explain the glaring problem of polygenic risk scores (PRS) this week. But I think I have it now. Go with me on this.
An alien spaceship parks itself, Independence Day style, above a local office building.
But unlike the aliens that gave such a hard time to Will Smith and Brent Spiner, these are benevolent, technologically superior guys. They shine a mysterious green light down on the building and then announce, maybe via telepathy, that 6% of the people in that building will have a heart attack in the next year.
They move on to the next building. “Five percent will have a heart attack in the next year.” And the next, 7%. And the next, 2%.
Let’s assume the aliens are entirely accurate. What do you do with this information?
Most of us would suggest that you find out who was in the buildings with the higher percentages. You check their cholesterol levels, get them to exercise more, do some stress tests, and so on.
But that said, you’d still be spending a lot of money on a bunch of people who were not going to have heart attacks. So, a crack team of spies — in my mind, this is definitely led by a grizzled Ian McShane — infiltrate the alien ship, steal this predictive ray gun, and start pointing it, not at buildings but at people.
In this scenario, one person could have a 10% chance of having a heart attack in the next year. Another person has a 50% chance. The aliens, seeing this, leave us one final message before flying into the great beyond: “No, you guys are doing it wrong.”
This week: how people and companies are misusing an advanced predictive technology, the PRS, and a study that shows just how problematic this is.
We all know that genes play a significant role in our health outcomes. Some diseases (Huntington disease, cystic fibrosis, sickle cell disease, hemochromatosis, and Duchenne muscular dystrophy, for example) are entirely driven by genetic mutations.
The vast majority of chronic diseases we face are not driven by genetics, but they may be enhanced by genetics. Coronary heart disease (CHD) is a prime example. There are clearly environmental risk factors, like smoking, that dramatically increase risk. But there are also genetic underpinnings; about half the risk for CHD comes from genetic variation, according to one study.
But in the case of those common diseases, it’s not one gene that leads to increased risk; it’s the aggregate effect of multiple risk genes, each contributing a small amount of risk to the final total.
The promise of PRS was based on this fact. Take the genome of an individual, identify all the risk genes, and integrate them into some final number that represents your genetic risk of developing CHD.
The way you derive a PRS is take a big group of people and sequence their genomes. Then, you see who develops the disease of interest — in this case, CHD. If the people who develop CHD are more likely to have a particular mutation, that mutation goes in the risk score. Risk scores can integrate tens, hundreds, even thousands of individual mutations to create that final score.
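As a toy illustration of that final step (the variant IDs and weights here are made up for the sketch, not drawn from any published score), a PRS is just a weighted sum of an individual's risk-allele counts:

```python
# Toy polygenic risk score: a weighted sum of risk-allele counts.
# Variant IDs and per-allele weights are hypothetical, for illustration only.
RISK_WEIGHTS = {
    "rs0001": 0.12,   # per-allele contribution (hypothetical)
    "rs0002": 0.05,
    "rs0003": -0.03,  # some alleles can be protective
}

def polygenic_risk_score(genotype: dict) -> float:
    """genotype maps variant ID -> number of risk alleles carried (0, 1, or 2)."""
    return sum(RISK_WEIGHTS[v] * genotype.get(v, 0) for v in RISK_WEIGHTS)

# One individual carrying 2, 1, and 0 copies of the three risk alleles:
score = polygenic_risk_score({"rs0001": 2, "rs0002": 1, "rs0003": 0})
print(round(score, 2))  # 0.29
```

Real scores do the same arithmetic over thousands of variants, then typically report where that raw number falls in a reference population's distribution.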
There are literally dozens of PRS for CHD. And there are companies that will calculate yours right now for a reasonable fee.
The accuracy of these scores is assessed at the population level. It’s the alien ray gun thing. Researchers apply the PRS to a big group of people and say 20% of them should develop CHD. If indeed 20% develop CHD, they say the score is accurate. And that’s true.
This transcript has been edited for clarity.
I was really struggling to think of a good analogy to explain the glaring problem of polygenic risk scores (PRS) this week. But I think I have it now. Go with me on this.
An alien spaceship parks itself, Independence Day style, above a local office building.
But unlike the aliens that gave such a hard time to Will Smith and Brent Spiner, these are benevolent, technologically superior guys. They shine a mysterious green light down on the building and then announce, maybe via telepathy, that 6% of the people in that building will have a heart attack in the next year.
They move on to the next building. “Five percent will have a heart attack in the next year.” And the next, 7%. And the next, 2%.
Let’s assume the aliens are entirely accurate. What do you do with this information?
Most of us would suggest that you find out who was in the buildings with the higher percentages. You check their cholesterol levels, get them to exercise more, do some stress tests, and so on.
But that said, you’d still be spending a lot of money on a bunch of people who were not going to have heart attacks. So, a crack team of spies — in my mind, this is definitely led by a grizzled Ian McShane — infiltrates the alien ship, steals this predictive ray gun, and starts pointing it, not at buildings but at people.
In this scenario, one person could have a 10% chance of having a heart attack in the next year. Another person has a 50% chance. The aliens, seeing this, leave us one final message before flying into the great beyond: “No, you guys are doing it wrong.”
This week: the people and companies using an advanced predictive technology, PRS, wrong — and a study that shows just how problematic this is.
We all know that genes play a significant role in our health outcomes. Some diseases (Huntington disease, cystic fibrosis, sickle cell disease, hemochromatosis, and Duchenne muscular dystrophy, for example) are entirely driven by genetic mutations.
The vast majority of chronic diseases we face are not driven by genetics alone, but genetics can amplify their risk. Coronary heart disease (CHD) is a prime example. There are clear environmental risk factors, like smoking, that dramatically increase risk. But there are also genetic underpinnings; about half the risk for CHD comes from genetic variation, according to one study.
But in the case of those common diseases, it’s not one gene that leads to increased risk; it’s the aggregate effect of multiple risk genes, each contributing a small amount of risk to the final total.
The promise of PRS was based on this fact. Take the genome of an individual, identify all the risk genes, and integrate them into some final number that represents your genetic risk of developing CHD.
The way you derive a PRS is to take a big group of people and sequence their genomes. Then, you see who develops the disease of interest — in this case, CHD. If the people who develop CHD are more likely to have a particular mutation, that mutation goes in the risk score. Risk scores can integrate tens, hundreds, even thousands of individual mutations to create that final score.
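To make this concrete, a PRS is essentially a weighted sum of risk-allele counts. Here is a minimal sketch in Python; the variant IDs, weights, and genotype are invented for illustration and are not from any published CHD score.

```python
# Hypothetical per-variant weights (effect sizes from a discovery cohort).
weights = {"rs001": 0.12, "rs002": -0.05, "rs003": 0.30}

# One person's genotype: count of risk alleles (0, 1, or 2) per variant.
genotype = {"rs001": 2, "rs002": 1, "rs003": 0}

def polygenic_risk_score(weights, genotype):
    """Sum of (risk-allele count x per-variant weight) over all variants."""
    return sum(w * genotype.get(variant, 0) for variant, w in weights.items())

score = polygenic_risk_score(weights, genotype)
print(round(score, 2))  # 0.12*2 + (-0.05)*1 + 0.30*0 = 0.19
```

In practice the raw sum is then compared against a reference population to report a percentile, which is the number patients actually see.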
There are literally dozens of PRS for CHD. And there are companies that will calculate yours right now for a reasonable fee.
The accuracy of these scores is assessed at the population level. It’s the alien ray gun thing. Researchers apply the PRS to a big group of people and say 20% of them should develop CHD. If indeed 20% develop CHD, they say the score is accurate. And that’s true.
But what happens next is the problem. Companies and even doctors have been marketing PRS to individuals. And honestly, it sounds amazing. “We’ll use sophisticated techniques to analyze your genetic code and integrate the information to give you your personal risk for CHD.” Or dementia. Or other diseases. A lot of people would want to know this information.
It turns out, though, that this is where the system breaks down. And it is nicely illustrated by this study, appearing November 16 in JAMA.
The authors wanted to see how PRS, which are developed to predict disease in a group of people, work when applied to an individual.
They identified 48 previously published PRS for CHD. They applied those scores to more than 170,000 individuals across multiple genetic databases. And, by and large, the scores worked as advertised, at least across the entire group. The weighted accuracy of all 48 scores was around 78%. They aren’t perfect, of course. We wouldn’t expect them to be, since CHD is not entirely driven by genetics. But 78% accurate isn’t too bad.
But that accuracy is at the population level. At the level of the office building. At the individual level, it was a vastly different story.
This is best illustrated by this plot, which shows the score from 48 different PRS for CHD within the same person. A note here: It is arranged by the publication date of the risk score, but these were all assessed on a single blood sample at a single point in time in this study participant.
The individual scores are all over the map. Using one risk score gives an individual a risk that is near the 99th percentile — a ticking time bomb of CHD. Another score indicates a level of risk at the very bottom of the spectrum — highly reassuring. A bunch of scores fall somewhere in between. In other words, as a doctor, the risk I will discuss with this patient is more strongly determined by which PRS I happen to choose than by his actual genetic risk, whatever that is.
This may seem counterintuitive. All these risk scores were similarly accurate within a population; how can they give different results for an individual? The answer is simpler than you may think. Two scores can have identical overall accuracy while disagreeing about individuals: whenever one score correctly classifies a person that another score misses, and vice versa in equal numbers, both end up with the same accuracy.
Let’s imagine we have a population of 40 people.
Risk score model 1 correctly classified 30 of them for 75% accuracy. Great.
Risk score model 2 also correctly classified 30 of our 40 individuals, for 75% accuracy. It’s just a different 30.
Risk score model 3 also correctly classified 30 of 40, but another different 30.
I’ve colored this to show you all the different overlaps. What you can see is that although each score has similar accuracy, the individual people have a bunch of different colors, indicating that some scores worked for them and some didn’t. That’s a real problem.
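The 40-person example above can be sketched directly. The three "models" below are hand-built for illustration: each is correct for a different 30 of the same 40 people, so all three report 75% accuracy while agreeing on only a fraction of individuals.

```python
people = set(range(40))

# Each model misses a different block of 10 people.
model1_correct = people - set(range(0, 10))    # misses people 0-9
model2_correct = people - set(range(10, 20))   # misses people 10-19
model3_correct = people - set(range(20, 30))   # misses people 20-29

for i, correct in enumerate([model1_correct, model2_correct, model3_correct], 1):
    print(f"model {i} accuracy: {len(correct) / len(people):.0%}")  # 75% each

# Same population-level accuracy, yet only the people missed by no model
# are classified correctly by all three.
agree_all = model1_correct & model2_correct & model3_correct
print(f"people all three models get right: {len(agree_all)} of 40")  # 10 of 40
```

Three scores with identical accuracy agree on only 10 of 40 people — which is exactly why one patient can land at the 99th percentile on one PRS and near the bottom on another.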
This has not stopped companies from advertising PRS for all sorts of diseases. Companies are even using PRS to decide which fetuses to implant during IVF therapy, which is a particularly egregious misuse of this technology that I have written about before.
How do you fix this? Our aliens tried to warn us. This is not how you are supposed to use this ray gun. You are supposed to use it to identify groups of people at higher risk to direct more resources to that group. That’s really all you can do.
It’s also possible that we need to match the risk score to the individual in a better way. This is likely driven by the fact that risk scores tend to work best in the populations in which they were developed, and many of them were developed in people of largely European ancestry.
It is worth noting that if a PRS had perfect accuracy at the population level, it would also necessarily have perfect accuracy at the individual level. But there aren’t any scores like that. It’s possible that combining various scores may increase the individual accuracy, but that hasn’t been demonstrated yet either.
Look, genetics plays, and will continue to play, a major role in healthcare. At the same time, sequencing entire genomes is a technology that is ripe for hype and thus misuse. Or even abuse. Fundamentally, this JAMA study reminds us that accuracy in a population and accuracy in an individual are not the same. But more deeply, it reminds us that just because a technology is new or cool or expensive doesn’t mean it will work in the clinic.
Wilson is associate professor of medicine and public health and director of the Clinical and Translational Research Accelerator at Yale University, New Haven, Connecticut. He has disclosed no relevant financial relationships.
A version of this article appeared on Medscape.com.