Will the Federal Non-Compete Ban Take Effect?
The final rule will not go into effect until 120 days after its publication in the Federal Register, which took place on May 7, and numerous legal challenges appear to be on the horizon. The rule bans most non-compete agreements, with very limited exceptions. Its principal components are as follows:
- After the effective date, most non-compete agreements (which prevent departing employees from signing with a new employer for a defined period within a specific geographic area) are banned nationwide.
- The rule exempts certain “senior executives,” i.e., individuals who earn more than $151,164 annually and serve in policy-making positions.
- There is another major exception for non-competes connected with a sale of a business.
- While not explicitly stated, the rule arguably exempts non-profits, tax-exempt hospitals, and other tax-exempt entities.
- Employers must provide verbal and written notice to employees regarding existing agreements, which would be voided under the rule.
The final rule is the latest skirmish in an ongoing, years-long debate. Twelve states have already put non-compete bans in place, according to a recent paper, and they may serve as a harbinger of things to come should the federal ban go into effect. Each state rule varies in its specifics as states respond to local market conditions. While some states ban all non-compete agreements outright, others limit them based on variables, such as income and employment circumstances. Of course, should the federal ban take effect, it will supersede whatever rules the individual states have in place.
In drafting the rule, the FTC reasoned that non-compete clauses constitute restraint of trade, and eliminating them could potentially increase worker earnings as well as lower health care costs by billions of dollars. In its statements on the proposed ban, the FTC claimed that it could lower health spending across the board by almost $150 billion per year and return $300 million to workers each year in earnings. The agency cited a large body of research that non-competes make it harder for workers to move between jobs and can raise prices for goods and services, while suppressing wages for workers and inhibiting the creation of new businesses.
Most physicians affected by non-compete agreements heavily favor the new rule, because it would give them more control over their careers and expand their practice and income opportunities. It would allow them to take a new job with a competing organization, upending a long-standing mechanism that hospitals and health care systems have relied on to keep staff in place.
The rule would, however, leave in place the “non-solicitation” clauses that many health care organizations use. That means that if physicians leave an employer, they cannot reach out to former patients and colleagues to bring them along, or invite them to join them, at the new practice.
Within that clause, however, the FTC has specified that if such a non-solicitation agreement has the “equivalent effect” of a non-compete, the agency would treat it as one. That means that, even if the non-solicitation carve-out stands, individual agreements could be contested as violating the non-compete ban. So there is value in reading all the fine print should the rule move forward.
Physicians in independent practices who employ physician assistants and nurse practitioners have expressed concerns that their expensively trained employees might be tempted to accept a nearby, higher-paying position. The “non-solicitation” clause would theoretically prevent them from taking patients and co-workers with them — unless it were successfully contested. Many questions remain.
Further complicating the non-compete ban issue is how it might impact nonprofit institutions. Most hospitals structured as nonprofits would theoretically be exempt from the rule, although it is not specifically stated in the rule itself, because the FTC Act gives the Commission jurisdiction over for-profit companies only. This would obviously create an unfair advantage for nonprofits, who could continue writing non-compete clauses with impunity.
All of these questions may be moot, of course, because a number of powerful entities with deep pockets have lined up in opposition to the rule. Some of them have even questioned the FTC’s authority to pass the rule at all, on the grounds that Section 5 of the FTC Act does not give it the authority to police labor markets. A lawsuit has already been filed by the US Chamber of Commerce. Other large groups in opposition are the American Medical Group Association, the American Hospital Association, and numerous large hospital and health care networks.
Only time will tell whether this issue will be regulated on a national level or remain the purview of each individual state.
Dr. Eastern practices dermatology and dermatologic surgery in Belleville, N.J. He is the author of numerous articles and textbook chapters, and is a longtime monthly columnist for Dermatology News. Write to him at [email protected].
Fluoride, Water, and Kids’ Brains: It’s Complicated
This transcript has been edited for clarity.
I recently looked back at my folder full of these medical study commentaries, this weekly video series we call Impact Factor, and realized that I’ve been doing this for a long time. More than 400 articles, believe it or not.
I’ve learned a lot in that time — about medicine, of course — but also about how people react to certain topics. If you’ve been with me this whole time, or even for just a chunk of it, you’ll know that I tend to take a measured approach to most topics. No one study is ever truly definitive, after all. But regardless of how even-keeled I may be, there are some topics that I just know in advance are going to be a bit divisive: studies about gun control; studies about vitamin D; and, of course, studies about fluoride.
Shall We Shake This Hornet’s Nest?
The fluoridation of the US water system began in 1945 with the goal of reducing cavities in the population. The CDC named water fluoridation one of the 10 great public health achievements of the 20th century, along with such inarguable achievements as the recognition of tobacco as a health hazard.
But fluoridation has never been without its detractors. One problem is that the spectrum of beliefs about the potential harm of fluoridation is huge. On one end, you have science-based concerns such as the recognition that excessive fluoride intake can cause fluorosis and stain tooth enamel. I’ll note that the EPA regulates fluoride levels — there is a fair amount of naturally occurring fluoride in water tables around the world — to prevent this. And, of course, on the other end of the spectrum, you have beliefs that are essentially conspiracy theories: “They” add fluoride to the water supply to control us.
The challenge for me is that when one “side” of a scientific debate includes the crazy theories, it can be hard to discuss that whole spectrum, since there are those who will see evidence of any adverse fluoride effect as confirmation that the conspiracy theory is true.
I can’t help this. So I’ll just say this up front: I am about to tell you about a study that shows some potential risk from fluoride exposure. I will tell you up front that there are some significant caveats to the study that call the results into question. And I will tell you up front that no one is controlling your mind, or my mind, with fluoride; they do it with social media.
Let’s Dive Into These Shark-Infested, Fluoridated Waters
We’re talking about the study, “Maternal Urinary Fluoride and Child Neurobehavior at Age 36 Months,” which appears in JAMA Network Open.
It’s a study of 229 mother-child pairs from the Los Angeles area. The moms had their urinary fluoride level measured once before 30 weeks of gestation. A neurobehavioral battery called the Preschool Child Behavior Checklist was administered to the children at age 36 months.
The main thing you’ll hear about this study — in headlines, Facebook posts, and manifestos locked in drawers somewhere — is the primary result: A 0.68-mg/L increase in urinary fluoride in the mothers, about 25 percentile points, was associated with a doubling of the risk for neurobehavioral problems in their kids when they were 3 years old.
Yikes.
But this is not a randomized trial. Researchers didn’t randomly assign some women to have high fluoride intake and some women to have low fluoride intake. They knew that other factors that might lead to neurobehavioral problems could also lead to higher fluoride intake. They represent these factors in what’s known as a directed acyclic graph, as seen here, and account for them statistically using a regression equation.
Not represented here are neighborhood characteristics. Los Angeles does not have uniformly fluoridated water, and neurobehavioral problems in kids are strongly linked to stressors in their environments. Fluoride level could be an innocent bystander.
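The confounding problem described above is easy to demonstrate. Below is a minimal sketch (not the study’s actual model; the variable names, coefficients, and sample size are invented for illustration) showing how a factor that drives both the exposure and the outcome can produce a spurious association, and how including it as a covariate in the regression shrinks the estimated exposure effect back toward zero:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic data: a single confounder drives both exposure and outcome.
confounder = rng.normal(size=n)                 # e.g. a neighborhood stressor
exposure = 0.8 * confounder + rng.normal(size=n)   # e.g. fluoride level
outcome = 1.5 * confounder + rng.normal(size=n)    # true exposure effect is zero

# Unadjusted regression: outcome ~ exposure.
# The exposure coefficient picks up the confounder's effect.
X_crude = np.column_stack([np.ones(n), exposure])
beta_crude, *_ = np.linalg.lstsq(X_crude, outcome, rcond=None)

# Adjusted regression: outcome ~ exposure + confounder.
# With the confounder in the model, the exposure coefficient is near zero.
X_adj = np.column_stack([np.ones(n), exposure, confounder])
beta_adj, *_ = np.linalg.lstsq(X_adj, outcome, rcond=None)

print(beta_crude[1])  # substantially above zero (spurious)
print(beta_adj[1])    # close to zero
```

The catch, of course, is that adjustment only works for confounders you measured; anything left out of the model (like neighborhood characteristics here) can still bias the estimate.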
I’m really just describing the classic issue of correlation versus causation here, the bane of all observational research and — let’s be honest — a bit of a crutch that allows us to disregard the results of studies we don’t like, provided the study wasn’t a randomized trial.
But I have a deeper issue with this study than the old “failure to adjust for relevant confounders” thing, as important as that is.
The exposure of interest in this study is maternal urinary fluoride, as measured in a spot sample. It’s not often that I get to go deep on nephrology in this space, but let’s think about that for a second. Let’s assume for a moment that fluoride is toxic to the developing fetal brain, the main concern raised by the results of the study. How would that work? Presumably, mom would be ingesting fluoride from various sources (like the water supply), and that fluoride would get into her blood, and from her blood across the placenta to the baby’s blood, and into the baby’s brain.
Is Urinary Fluoride a Good Measure of Blood Fluoride?
It’s not great. Empirically, we have data that tell us that levels of urine fluoride are not all that similar to levels of serum fluoride. In 2014, a study investigated the correlation between urine and serum fluoride in a cohort of 60 schoolchildren and found a correlation coefficient of around 0.5.
Why isn’t urine fluoride a great proxy for serum fluoride? The most obvious reason is urine concentration. Human urine osmolality can range from about 50 to 1,200 mOsm/kg (a 24-fold difference) depending on hydration status. Over the course of 24 hours, for example, the amount of fluoride you put out in your urine may be fairly stable in relation to intake, but a single spot urine sample would be wildly variable. The authors know this, of course, and so they divide the measured urine fluoride by the specific gravity of the urine to give a sort of “dilution-adjusted” value. That’s what is actually used in this study. But specific gravity is itself an imperfect measure of how dilute the urine is.
This is something that comes up a lot in urinary biomarker research and it’s not that hard to get around. The best thing would be to just measure blood levels of fluoride. The second best option is 24-hour fluoride excretion. After that, the next best thing would be to adjust the spot concentration by other markers of urinary dilution — creatinine or osmolality — as sensitivity analyses. Any of these approaches would lend credence to the results of the study.
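To make those dilution adjustments concrete, here is a hypothetical sketch of two common forms: a specific-gravity correction that rescales the spot concentration to a reference urine density, and the creatinine-based alternative mentioned above. The function names, the reference specific gravity of 1.020, and the sample values are illustrative assumptions, not the study’s actual formula:

```python
def sg_adjust(fluoride_mg_l: float, sg: float, sg_ref: float = 1.020) -> float:
    """Specific-gravity dilution correction for a spot urine sample.

    Rescales the measured concentration to what it would be at a
    reference specific gravity (sg_ref here is an assumed typical value).
    """
    return fluoride_mg_l * (sg_ref - 1.0) / (sg - 1.0)

def creatinine_adjust(fluoride_mg_l: float, creatinine_g_l: float) -> float:
    """Alternative adjustment: express fluoride per gram of urinary creatinine."""
    return fluoride_mg_l / creatinine_g_l

# Dilute urine (low specific gravity) is adjusted upward;
# concentrated urine (high specific gravity) is adjusted downward.
print(sg_adjust(0.5, sg=1.010))  # dilute sample, scaled up
print(sg_adjust(0.5, sg=1.030))  # concentrated sample, scaled down
```

Either way, the adjustment is only as good as the dilution marker itself, which is the author’s point: specific gravity, creatinine, and osmolality each approximate dilution imperfectly.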
Urinary fluoride excretion is pH dependent. The more acidic the urine, the less fluoride is excreted. Many things — including, importantly, diet — affect urine pH. And it is not a stretch to think that diet may also affect the developing fetus. Neither urine pH nor dietary habits were accounted for in this study.
So, here we are. We have an observational study suggesting a harm that may be associated with fluoride. There may be a causal link here, in which case we need further studies to weigh the harm against the more well-established public health benefit. Or, this is all correlation — an illusion created by the limitations of observational data, and the unique challenges of estimating intake from a single urine sample. In other words, this study has something for everyone, fluoride boosters and skeptics alike. Let the arguments begin. But, if possible, leave me out of it.
Dr. Wilson is associate professor of medicine and public health and director of the Clinical and Translational Research Accelerator at Yale University, New Haven, Conn. He has disclosed no relevant financial relationships.
A version of this article appeared on Medscape.com.
Yikes.
But this is not a randomized trial. Researchers didn’t randomly assign some women to have high fluoride intake and some women to have low fluoride intake. They knew that other factors that might lead to neurobehavioral problems could also lead to higher fluoride intake. They represent these factors in what’s known as a directed acyclic graph and account for them statistically using a regression equation.
Not represented here are neighborhood characteristics. Los Angeles does not have uniformly fluoridated water, and neurobehavioral problems in kids are strongly linked to stressors in their environments. Fluoride level could be an innocent bystander.
I’m really just describing the classic issue of correlation versus causation here, the bane of all observational research and — let’s be honest — a bit of a crutch that allows us to disregard the results of studies we don’t like, provided the study wasn’t a randomized trial.
But I have a deeper issue with this study than the old “failure to adjust for relevant confounders” thing, as important as that is.
The exposure of interest in this study is maternal urinary fluoride, as measured in a spot sample. It’s not often that I get to go deep on nephrology in this space, but let’s think about that for a second. Let’s assume for a moment that fluoride is toxic to the developing fetal brain, the main concern raised by the results of the study. How would that work? Presumably, mom would be ingesting fluoride from various sources (like the water supply), and that fluoride would get into her blood, and from her blood across the placenta to the baby’s blood, and into the baby’s brain.
Is Urinary Fluoride a Good Measure of Blood Fluoride?
It’s not great. Empirically, we have data that tell us that levels of urine fluoride are not all that similar to levels of serum fluoride. In 2014, a study investigated the correlation between urine and serum fluoride in a cohort of 60 schoolchildren and found a correlation coefficient of around 0.5.
Why isn’t urine fluoride a great proxy for serum fluoride? The most obvious reason is urine concentration. Human urine osmolality can range from about 50 to 1200 mOsm/kg (a 24-fold difference) depending on hydration status. Over the course of 24 hours, for example, the amount of fluoride you put out in your urine may be fairly stable in relation to intake, but a single spot urine sample would be wildly variable. The authors know this, of course, and so they divide the measured urine fluoride by the specific gravity of the urine to give a sort of “dilution adjusted” value. That’s what is actually used in this study. But specific gravity is, itself, an imperfect measure of how dilute the urine is.
This is something that comes up a lot in urinary biomarker research and it’s not that hard to get around. The best thing would be to just measure blood levels of fluoride. The second best option is 24-hour fluoride excretion. After that, the next best thing would be to adjust the spot concentration by other markers of urinary dilution — creatinine or osmolality — as sensitivity analyses. Any of these approaches would lend credence to the results of the study.
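For readers curious what the dilution adjustment described above actually does, here is a minimal sketch in Python of the standard specific-gravity correction used in urinary biomarker work. The reference value of 1.020 and the function name are my own assumptions for illustration, not details taken from the study.

```python
def sg_adjust(measured_mg_per_l: float, sg: float, sg_ref: float = 1.020) -> float:
    """Scale a spot-urine analyte concentration to a reference specific
    gravity (SG). Dilute samples (low SG) are scaled up; concentrated
    samples (high SG) are scaled down. sg_ref = 1.020 is one commonly
    used convention, not necessarily the one the study authors chose."""
    if sg <= 1.0:
        raise ValueError("urine specific gravity must exceed 1.0")
    return measured_mg_per_l * (sg_ref - 1.0) / (sg - 1.0)

# The same measured fluoride level (0.5 mg/L) implies very different
# exposure depending on how dilute the sample happens to be:
print(sg_adjust(0.5, sg=1.010))  # dilute sample: adjusted upward
print(sg_adjust(0.5, sg=1.030))  # concentrated sample: adjusted downward
```

Even after a correction like this, two equally exposed people can land far apart on a single spot sample, which is why 24-hour collection or serum measurement is the stronger design choice.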
Urinary fluoride excretion is pH dependent. The more acidic the urine, the less fluoride is excreted. Many things — including, importantly, diet — affect urine pH. And it is not a stretch to think that diet may also affect the developing fetus. Neither urine pH nor dietary habits were accounted for in this study.
So, here we are. We have an observational study suggesting a harm that may be associated with fluoride. There may be a causal link here, in which case we need further studies to weigh the harm against the more well-established public health benefit. Or, this is all correlation — an illusion created by the limitations of observational data, and the unique challenges of estimating intake from a single urine sample. In other words, this study has something for everyone, fluoride boosters and skeptics alike. Let the arguments begin. But, if possible, leave me out of it.
Dr. Wilson is associate professor of medicine and public health and director of the Clinical and Translational Research Accelerator at Yale University, New Haven, Conn. He has disclosed no relevant financial relationships.
A version of this article appeared on Medscape.com.
Does More Systemic Treatment for Advanced Cancer Improve Survival?
The conclusion of a new study published online May 16 in JAMA Oncology may help reassure oncologists that giving systemic anticancer therapy (SACT) at the most advanced stages of cancer does not prolong the patient’s life, the authors wrote. It may also encourage them to focus instead on honest communication with patients about their choices, wrote Maureen E. Canavan, PhD, of the Cancer and Outcomes, Public Policy and Effectiveness Research (COPPER) Center at the Yale School of Medicine in New Haven, Connecticut, and colleagues.
How Was the Study Conducted?
Researchers used Flatiron Health, a nationwide electronic health records database of academic and community practices throughout the United States. They identified 78,446 adults with advanced or metastatic stages of one of six common cancers (breast, colorectal, urothelial, non–small cell lung cancer [NSCLC], pancreatic, and renal cell carcinoma) who were treated at healthcare practices from 2015 to 2019. They then stratified practices into quintiles based on how often the practices treated patients with any systemic therapy, including chemotherapy and immunotherapy, in the last 14 days of life, and assessed whether patients in practices with greater use of systemic treatment at very advanced stages had longer overall survival.
What Were the Main Findings?
“We saw that there were absolutely no survival differences between the practices that used more systemic therapy for very advanced cancer and the practices that used less,” said senior author Kerin Adelson, MD, chief quality and value officer at MD Anderson Cancer Center in Houston, Texas. In some cancers, patients in the lowest quintile (practices with the lowest rates of systemic end-of-life care) had shorter survival than those in the highest quintiles; in other cancers, those in the lowest quintiles survived longer than those in the highest quintiles.
“What’s important is that none of those differences, after you control for other factors, was statistically significant,” Dr. Adelson said. “That was the same in every cancer type we looked at.”
An example is seen in advanced urothelial cancer. Practices in the first quintile (lowest rates of systemic care at end of life) had an SACT rate range of 4.0-9.1; the range in the highest quintile was 19.8-42.6. But the median overall survival (OS) for patients in the lowest quintile was 12.7 months, not statistically different from the median OS in the highest quintile (11 months).
How Does This Study Add to the Literature?
The American Society of Clinical Oncology (ASCO) and the National Quality Forum (NQF) developed a cancer quality metric to reduce SACT at the end of life. NQF 0210 is the ratio of patients who receive systemic treatment within 14 days of death to all patients who die of cancer. The quality metric has been widely adopted and used in value-based care reporting.
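For concreteness, the ratio just described can be sketched as a simple calculation. The function name and percentage formatting are illustrative assumptions; the official measure specification includes eligibility and exclusion rules not reproduced here.

```python
def nqf_0210(deaths_with_sact_last_14d: int, all_cancer_deaths: int) -> float:
    """NQF 0210 as described in the text: among all patients who died of
    cancer, the share who received systemic anticancer therapy (SACT)
    within 14 days of death, expressed as a percentage.
    (Illustrative sketch only, not the official measure specification.)"""
    if all_cancer_deaths <= 0:
        raise ValueError("denominator must be a positive count of deaths")
    return 100.0 * deaths_with_sact_last_14d / all_cancer_deaths

# e.g., a practice where 12 of 150 cancer decedents received SACT
# in their final 14 days scores 8.0 on this metric:
print(nqf_0210(12, 150))  # 8.0
```

Note that the denominator counts only patients who died; how the metric treats patients who received late-line therapy and lived is the point of contention the study authors take up.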
But the metric has been criticized because it focuses only on people who died and not people who lived longer because they benefited from the systemic therapy, the authors wrote.
Dr. Canavan’s team focused on all patients treated in the practices, not just those who died, which may put that criticism to rest, Dr. Adelson said.
“I personally believed the ASCO and NQF metric was appropriate and the criticisms were off base,” said Otis Brawley, MD, associate director of community outreach and engagement at the Sidney Kimmel Comprehensive Cancer Center at Johns Hopkins University School of Medicine in Baltimore. “Canavan’s study is evidence suggesting the metrics were appropriate.”
This study included not just chemotherapy, as some other studies have, but targeted therapies and immunotherapies as well. Dr. Adelson said some think that the newer drugs might change the prognosis at end of life. But this study shows “even those drugs are not helping patients to survive with very advanced cancer,” she said.
Could This Change Practice?
The authors noted that end-of-life SACT has been linked with more acute care use, delays in conversations about care goals, late enrollment in hospice, higher costs, and a potentially shorter, poorer-quality life.
Dr. Adelson said she’s hoping that the knowledge that there’s no survival benefit for use of SACT for patients with advanced solid tumors who are nearing the end of life will lead instead to more conversations about prognosis with patients and transitions to palliative care.
“Palliative care has actually been shown to improve quality of life and, in some studies, even survival,” she said.
“I doubt it will change practice, but it should,” Dr. Brawley said. “The study suggests that doctors and patients have too much hope for chemotherapy as patients’ disease progresses. In the US especially, there is a tendency to believe we have better therapies than we truly do and we have difficulty accepting that the patient is dying. Many patients get third- and fourth-line chemotherapy that is highly likely to increase suffering without realistic hope of prolonging life and especially no hope of prolonging life with good quality.”
Dr. Adelson disclosed ties with AbbVie, Quantum Health, Gilead, ParetoHealth, and Carrum Health. Various coauthors disclosed ties with Roche, AbbVie, Johnson & Johnson, Genentech, the National Comprehensive Cancer Network, and AstraZeneca. The study was funded by Flatiron Health, an independent member of the Roche group. Dr. Brawley reports no relevant financial disclosures.
FROM JAMA ONCOLOGY
Urine Tests Could Be ‘Enormous Step’ in Diagnosing Cancer
Emerging science suggests that the body’s “liquid gold” could be particularly useful for liquid biopsies, offering a convenient, pain-free, and cost-effective way to spot otherwise hard-to-detect cancers.
“The search for cancer biomarkers that can be detected in urine could provide an enormous step forward to decrease cancer patient mortality,” said Kenneth R. Shroyer, MD, PhD, a pathologist at Stony Brook University, Stony Brook, New York, who studies cancer biomarkers.
Physicians have long known that urine can reveal a lot about our health — that’s why urinalysis has been part of medicine for 6000 years. Urine tests can detect diabetes, pregnancy, drug use, and urinary or kidney conditions.
But other conditions leave clues in urine, too, and cancer may be one of the most promising. “Urine testing could detect biomarkers of early-stage cancers, not only from local but also distant sites,” Dr. Shroyer said. It could also help flag recurrence in cancer survivors who have undergone treatment.
Granted, cancer biomarkers in urine are not nearly as widely studied as those in the blood, Dr. Shroyer noted. But a new wave of urine tests suggests research is gaining pace.
“The recent availability of high-throughput screening technologies has enabled researchers to investigate cancer from a top-down, comprehensive approach,” said Pak Kin Wong, PhD, professor of mechanical engineering, biomedical engineering, and surgery at The Pennsylvania State University. “We are starting to understand the rich information that can be obtained from urine.”
Urine is mostly water (about 95%) and urea, a metabolic byproduct that imparts that signature yellow color (about 2%). The other 3% is a mix of waste products, minerals, and other compounds the kidneys removed from the blood. Even in trace amounts, these substances say a lot.
Among them are “exfoliated cancer cells, cell-free DNA, hormones, and the urine microbiota — the collection of microbes in our urinary tract system,” Dr. Wong said.
“It is highly promising to be one of the major biological fluids used for screening, diagnosis, prognosis, and monitoring treatment efficiency in the era of precision medicine,” Dr. Wong said.
How Urine Testing Could Reveal Cancer
Still, as exciting as the prospect is, there’s a lot to consider in the hunt for cancer biomarkers in urine. These biomarkers must be able to pass through the renal nephrons (filtering units), remain stable in urine, and have high-level sensitivity, Dr. Shroyer said. They should also have high specificity for cancer vs benign conditions and be expressed at early stages, before the primary tumor has spread.
“At this stage, few circulating biomarkers have been found that are both sensitive and specific for early-stage disease,” said Dr. Shroyer.
But there are a few promising examples under investigation in humans:
Prostate cancer. Researchers at the University of Michigan have developed a urine test that detects high-grade prostate cancer more accurately than existing tests, including PHI, SelectMDx, 4Kscore, EPI, MPS, and IsoPSA.
The MyProstateScore 2.0 (MPS2) test, which looks for 18 genes associated with high-grade tumors, could reduce unnecessary biopsies in men with elevated prostate-specific antigen levels, according to a paper published in JAMA Oncology.
It makes sense. The prostate gland secretes fluid that becomes part of the semen, traces of which enter urine. After a digital rectal exam, even more prostate fluid enters the urine. If a patient has prostate cancer, genetic material from the cancer cells will infiltrate the urine.
In the MPS2 test, researchers used polymerase chain reaction (PCR) testing in urine. “The technology used for COVID PCR is essentially the same as the PCR used to detect transcripts associated with high-grade prostate cancer in urine,” said study author Arul Chinnaiyan, MD, PhD, director of the Michigan Center for Translational Pathology at the University of Michigan, Ann Arbor. “In the case of the MPS2 test, we are doing PCR on 18 genes simultaneously on urine samples.”
A statistical model uses levels of that genetic material to predict the risk for high-grade disease, helping doctors decide what to do next. At 95% sensitivity, the MPS2 model could eliminate 35%-45% of unnecessary biopsies, compared with 15%-30% for the other tests, and reduce repeat biopsies by 46%-51%, compared with 9%-21% for the other tests.
Head and neck cancer. In a paper published in JCI Insight, researchers described a test that finds ultra-short fragments of DNA in urine to enable early detection of head and neck cancers caused by human papillomavirus.
“Our data show that a relatively small volume of urine (30-60 mL) gives overall detection results comparable to a tube of blood,” said study author Muneesh Tewari, MD, PhD, professor of hematology and oncology at the University of Michigan.
A larger volume of urine could potentially “make cancer detection even more sensitive than blood,” Dr. Tewari said, “allowing cancers to be detected at the earliest stages when they are more curable.”
The team used a technique called droplet digital PCR to detect DNA fragments that are “ultra-short” (less than 50 base pairs long) and usually missed by conventional PCR testing. This transrenal cell-free tumor DNA, which travels from the tumor into the bloodstream, is broken down small enough to pass through the kidneys and into the urine. But the fragments are still long enough to carry information about the tumor’s genetic signature.
This test could spot cancer before a tumor grows big enough — about a centimeter wide and carrying a billion cells — to spot on a CT scan or other imaging test. “When we are instead detecting fragments of DNA released from a tumor,” said Dr. Tewari, “our testing methods are very sensitive and can detect DNA in urine that came from just 5-10 cells in a tumor that died and released their DNA into the blood, which then made its way into the urine.”
Pancreatic cancer. Pancreatic ductal adenocarcinoma is one of the deadliest cancers, largely because it is diagnosed so late. A urine panel now in clinical trials could help doctors diagnose the cancer before it has spread so more people can have the tumor surgically removed, improving prognosis.
Using an enzyme-linked immunosorbent assay (ELISA), a common lab method that detects antibodies and other proteins, the team measured expression levels of three genes (LYVE1, REG1B, and TFF1) in urine samples collected from people up to 5 years before they were diagnosed with pancreatic cancer. The researchers combined this result with patients’ urinary creatinine levels, a common component of existing urinalysis, and their age to develop a risk score.
This score performed similarly to an existing blood test, CA19-9, in predicting patients’ risk for pancreatic cancer up to 1 year before diagnosis. When combined with CA19-9, the urinary panel helped spot cancer up to 2 years before diagnosis.
According to a paper in the International Journal of Cancer, “the urine panel and affiliated PancRISK are currently being validated in a prospective clinical study (UroPanc).” If all goes well, they could be implemented in clinical practice in a few years as a “noninvasive stratification tool” to identify patients for further testing, speeding up diagnosis, and saving lives.
Limitations and Promises
Each cancer type is different, and more research is needed to map out which substances in urine predict which cancers and to develop tests for mass adoption. “There are medical and technological hurdles to the large-scale implementation of urine analysis for complex diseases such as cancer,” said Dr. Wong.
One possibility: Scientists and clinicians could collaborate and use artificial intelligence techniques to combine urine test results with other data.
“It is likely that future diagnostics may combine urine with other biological samples such as feces and saliva, among others,” said Dr. Wong. “This is especially true when novel data science and machine learning techniques can integrate comprehensive data from patients that span genetic, proteomic, metabolic, microbiomic, and even behavioral data to evaluate a patient’s condition.”
One thing that excites Dr. Tewari about urine-based cancer testing: “We think it could be especially impactful for patients living in rural areas or other areas with less access to healthcare services,” he said.
A version of this article appeared on Medscape.com.
Former UCLA Doctor Receives $14 Million in Gender Discrimination Retrial
A California jury has awarded $14 million to a former University of California, Los Angeles (UCLA) oncologist who claimed she was paid thousands less than her male colleagues and wrongfully terminated after her complaints of gender-based harassment and intimidation were ignored by program leadership.
The decision comes after an 8-year legal battle in which an appellate court reversed a previous jury verdict in her favor.
Lauren Pinter-Brown, MD, a hematologic oncologist, was hired in 2005 by the University of California, Los Angeles School of Medicine — now called UCLA’s David Geffen School of Medicine. As the school’s lymphoma program director, she conducted clinical research alongside other oncology doctors, including Sven de Vos, MD.
She claimed that her professional relationship with Dr. de Vos became contentious after he demonstrated “oppositional” and “disrespectful” behavior at team meetings, such as talking over her and turning his chair so Dr. Pinter-Brown faced his back. Court documents indicated that Dr. de Vos refused to use Dr. Pinter-Brown’s title in front of colleagues despite doing so for male counterparts.
Dr. Pinter-Brown argued that she was treated as the “butt of a joke” by Dr. de Vos and other male colleagues. In 2016, she sued Dr. de Vos, the university, and its governing body, the Board of Regents, for wrongful termination.
She was awarded a $13 million verdict in 2018. However, the California Court of Appeals overturned it in 2020 after concluding that several mistakes during the court proceedings impeded the school’s right to a fair and impartial trial. The case was retried, culminating in the even higher award of $14 million issued on May 9.
“Two juries have come to virtually identical findings showing multiple problems at UCLA involving gender discrimination,” Dr. Pinter-Brown’s attorney, Carney R. Shegerian, JD, told this news organization.
A spokesperson from UCLA’s David Geffen School of Medicine said administrators are carefully reviewing the new decision.
The spokesperson told this news organization that the medical school and its health system remain “deeply committed to maintaining a workplace free from discrimination, intimidation, retaliation, or harassment of any kind” and fostering a “respectful and inclusive environment ... in research, medical education, and patient care.”
Gender Pay Disparities Persist in Medicine
The gender pay gap in medicine is well documented. The 2024 Medscape Physician Compensation Report found that male doctors earn about 29% more than their female counterparts, with the disparity growing larger among specialists. In addition, a recent JAMA Health Forum study found that male physicians earned 21%-24% more per hour than female physicians.
Dr. Pinter-Brown, who now works at the University of California, Irvine, alleged that she was paid $200,000 less annually, on average, than her male colleagues.
That’s not surprising, said Martha Gulati, MD, professor and director of preventive cardiology at Cedars-Sinai Smidt Heart Institute, Los Angeles. She coauthored a commentary about gender disparities in JAMA Network Open. Dr. Gulati told this news organization that even a “small” pay disparity of $100,000 annually adds up.
“Let’s say the [male physician] invests it at 3% and adds to it yearly. Even without a raise, in 20 years, that is approximately $3 million,” Dr. Gulati explained. “Once you find out you are paid less than your male colleagues, you are upset. Your sense of value and self-worth disappears.”
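Dr. Gulati’s back-of-the-envelope figure can be checked with a short calculation. The sketch below is purely illustrative (the $100,000 gap, 3% return, and 20-year horizon are the assumptions she states, not data from the case): it compounds last year’s balance and adds a fresh contribution each year.

```python
# Illustrative sketch: growth of a hypothetical $100,000 annual pay gap,
# invested at 3% with a new contribution added each year (an ordinary annuity).

def future_value(annual_gap: float, rate: float, years: int) -> float:
    """Future value after `years` of investing `annual_gap` each year at `rate`."""
    total = 0.0
    for _ in range(years):
        total = total * (1 + rate) + annual_gap  # grow the balance, then add this year's gap
    return total


print(f"${future_value(100_000, 0.03, 20):,.0f}")
```

The result comes to roughly $2.7 million over 20 years, consistent with the “approximately $3 million” Dr. Gulati cites.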
Eileen Barrett, MD, MPH, president-elect of the American Medical Women’s Association, said that gender discrimination is likely more prevalent than research indicates. She told this news organization that self-doubt and fear of retaliation keep many from exposing the mistreatment.
Although more women are entering medicine, too few rise to the highest positions, Dr. Barrett said.
“Unfortunately, many are pulled and pushed into specialties and subspecialties that have lower compensation and are not promoted to leadership, so just having numbers isn’t enough to achieve equity,” Dr. Barrett said.
Dr. Pinter-Brown claimed she was repeatedly harassed and intimidated by Dr. de Vos from 2008 to 2015. Despite voicing concerns multiple times about the discriminatory behavior, the only resolutions offered by the male-dominated program leadership were for her to separate from the group and conduct lymphoma research independently or to avoid interacting with Dr. de Vos, court records said.
Even the school’s male Title IX officer, Jan Tillisch, MD, who handled gender-based discrimination complaints, reportedly made sexist comments. When Dr. Pinter-Brown sought his help, he allegedly told her that she had a reputation as an “angry woman” and “diva,” court records showed.
According to court documents, Dr. Pinter-Brown endured nitpicking and research audits as retaliation for speaking out, audits that temporarily suspended her research privileges. She said she was subsequently removed from the director position and replaced by Dr. de Vos.
Female physicians who report discriminatory behavior often have unfavorable outcomes and risk future career prospects, Dr. Gulati said.
To shift this dynamic, she said, institutions must increase transparency and adopt practices that ensure female doctors receive “equal pay for equal work.”
A version of this article appeared on Medscape.com.
A California jury has awarded $14 million to a former University of California, Los Angeles (UCLA) oncologist who claimed she was paid thousands less than her male colleagues and wrongfully terminated after her complaints of gender-based harassment and intimidation were ignored by program leadership.
The decision comes after a lengthy 8-year legal battle in which an appellate judge reversed a previous jury decision in her favor.
Lauren Pinter-Brown, MD, a hematologic oncologist, was hired in 2005 by the University of California, Los Angeles School of Medicine — now called UCLA’s David Geffen School of Medicine. As the school’s lymphoma program director, she conducted clinical research alongside other oncology doctors, including Sven de Vos, MD.
She claimed that her professional relationship with Dr. de Vos became contentious after he demonstrated “oppositional” and “disrespectful” behavior at team meetings, such as talking over her and turning his chair so Dr. Pinter-Brown faced his back. Court documents indicated that Dr. de Vos refused to use Dr. Pinter-Brown’s title in front of colleagues despite doing so for male counterparts.
Dr. Pinter-Brown argued that she was treated as the “butt of a joke” by Dr. de Vos and other male colleagues. In 2016, she sued Dr. de Vos, the university, and its governing body, the Board of Regents, for wrongful termination.
She was awarded a $13 million verdict in 2018. However, the California Court of Appeals overturned it in 2020 after concluding that several mistakes during the court proceedings impeded the school’s right to a fair and impartial trial. The case was retried, culminating in the even higher award of $14 million issued on May 9.
“Two juries have come to virtually identical findings showing multiple problems at UCLA involving gender discrimination,” Dr. Pinter-Brown’s attorney, Carney R. Shegerian, JD, told this news organization.
A spokesperson from UCLA’s David Geffen School of Medicine said administrators are carefully reviewing the new decision.
The spokesperson told this news organization that the medical school and its health system remain “deeply committed to maintaining a workplace free from discrimination, intimidation, retaliation, or harassment of any kind” and fostering a “respectful and inclusive environment ... in research, medical education, and patient care.”
Gender Pay Disparities Persist in Medicine
The gender pay gap in medicine is well documented. The 2024 Medscape Physician Compensation Report found that male doctors earn about 29% more than their female counterparts, with the disparity growing larger among specialists. In addition, a recent JAMA Health Forum study found that male physicians earned 21%-24% more per hour than female physicians.
Dr. Pinter-Brown, who now works at the University of California, Irvine, alleged that she was paid $200,000 less annually, on average, than her male colleagues.
That’s not surprising, says Martha Gulati, MD, professor and director of preventive cardiology at Cedars-Sinai Smidt Heart Institute, Los Angeles. She coauthored a commentary about gender disparities in JAMA Network Open. Dr. Gulati told this news organization that even a “small” pay disparity of $100,000 annually adds up.
“Let’s say the [male physician] invests it at 3% and adds to it yearly. Even without a raise, in 20 years, that is approximately $3 million,” Dr. Gulati explained. “Once you find out you are paid less than your male colleagues, you are upset. Your sense of value and self-worth disappears.”
Eileen Barrett, MD, MPH, president-elect of the American Medical Women’s Association, said that gender discrimination is likely more prevalent than research indicates. She told this news organization that self-doubt and fear of retaliation keep many from exposing the mistreatment.
Although more women are entering medicine, too few rise to the highest positions, Dr. Barrett said.
“Unfortunately, many are pulled and pushed into specialties and subspecialties that have lower compensation and are not promoted to leadership, so just having numbers isn’t enough to achieve equity,” Dr. Barrett said.
Dr. Pinter-Brown claimed she was repeatedly harassed and intimidated by Dr. de Vos from 2008 to 2015. Despite voicing concerns multiple times about the discriminatory behavior, the only resolutions offered by the male-dominated program leadership were for her to separate from the group and conduct lymphoma research independently or to avoid interacting with Dr. de Vos, court records said.
Even the school’s male Title IX officer, Jan Tillisch, MD, who handled gender-based discrimination complaints, reportedly made sexist comments. When Dr. Pinter-Brown sought his help, he allegedly told her that she had a reputation as an “angry woman” and “diva,” court records showed.
According to court documents, Dr. Pinter-Brown endured nitpicking and research audits as retaliation for speaking out, and her research privileges were temporarily suspended. She said she was subsequently removed from the director position and replaced by Dr. de Vos.
Female physicians who report discriminatory behavior often face unfavorable outcomes and risk damaging their future career prospects, Dr. Gulati said.
To shift this dynamic, she said, institutions must increase transparency and adopt practices that ensure female doctors receive "equal pay for equal work."
A version of this article appeared on Medscape.com.
New Data to Change Practice on BP Control in Acute Stroke: INTERACT4
BASEL, SWITZERLAND — Early reduction of blood pressure has a beneficial effect in hemorrhagic stroke but a detrimental effect in ischemic stroke, new trial data show. The findings could shake up recommendations on control of blood pressure in acute stroke patients.
“This is the first time that we have randomized evidence of blood pressure control prior to reperfusion in ischemic stroke patients, and our data will challenge the current guidelines that recommend lowering blood pressure to below 180 mm Hg systolic in these patients,” said study coauthor Craig Anderson, MD, George Institute for Global Health, Sydney, Australia.
“And this study also clearly shows for the first time that getting blood pressure under control in hemorrhagic stroke patients in the first couple of hours has definitive benefits,” he added.
The findings were presented on May 16 at the European Stroke Organization Conference (ESOC) annual meeting and published online simultaneously in The New England Journal of Medicine.
A Test of Early BP Control
The trial was conducted to test the strategy of very early blood pressure control during patient transport in an ambulance after acute stroke, which investigators suspected could benefit patients with both types of stroke.
The hypothesis was that this would reduce bleeding in the brain for those with hemorrhagic stroke. For ischemic stroke patients, it was thought this strategy would speed up administration of thrombolysis, because guidelines recommend bringing blood pressure under control before thrombolysis.
For the INTERACT4 trial, which was conducted in China, 2404 patients with suspected acute stroke and elevated systolic blood pressure (≥ 150 mm Hg) who were assessed in the ambulance within 2 hours after symptom onset were randomized to receive immediate treatment with intravenous urapidil to lower the systolic blood pressure or usual blood pressure management (usual care group).
The median time between symptom onset and randomization was 61 minutes, and the mean blood pressure at randomization was 178/98 mm Hg.
Stroke was subsequently confirmed by imaging in 2240 patients, of whom 46% had a hemorrhagic stroke and 54% an ischemic stroke.
At the time of arrival at the hospital, the mean systolic blood pressure in the intervention group was 158 mm Hg, compared with 170 mm Hg in the usual care group.
The primary efficacy outcome was functional status as assessed by modified Rankin scale score at 90 days.
Overall, there was no difference between the two groups in terms of functional outcome scores (common odds ratio [OR], 1.00; 95% CI, 0.87-1.15), and the incidence of serious adverse events was similar.
But the study showed very different results in patients with hemorrhagic stroke vs those with ischemic stroke.
‘Slam-Dunk’ Effect
Dr. Anderson has led several previous trials of blood pressure control in stroke patients, some of which have suggested a benefit of lowering blood pressure in those with hemorrhagic stroke, but he says the results of the current trial are more clear-cut.
“We have never seen such a slam-dunk effect as there was in INTERACT4,” Dr. Anderson said. “Not only did we show that early reduction of blood pressure in hemorrhagic stroke patients improved functional outcome, it also reduced bleeding in the brain, improved survival and quality of life, and reduced surgery and infection complications. That’s quite remarkable.”
The findings offer “clear evidence that for patients with hemorrhagic stroke, we must get the blood pressure under control as soon as possible and introduce systems of care to ensure this happens,” he added.
The reason for the clear findings in the current trial is probably the treatment time, Dr. Anderson said.
“This is the first trial in which blood pressure has been controlled in the ambulance and occurred much earlier than in the previous trials.”
Challenging Ischemic Stroke Guidelines
The INTERACT4 results in ischemic stroke patients are likely to be more controversial.
“Our results are clearly challenging longstanding beliefs around blood pressure control in ischemic stroke prior to thrombolysis,” Dr. Anderson said.
Current guidelines recommend a blood pressure < 185 mm Hg systolic before initiation of thrombolysis because of concerns about intracerebral hemorrhage, he noted. Often, blood pressure is rapidly lowered to much lower levels in order to give thrombolysis quickly.
“Our results suggest this may not be a good idea,” Dr. Anderson said. “I think these data will shake us up a bit and make us more cautious about reducing blood pressure in these patients. Personally, I wouldn’t touch the blood pressure at all in ischemic stroke patients after these results.”
He said the mechanisms behind the different stroke types would explain the results.
“If a patient is bleeding, it makes sense that higher blood pressure would make that worse,” Dr. Anderson said. “But when a patient has a blocked artery and ischemia in the brain, it seems likely that the extra pressure is needed to keep oxygen delivery to the ischemic tissue.”
Accurate Diagnosis Necessary
Because it is not possible to make an accurate diagnosis between ischemic and hemorrhagic stroke without a CT scan, Dr. Anderson stressed that at the present time, no action on blood pressure can be taken in the ambulance.
“There is a lot of interest in developing a lightweight brain scanner to be used in ambulances, but this won’t be routinely available for several years,” he said. “So for now, quick diagnosis of the type of stroke that is occurring on the patient’s arrival at the emergency department and, for hemorrhagic stroke patients, swift action to control blood pressure at this point is critical to preserving brain function.”
Commenting on the INTERACT4 results at the ESOC meeting, Simona Sacco, MD, professor of neurology at the University of L’Aquila, Italy, said this was a very important trial that would impact clinical practice.
“The data really reinforce that hemorrhagic stroke patients must have their blood pressure reduced as soon as possible,” she stated.
Dr. Sacco said the trial emphasizes the need to be able to distinguish between a hemorrhagic and ischemic stroke in a prehospital setting and supports the introduction of more mobile stroke units carrying CT scanners and calls for the development of biomarkers that can allow rapid differentiation between the two conditions.
In an accompanying editorial, Jonathan Edlow, MD, Department of Emergency Medicine, Beth Israel Deaconess Medical Center, Boston, points out several aspects of the trial that may limit the generalizability of the findings. These include the use of urapidil as the antihypertensive agent, which is unavailable in the United States; all patients being of Han Chinese ethnicity; and an unusually high sensitivity of initial CT scans in detecting visible signs of ischemia or infarction in patients with acute ischemic stroke.
“These findings should be considered hypothesis-generating, and they make the case for validation of the trial results in other settings,” Dr. Edlow wrote.
The INTERACT4 trial was funded by the National Health and Medical Research Council of Australia, the George Institute for Global Health, several Chinese healthcare institutions, and Takeda Pharmaceuticals China. Disclosures for study and editorial authors are provided in the original articles.
A version of this article appeared on Medscape.com.
‘Big Breakthrough’: New Low-Field MRI Is Safer and Easier
For years, researchers and medical companies have explored low-field MRI systems (those with a magnetic field strength of less than 1 T) — searching for a feasible alternative to the loud, expensive machines requiring special rooms with shielding to block their powerful magnetic field.
Most low-field scanners in development are for brain scans only. In 2022, the US Food and Drug Administration (FDA) cleared the first portable MRI system — Hyperfine’s Swoop, designed for use at a patient’s bedside — for head and brain scans. But the technology has not been applied to whole-body MRI — until now.
In a new study published in Science, researchers from Hong Kong described a whole-body, ultra low–field MRI.
The device uses a 0.05 T magnet — one sixtieth the magnetic field strength of the standard 3 T MRI model common in hospitals today, said lead author Ed Wu, PhD, professor of biomedical engineering at The University of Hong Kong.
Because the field strength is so low, no protective shielding is needed. Patients and bystanders can safely use smartphones. And the scanner is safe for patients with implanted devices, such as a cochlear implant or pacemaker, or with any metal on their body or clothes. No hearing protection is required, either, because the machine is so quiet.
If all goes well, the technology could be commercially available in as little as a few years, Dr. Wu said.
But first, funding and FDA approval would be needed. “A company is going to have to come along and say, ‘This looks fantastic. We’re going to commercialize this, and we’re going to go through this certification process,’ ” said Andrew Webb, PhD, professor of radiology and the founding director of the C.J. Gorter MRI Center at the Leiden University Medical Center, Leiden, the Netherlands. (Dr. Webb was not involved in the study.)
Improving Access to MRI
One hope for this technology is to bring MRI to more people worldwide. Africa has fewer than one MRI scanner per million residents, whereas the United States has about 40.
While a new 3 T machine can cost about $1 million, the low-field version is much cheaper — only about $22,000 in materials cost per scanner, according to Dr. Wu.
A low magnetic field means less electricity, too — the machine can be plugged into a standard wall outlet. And because a fully shielded room isn’t needed, that could save another $100,000 in materials, Dr. Webb said.
Its ease of use could improve accessibility in countries with limited training, Dr. Webb pointed out.
“To be a technician is 2-3 years training for a regular MRI machine, a lot of it to do safety, a lot of it to do very subtle planning,” said Dr. Webb. “These [low-field] systems are much simpler.”
Challenges and the Future
The prototype weighs about 1.5 tons or 3000 lb. (A 3 T MRI can weigh between 6 and 13 tons or 12,000 and 26,000 lb.) That might sound like a lot, but it’s comparable to a mobile CT scanner, which is designed to be moved from room to room. Plus, “its weight can be substantially reduced if further optimized,” Dr. Wu said.
One challenge with low-field MRIs is image quality, which tends to be less clear and detailed than that from high-power machines. To address this, the research team used deep learning (artificial intelligence) to enhance the image quality. “Computing power and large-scale data underpin our success, which tackles the physics and math problems that are traditionally considered intractable in existing MRI methodology,” Dr. Wu said.
Dr. Webb said he was impressed by the image quality shown in the study. They “look much higher quality than you would expect from such a low-field system,” he said. Still, only healthy volunteers were scanned. The true test will be using it to view subtle pathologies, Dr. Webb said.
That’s what Dr. Wu and his team are working on now — taking scans to diagnose various medical conditions. His group’s brain-only version of the low-field MRI has been used for diagnosis, he noted.
A version of this article appeared on Medscape.com.
For years, researchers and medical companies have explored low-field MRI systems (those with a magnetic field strength of less than 1 T) — searching for a feasible alternative to the loud, expensive machines requiring special rooms with shielding to block their powerful magnetic field.
Most low-field scanners in development are for brain scans only. In 2022, the US Food and Drug Administration (FDA) cleared the first portable MRI system — Hyperfine’s Swoop, designed for use at a patient’s bedside — for head and brain scans. But the technology has not been applied to whole-body MRI — until now.
In a new study published in Science, researchers from Hong Kong described a whole-body, ultra low–field MRI.
The device uses a 0.05 T magnet — one sixtieth the magnetic field strength of the standard 3 T MRI model common in hospitals today, said lead author Ed Wu, PhD, professor of biomedical engineering at The University of Hong Kong.
Because the field strength is so low, no protective shielding is needed. Patients and bystanders can safely use smart phones . And the scanner is safe for patients with implanted devices, like a cochlear implant or pacemaker, or any metal on their body or clothes. No hearing protection is required, either, because the machine is so quiet.
If all goes well, the technology could be commercially available in as little as a few years, Dr. Wu said.
But first, funding and FDA approval would be needed. “A company is going to have to come along and say, ‘This looks fantastic. We’re going to commercialize this, and we’re going to go through this certification process,’ ” said Andrew Webb, PhD, professor of radiology and the founding director of the C.J. Gorter MRI Center at the Leiden University Medical Center, Leiden, the Netherlands. (Dr. Webb was not involved in the study.)
Improving Access to MRI
One hope for this technology is to bring MRI to more people worldwide. Africa has less than one MRI scanner per million residents, whereas the United States has about 40.
While a new 3 T machine can cost about $1 million, the low-field version is much cheaper — only about $22,000 in materials cost per scanner, according to Dr. Wu.
A low magnetic field means less electricity, too — the machine can be plugged into a standard wall outlet. And because a fully shielded room isn’t needed, that could save another $100,000 in materials, Dr. Webb said.
Its ease of use could improve accessibility in countries with limited training, Dr. Webb pointed out.
“To be a technician is 2-3 years training for a regular MRI machine, a lot of it to do safety, a lot of it to do very subtle planning,” said Webb. “These [low-field] systems are much simpler.”
Challenges and the Future
The prototype weighs about 1.5 tons or 3000 lb. (A 3 T MRI can weigh between 6 and 13 tons or 12,000 and 26,000 lb.) That might sound like a lot, but it’s comparable to a mobile CT scanner, which is designed to be moved from room to room. Plus, “its weight can be substantially reduced if further optimized,” Dr. Wu said.
One challenge with low-field MRIs is image quality, which tends to be not as clear and detailed as those from high-power machines. To address this, the research team used deep learning (artificial intelligence) to enhance the image quality. “Computing power and large-scale data underpin our success, which tackles the physics and math problems that are traditionally considered intractable in existing MRI methodology,” Dr. Wu said.
Dr. Webb said he was impressed by the image quality shown in the study. They “look much higher quality than you would expect from such a low-field system,” he said. Still, only healthy volunteers were scanned. The true test will be using it to view subtle pathologies, Dr. Webb said.
That’s what Dr. Wu and his team are working on now — taking scans to diagnose various medical conditions. His group’s brain-only version of the low-field MRI has been used for diagnosis, he noted.
A version of this article appeared on Medscape.com.
For years, researchers and medical companies have explored low-field MRI systems (those with a magnetic field strength of less than 1 T) — searching for a feasible alternative to the loud, expensive machines requiring special rooms with shielding to block their powerful magnetic field.
Most low-field scanners in development are for brain scans only. In 2022, the US Food and Drug Administration (FDA) cleared the first portable MRI system — Hyperfine’s Swoop, designed for use at a patient’s bedside — for head and brain scans. But the technology has not been applied to whole-body MRI — until now.
In a new study published in Science, researchers from Hong Kong described a whole-body, ultra low–field MRI.
The device uses a 0.05 T magnet — one sixtieth the magnetic field strength of the standard 3 T MRI model common in hospitals today, said lead author Ed Wu, PhD, professor of biomedical engineering at The University of Hong Kong.
Because the field strength is so low, no protective shielding is needed. Patients and bystanders can safely use smart phones . And the scanner is safe for patients with implanted devices, like a cochlear implant or pacemaker, or any metal on their body or clothes. No hearing protection is required, either, because the machine is so quiet.
If all goes well, the technology could be commercially available in as little as a few years, Dr. Wu said.
But first, funding and FDA approval would be needed. “A company is going to have to come along and say, ‘This looks fantastic. We’re going to commercialize this, and we’re going to go through this certification process,’ ” said Andrew Webb, PhD, professor of radiology and the founding director of the C.J. Gorter MRI Center at the Leiden University Medical Center, Leiden, the Netherlands. (Dr. Webb was not involved in the study.)
Improving Access to MRI
One hope for this technology is to bring MRI to more people worldwide. Africa has less than one MRI scanner per million residents, whereas the United States has about 40.
While a new 3 T machine can cost about $1 million, the low-field version is much cheaper — only about $22,000 in materials cost per scanner, according to Dr. Wu.
A low magnetic field means less electricity, too — the machine can be plugged into a standard wall outlet. And because a fully shielded room isn’t needed, that could save another $100,000 in materials, Dr. Webb said.
Its ease of use could improve accessibility in countries with limited training, Dr. Webb pointed out.
“To be a technician is 2-3 years training for a regular MRI machine, a lot of it to do safety, a lot of it to do very subtle planning,” said Dr. Webb. “These [low-field] systems are much simpler.”
Challenges and the Future
The prototype weighs about 1.5 tons or 3000 lb. (A 3 T MRI can weigh between 6 and 13 tons or 12,000 and 26,000 lb.) That might sound like a lot, but it’s comparable to a mobile CT scanner, which is designed to be moved from room to room. Plus, “its weight can be substantially reduced if further optimized,” Dr. Wu said.
One challenge with low-field MRIs is image quality, which tends to be less clear and detailed than that of scans from high-field machines. To address this, the research team used deep learning (artificial intelligence) to enhance the image quality. “Computing power and large-scale data underpin our success, which tackles the physics and math problems that are traditionally considered intractable in existing MRI methodology,” Dr. Wu said.
Dr. Webb said he was impressed by the image quality shown in the study. They “look much higher quality than you would expect from such a low-field system,” he said. Still, only healthy volunteers were scanned. The true test will be using it to view subtle pathologies, Dr. Webb said.
That’s what Dr. Wu and his team are working on now — taking scans to diagnose various medical conditions. His group’s brain-only version of the low-field MRI has been used for diagnosis, he noted.
A version of this article appeared on Medscape.com.
Follow-Up Outcomes Data Often Missing for FDA Drug Approvals Based on Surrogate Markers
Over the past few decades, the US Food and Drug Administration (FDA) has increasingly relied on surrogate measures such as blood tests instead of clinical outcomes for medication approvals. But critics say the agency lacks consistent standards to ensure the surrogate aligns with clinical outcomes that matter to patients — things like improvements in symptoms and gains in function.
Sometimes those decisions backfire. Consider: In July 2021, the FDA approved aducanumab for the treatment of Alzheimer’s disease, bucking the advice of an advisory panel for the agency that questioned the effectiveness of the medication. Regulators relied on data from the drugmaker, Biogen, showing the monoclonal antibody could reduce levels of amyloid beta plaques in blood — a surrogate marker officials hoped would translate to clinical benefit.
The FDA’s decision triggered significant controversy, and in January Biogen announced it is pulling the drug from the market this year, citing disappointing sales.
Although the case of aducanumab might seem extreme, given the stakes — Alzheimer’s remains a disease without an effective treatment — it’s far from unusual.
“When we prescribe a drug, there is an underlying assumption that the FDA has done its due diligence to confirm the drug is safe and of benefit,” said Reshma Ramachandran, MD, MPP, MHS, a researcher at Yale School of Medicine, New Haven, Connecticut, and a coauthor of a recent review of surrogate outcomes. “In fact, we found either no evidence or low-quality evidence” that such markers are associated with clinical outcomes. “We just don’t know if they work meaningfully to treat the patient’s condition. The results were pretty shocking for us,” she said.
The FDA in 2018 released an Adult Surrogate Endpoint Table listing markers that can be used as substitutes for clinical outcomes to more quickly test, review, and approve new therapies. The analysis found the majority of these endpoints lacked subsequent confirmations, defined as published meta-analyses of clinical studies to validate the association between the marker and a clinical outcome important to patients.
In a paper published in JAMA, Dr. Ramachandran and her colleagues looked at 37 surrogate endpoints for nearly 3 dozen nononcologic diseases in the table.
Approval based on surrogate markers implies a responsibility to conduct postapproval validation studies measuring outcomes that matter to patients — mortality, morbidity, or improved quality of life — not just lab measures or imaging findings, said Joshua D. Wallach, PhD, MS, assistant professor in the department of epidemiology at the Emory Rollins School of Public Health in Atlanta and lead author of the JAMA review.
Dr. Wallach said surrogate markers are easier to measure and do not require large and long trials. But the FDA has not provided clear rules for what makes a surrogate marker valid in clinical trials.
“They’ve said that at a minimum, it requires meta-analytical evidence from studies that have looked at the correlation or the association between the surrogate and the clinical outcome,” Dr. Wallach said. “Our understanding was that if that’s a minimum expectation, we should be able to find those studies in the literature. And the reality is that we were unable to find evidence from those types of studies supporting the association between the surrogate and the clinical outcome.”
Physicians generally do not receive training about the FDA approval process and the difference between biomarkers, surrogate markers, and clinical endpoints, Dr. Ramachandran said. “Our study shows that things are much more uncertain than we thought when it comes to the prescribing of new drugs,” she said.
Surrogate Markers on the Rise
Dr. Wallach’s group looked for published meta-analyses compiling randomized controlled trials reporting surrogate endpoints for more than 3 dozen chronic nononcologic conditions, including type 2 diabetes, Alzheimer’s, kidney disease, HIV, gout, and lupus. They found no meta-analyses at all for 59% of the surrogate markers, while for those that were studied, few reported high-strength evidence of an association with clinical outcomes.
The findings echo previous research. In a 2020 study in JAMA Network Open, researchers tallied primary endpoints for all FDA approvals of new drugs and therapies during three 3-year periods: 1995-1997, 2005-2007, and 2015-2017. The proportion of products whose approvals were based on the use of clinical endpoints decreased from 43.8% in 1995-1997 to 28.4% in 2005-2007 to 23.3% in 2015-2017. The share based on surrogate endpoints rose from 43.3% to roughly 60% over the same interval.
A 2017 study in the Journal of Health Economics found the use of “imperfect” surrogate endpoints helped support the approval of an average of 16 new drugs per year between 2010 and 2014 compared with six per year from 1998 to 2008.
Similar concerns about weak associations between surrogate markers and drugs used to treat cancer have been documented before, including in a 2020 study published in eClinicalMedicine. The researchers found the surrogate endpoints in the FDA table either were not tested or were tested but proven to be weak surrogates.
“And yet the FDA considered these as good enough not only for accelerated approval but also for regular approval,” said Bishal Gyawali, MD, PhD, associate professor in the department of oncology at Queen’s University, Kingston, Ontario, Canada, who led the group.
The use of surrogate endpoints is also increasing in Europe, said Huseyin Naci, MHS, PhD, associate professor of health policy at the London School of Economics and Political Science in England. He cited a cohort study of 298 randomized clinical trials (RCTs) in JAMA Oncology suggesting “contemporary oncology RCTs now largely measure putative surrogate endpoints.”

Dr. Wallach called the FDA’s surrogate table “a great first step toward transparency. But a key column is missing from that table, telling us what is the basis for which the FDA allows drug companies to use the recognized surrogate markers. What is the evidence they are considering?”
If the agency allows companies the flexibility to validate surrogate endpoints, postmarketing studies designed to confirm the clinical utility of those endpoints should follow.
“We obviously want physicians to be guided by evidence when they’re selecting treatments, and they need to be able to interpret the clinical benefits of the drug that they’re prescribing,” he said. “This is really about having the research consumer, patients, and physicians, as well as industry, understand why certain markers are considered and not considered.”
Dr. Wallach reported receiving grants from the FDA (through the Yale University — Mayo Clinic Center of Excellence in Regulatory Science and Innovation), National Institute on Alcohol Abuse and Alcoholism (1K01AA028258), and Johnson & Johnson (through the Yale University Open Data Access Project); and consulting fees from Hagens Berman Sobol Shapiro LLP and Dugan Law Firm APLC outside the submitted work. Dr. Ramachandran reported receiving grants from the Stavros Niarchos Foundation and FDA; receiving consulting fees from ReAct Action on Antibiotic Resistance strategy policy program outside the submitted work; and serving in an unpaid capacity as chair of the FDA task force for the nonprofit organization Doctors for America and in an unpaid capacity as board president for Universities Allied for Essential Medicines North America.
A version of this article appeared on Medscape.com.
FROM JAMA
Clinical Prediction Models in Newly Diagnosed Epilepsy
Clinical prediction models can help neurologists identify which patients with newly diagnosed epilepsy could benefit from more aggressive early treatment, according to authors of a recent review, although concerns over bias and model applicability leave room for improvement.
Triggering Aggressive Treatments
“These models are helpful because if you can predict that someone is going to do well with one or two medications, that’s great,” said Aatif M. Husain, MD. “But if you know early on that someone likely will not do well, will need many medications, and still not have their seizures under control, you’re much more likely to be more aggressive with their management, such as closely refer them to a specialist epilepsy center and evaluate them for surgical treatment options. This could minimize the amount of time their seizures are inadequately controlled.” Dr. Husain is an epileptologist, neurologist, and sleep medicine specialist at Duke University Health System in Durham, North Carolina; he was not involved with the study, which was published in Epilepsia.
“But the other important finding is that these models so far have not been that great,” he added.
Prognosis Predictors
Investigators Corey Ratcliffe of the University of Liverpool in England and colleagues systematically searched MEDLINE and Embase for relevant publications, ultimately analyzing 48 models across 32 studies. The strongest predictors of seizure remission were history and seizure types or characteristics, the authors wrote, followed by onset age.
Regarding seizure history, a March 2018 JAMA Neurology study and a December 2013 BMC Neurology study linked factors such as history of seizures in the year pre-diagnosis, family history of epilepsy, and history of febrile seizures and of migraines with lower chances of seizure remission. Seizure types with increased chances of poor outcomes in the review included status epilepticus and seizures with complex or mixed etiologies. Additional seizure types associated with poor control include tonic-clonic seizures, frequent focal seizures, and seizures stemming from certain genetic predispositions, said Dr. Husain.
Although the roles of many of the foregoing factors are easily explained, he added, other variables’ impact is less clear. Younger onset often signals more refractory seizures, for example, while data regarding older onset are mixed. “Sometimes older individuals will have mild epilepsy due to a stroke, tumor, or something that can be relatively easily treated,” said Dr. Husain. Conversely, epilepsy can become more complicated if such patients take several medications and/or have coexisting medical problems that seizures or antiseizure medications exacerbate. “So sometimes it’s not so obvious.”
Incorporating Imaging, AI
Dr. Husain found it surprising that very few of the selected models incorporated EEG and MRI findings. “Subsequent research should look at those, since they are important diagnostic tests.” Moreover, he recommended including more sophisticated quantitative and connectivity analyses of EEG and MRI data. These analyses might provide additional prognostic information beyond a simple visual analysis of these tests, Dr. Husain explained, although their potential here remains unproven.
As for factors not represented in the review, he said, future studies will help clarify AI’s role in predicting newly diagnosed epilepsy outcomes. A study published in Epilepsia showed that among 248 potential pediatric surgical candidates, those whose providers received alerts based on machine learning analysis of prior visit notes were more likely to be referred for presurgical evaluation (9.8% versus 3.1%). Future clinical models will use AI to examine not only established elements of neurologic history, said Dr. Husain, but also other types of history such as socioeconomic characteristics, geographic location, and other such data.
Additionally, study authors recommended a standardized approach to prediction modeling, using Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) guidelines. Using consistent definitions, outcomes, and reporting requirements will facilitate communication among researchers, reduce bias, and support systematic between-study comparisons, Mr. Ratcliffe and colleagues wrote.
Reaching General Neurologists
Epilepsy specialists are generally aware of reliable outcome predictors, Dr. Husain said, though they do not use models per se. “But the vast majority of patients with epilepsy are seen by general neurologists.” And the lack of awareness among these physicians and primary care practitioners drives a need for education to facilitate appropriate referrals to subspecialty centers, he said.
The stakes for timely referrals can be high. Although using appropriate outcome models improves patients’ quality of life sooner, said Dr. Husain, allowing seizures to go untreated or undertreated results in neuroplastic changes that hinder long-term seizure control.
The fact that all 32 included studies reflected a high risk of bias, and that 9 raised high applicability concerns, calls the models’ validity into question, he added. Mr. Ratcliffe and colleagues attributed both types of concerns to the fact that 20% of included studies used baseline treatment response data as outcome predictors.
Nevertheless, Dr. Husain cautioned against dismissing prediction models in newly diagnosed epilepsy. “Practicing neurologists need to realize that the perfect model has yet to be developed. But the current tools can be used to help manage patients with epilepsy and predict who will do well and not as well,” he said.
Dr. Husain is a member of the American Epilepsy Society. He has been a consultant and researcher for Marinus Pharmaceuticals, PranaQ, and UCB, and a consultant for Eisai, Jazz Pharmaceuticals, Merck, and uniQure. Study authors reported no funding sources or relevant conflicts of interest.
, according to authors of a recent review. Clinical prediction models can help neurologists identify which patients could benefit from more aggressive early treatment, authors added, although concerns over bias and model applicability leave room for improvement.
Triggering Aggressive Treatments
“These models are helpful because if you can predict that someone is going to do well with one or two medications, that’s great,” said Aatif M. Husain, MD. “But if you know early on that someone likely will not do well, will need many medications, and still not have their seizures under control, you’re much more likely to be more aggressive with their management, such as closely refer them to a specialist epilepsy center and evaluate them for surgical treatment options. This could minimize the amount of time their seizures are inadequately controlled.” Dr. Husain is an epileptologist, neurologist, and sleep medicine specialist at Duke University Health System in Durham, North Carolina. Dr. Husain was not involved with the study, which was published in Epilepsia.
“But the other important finding is that these models so far have not been that great,” he added.
Prognosis Predictors
Investigators Corey Ratcliffe of the University of Liverpool in England and colleagues systematically searched MEDLINE and Embase for relevant publications, ultimately analyzing 48 models across 32 studies. The strongest predictors of seizure remission were history and seizure types or characteristics, the authors wrote, followed by onset age.
Regarding seizure history, a March 2018 JAMA Neurology study and a December 2013 BMC Neurology study linked factors such as history of seizures in the year pre-diagnosis, family history of epilepsy, and history of febrile seizures and of migraines with lower chances of seizure remission. Seizure types with increased chances of poor outcomes in the review included status epilepticus and seizures with complex or mixed etiologies. Additional seizure types associated with poor control include tonic-clonic seizures, frequent focal seizures, and seizures stemming from certain genetic predispositions, said Dr. Husain.
Although the roles of many of the foregoing factors are easily explained, he added, other variables’ impact is less clear. Younger onset often signals more refractory seizures, for example, while data regarding older onset are mixed. “Sometimes older individuals will have mild epilepsy due to a stroke, tumor, or something that can be relatively easily treated,” said Dr. Husain. Conversely, epilepsy can become more complicated if such patients take several medications and/or have coexisting medical problems that seizures or antiseizure medications exacerbate. “So sometimes it’s not so obvious.”
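To make the idea concrete, a clinical prediction model of this kind maps baseline predictors (pre-diagnosis seizure burden, seizure type, onset age) to a probability of remission. The sketch below is a hypothetical illustration only: the logistic form is standard for such models, but every coefficient is invented and none is drawn from the review.

```python
import math

def remission_probability(onset_age, seizures_past_year, status_epilepticus):
    """Toy logistic model: returns an illustrative P(seizure remission) in (0, 1)."""
    # Invented weights for illustration; real models are fit to cohort data.
    logit = (
        0.5                          # intercept
        + 0.02 * onset_age           # older onset: mixed evidence, small weight
        - 0.10 * seizures_past_year  # heavier pre-diagnosis burden lowers odds
        - 1.20 * status_epilepticus  # status epilepticus predicts poor outcome
    )
    return 1.0 / (1.0 + math.exp(-logit))

# A patient with few prior seizures scores higher than one with a heavy
# seizure burden plus status epilepticus.
low_risk = remission_probability(onset_age=30, seizures_past_year=2, status_epilepticus=0)
high_risk = remission_probability(onset_age=30, seizures_past_year=12, status_epilepticus=1)
print(round(low_risk, 2), round(high_risk, 2))
```

In practice such a score would only flag patients for earlier subspecialty referral, not replace clinical judgment; the review's point is that current published models remain limited by bias and applicability concerns.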
Incorporating Imaging, AI
Dr. Husain found it surprising that very few of the selected models incorporated EEG and MRI findings. “Subsequent research should look at those, since they are important diagnostic tests.” Moreover, he recommended including more sophisticated quantitative and connectivity analyses of EEG and MRI data. These analyses might provide additional prognostic information beyond a simple visual analysis of these tests, Dr. Husain explained, although their potential here remains unproven.
As for factors not represented in the review, he said, future studies will help clarify AI’s role in predicting newly diagnosed epilepsy outcomes. A study published in Epilepsia showed that among 248 potential pediatric surgical candidates, those whose providers received alerts based on machine learning analysis of prior visit notes were more likely to be referred for presurgical evaluation (9.8% versus 3.1%). Future clinical models will use AI to examine not only established elements of neurologic history, said Dr. Husain, but also other types of history such as socioeconomic characteristics, geographic location, and other such data.
Additionally, study authors recommended a standardized approach to prediction modeling, using Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) guidelines. Using consistent definitions, outcomes, and reporting requirements will facilitate communication among researchers, reduce bias, and support systematic between-study comparisons, Mr. Ratcliffe and colleagues wrote.
Reaching General Neurologists
Epilepsy specialists are generally aware of reliable outcome predictors, Dr. Husain said, though they do not use models per se. “But the vast majority of patients with epilepsy are seen by general neurologists.” And the lack of awareness among these physicians and primary care practitioners drives a need for education to facilitate appropriate referrals to subspecialty centers, he said.
The stakes for timely referrals can be high. Although using appropriate outcome models improves patients’ quality of life sooner, said Dr. Husain, allowing seizures to go untreated or undertreated results in neuroplastic changes that hinder long-term seizure control.
The fact that all 32 included studies reflected a high risk of bias, and 9 studies raised high applicability concerns, raises questions regarding the models’ validity, he added. Mr. Ratcliffe and colleagues attributed both types of concerns to the fact that 20% of included studies used baseline treatment response data as outcome predictors.
Nevertheless, Dr. Husain cautioned against dismissing prediction models in newly diagnosed epilepsy. “Practicing neurologists need to realize that the perfect model has yet to be developed. But the current tools can be used to help manage patients with epilepsy and predict who will do well and not as well,” he said.
Dr. Husain is a member of the American Epilepsy Society. He has been a consultant and researcher for Marinus Pharmaceuticals, PranaQ, and UCB, and a consultant for Eisai, Jazz Pharmaceuticals, Merck, and uniQure. Study authors reported no funding sources or relevant conflicts of interest.
FROM EPILEPSIA
An 8-year-old girl presented with papules on her bilateral eyelid margins
Lipoid proteinosis (LP) is a rare genodermatosis with an equal distribution across genders and ethnicities.1 It is caused by mutations in the ECM1 gene2 on chromosome 1q21. These mutations lead to the abnormal deposition of hyaline material in various tissues across different organ systems, with the classic manifestations known as the “string of pearls” sign and a hoarse cry or voice.
The rarity of lipoid proteinosis often leads to challenges in diagnosis. When a case deviates from the common association with consanguinity, it highlights the potential for de novo mutations or broader genetic variability in disease expression. Our patient presented with symptoms pathognomonic for LP, namely moniliform blepharosis and hoarseness of the voice, in addition to scarring of the extremities.
Other common clinical manifestations in patients with LP include cobblestoning of the mucosa; hyperkeratosis of the elbows, knees, and hands; and calcification of the amygdalae on neuroimaging.3
Genetic testing that identifies a loss-of-function mutation in ECM1 offers diagnostic confirmation. Patients often need multidisciplinary care involving dermatology; ear, nose, and throat; neurology; and genetics. Treatment of LP is mostly symptomatic, with unsatisfactory resolution of cutaneous changes; retinoids such as acitretin are used as the first-line option, and surgery is a consideration for laryngeal hyaline deposits.2 Although LP can affect different organ systems, patients tend to have a normal lifespan.
LP is a rare disorder that dermatologists often learn about during didactics in residency but may not see in practice for decades, if ever. This case highlights the need to review the classic presentations of rare conditions.
This case and the photos were submitted by Ms. Chang, BS, Western University of Health Sciences, College of Osteopathic Medicine, Pomona, California; Dr. Connie Chang, Verdugo Dermatology, Glendale, California; and Dr. Yuchieh Kathryn Chang, MD Anderson Cancer Center, Houston, Texas. The column was edited by Donna Bilu Martin, MD.
Dr. Bilu Martin is a board-certified dermatologist in private practice at Premier Dermatology, MD, in Aventura, Florida. More diagnostic cases are available at mdedge.com/dermatology. To submit a case for possible publication, send an email to [email protected].
References
1. McGrath JA. Handb Clin Neurol. 2015;132:317-22. doi: 10.1016/B978-0-444-62702-5.00023-8.
2. Hamada T et al. Hum Mol Genet. 2002 Apr 1;11(7):833-40. doi: 10.1093/hmg/11.7.833.
3. Frenkel B et al. Clin Oral Investig. 2017 Sep;21(7):2245-51. doi: 10.1007/s00784-016-2017-7.