Better than dialysis? Artificial kidney could be the future
Nearly 90,000 patients in the United States are waiting for a lifesaving kidney transplant, yet only about 25,000 kidney transplants were performed last year. Thousands die each year while they wait. Others are not suitable transplant candidates.
Half a million people are on dialysis, the only alternative to transplant for those with kidney failure. Dialysis greatly disrupts their work, relationships, and quality of life.
Researchers from The Kidney Project hope to solve this public health crisis with a futuristic approach: an implantable bioartificial kidney. That hope is slowly approaching reality. Early prototypes have been tested successfully in preclinical research, and clinical trials could lie ahead.
This news organization spoke with two researchers who came up with the idea: nephrologist William Fissell, MD, of Vanderbilt University in Nashville, Tenn., and Shuvo Roy, PhD, a biomedical engineer at the University of California, San Francisco. This interview has been edited for length and clarity.
Question: Could you summarize the clinical problem with chronic kidney disease?
Dr. Fissell: Dialysis treatment, although lifesaving, is incomplete. Healthy kidneys do a variety of things that dialysis cannot provide. Transplant is absolutely the best remedy, but donor organs are vanishingly scarce.
Do you envision your implantable, bioartificial kidney as a bridge to transplantation? Or can it be even more, like a bionic organ, as good as a natural organ and thus better than a transplant?
Dr. Roy: We see it initially as a bridge to transplantation or as a better option than dialysis for those who will never get a transplant. We’re not trying to create the “Six Million Dollar Man.” The goal is to keep patients off dialysis – to deliver some, but probably not all, of the benefits of a kidney transplant in a mass-produced device that anybody can receive.
Dr. Fissell: The technology is aimed at people in stage 5 renal disease, the final stage, when kidneys are failing, and dialysis is the only option to maintain life. We want to make dialysis a thing of the past, put dialysis machines in museums like the iron lung, which was so vital to keeping people alive several decades ago but is mostly obsolete today.
How did you two come up with this idea? How did you get started working together?
Dr. Roy: I had just begun my career as a research biomedical engineer when I met Dr. William Fissell, who was then contemplating a career in nephrology. He opened my eyes to the problems faced by patients affected by kidney failure. Through our discussions, we quickly realized that while we could improve dialysis machines, patients needed and deserved something better – a treatment that improves their health while also allowing them to keep a job, travel readily, and consume food and drink without restrictions. Basically, something that works more like a kidney transplant.
How does the artificial kidney differ from dialysis?
Dr. Fissell: Dialysis is an intermittent stop-and-start treatment. The artificial kidney is continuous, around-the-clock treatment. There are a couple of advantages to that. The first is that you can maintain your body’s fluid balance. In dialysis, you get rid of 2-3 days’ worth of fluid in a couple of hours, and that’s very stressful to the heart and maybe to the brain as well. Second advantage is that patients will be able to eat a normal diet. Some waste products that are byproducts of our nutritional intake are slow to leave the body. So in dialysis, we restrict the diet severely and add medicines to soak up extra phosphorus. With a continuous treatment, you can balance excretion and intake.
The other aspect is that dialysis requires an immense amount of disposables. Hundreds of liters of water per patient per treatment, hundreds of thousands of dialysis cartridges and IV bags every year. The artificial kidney doesn’t need a water supply, disposable sorbent, or cartridges.
How does the artificial kidney work?
Dr. Fissell: Just like a healthy kidney. We have a unit that filters the blood so that red blood cells, white blood cells, platelets, antibodies, albumin – all the good stuff that your body worked hard to synthesize – stays in the blood, but a watery soup of toxins and waste is separated out. In a second unit, called the bioreactor, kidney cells concentrate those wastes and toxins into urine.
Dr. Roy: We used a technology called silicon micro-machining to invent an entirely new membrane that mimics a healthy kidney’s filters. It filters the blood just using the patient’s heart as a pump. No electric motors, no batteries, no wires. This lets us have something that’s completely implanted.
We also developed a cell culture of kidney cells that function in an artificial kidney. Normally, cells in a dish don’t fully adopt the features of a cell in the body. We looked at the literature around 3-D printing of organs. We learned that, in addition to fluid flow, stiff scaffolds, like cell culture dishes, trigger specific signals that keep the cells from functioning. We overcame that by looking at the physical microenvironment of the cells – not the hormones and proteins, but instead the fundamentals of the laboratory environment. For example, most organs are soft, yet plastic lab dishes are hard. By using tools that replicated the softness and fluid flow of a healthy kidney, remarkably, these cells functioned better than on a plastic dish.
Would patients need immunosuppressive or anticoagulation medication?
Dr. Fissell: They wouldn’t need either. The structure and chemistry of the device prevents blood from clotting. And the membranes in the device are a physical barrier between the host immune system and the donor cells, so the body won’t reject the device.
What is the state of the technology now?
Dr. Fissell: We have shown the function of the filters and the function of the cells, both separately and together, in preclinical in vivo testing. What we now need to do is construct clinical-grade devices and complete sterility and biocompatibility testing to initiate a human trial. That’s going to take between $12 million and $15 million in device manufacturing.
So it’s more a matter of money than time until the first clinical trials?
Dr. Roy: Yes, exactly. We don’t like to say that a clinical trial will start by such-and-such year. From the very start of the project, we have been resource limited.
A version of this article first appeared on Medscape.com.
Liver transplant in CRC: Who might benefit?
A Norwegian review of 61 patients who underwent liver transplant for unresectable colorectal liver metastases found that half were still alive at 5 years and about one in five appeared to be cured at 10 years.
“It seems likely that there is a small group of patients with unresectable colorectal liver metastases who should be considered for transplant, and long-term survival and possibly cure are achievable in these patients with appropriate selection,” Ryan Ellis, MD, and Michael D’Angelica, MD, wrote in a commentary published alongside the study in JAMA Surgery.
The core question, however, is how to identify patients who will benefit the most from a liver transplant, said Dr. Ellis and Dr. D’Angelica, both surgical oncologists in the Hepatopancreatobiliary Service at Memorial Sloan Kettering Cancer Center, New York. Looking closely at who did well in this analysis can offer clues to appropriate patient selection, the editorialists said.
Three decades ago, the oncology community had largely abandoned liver transplant in this population after studies showed overall 5-year survival of less than 20%. Some patients, however, did better, which prompted the Norwegian investigators to attempt to refine patient selection.
In the current prospective nonrandomized study, 61 patients had liver transplants for unresectable metastases at Oslo University Hospital from 2006 to 2020.
The researchers reported a median overall survival of 60.3 months, with about half of patients (50.4%) alive at 5 years.
Most patients (78.3%) experienced a relapse after liver transplant, with a median time to relapse of 9 months and with most occurring within 2 years of transplant. Median overall survival from time of relapse was 37.1 months, with 5-year survival at nearly 35% in this group and with one patient still alive 156 months after relapse.
The remaining 21.7% of patients (n = 13) did not experience a relapse post-transplant at their last follow-up.
Given the variety of responses to liver transplant, how can experts differentiate patients who will benefit most from those who won’t?
The researchers looked at several factors, including the Oslo score and the Fong Clinical Risk Score (FCRS). The Oslo score assesses overall survival among liver transplant patients, while the Fong score predicts recurrence risk for patients with CRC liver metastasis following resection. Both scores assign one point for each adverse prognostic factor.
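To make the mechanics of such point-based scores concrete, the sketch below tallies points for a hypothetical patient. The factor names and thresholds are illustrative only, drawn loosely from adverse features discussed in this article (such as tumor size above 5.5 cm and progressive disease on chemotherapy); they are not the validated definitions of the Oslo score or the FCRS.

```python
from dataclasses import dataclass

@dataclass
class PatientFeatures:
    largest_tumor_cm: float            # size of the largest liver lesion
    progressive_disease_on_chemo: bool # progression while receiving chemotherapy
    num_liver_lesions: int             # count of liver metastases

def point_score(p: PatientFeatures) -> int:
    """Tally one point for each adverse prognostic factor that is present (illustrative thresholds)."""
    points = 0
    if p.largest_tumor_cm > 5.5:          # illustrative adverse factor: large tumor
        points += 1
    if p.progressive_disease_on_chemo:    # illustrative adverse factor: progression on chemotherapy
        points += 1
    if p.num_liver_lesions >= 9:          # illustrative adverse factor: many liver lesions
        points += 1
    return points

if __name__ == "__main__":
    example = PatientFeatures(largest_tumor_cm=6.2,
                              progressive_disease_on_chemo=False,
                              num_liver_lesions=4)
    print(point_score(example))  # prints 1: only the tumor-size factor applies
```

In practice, the resulting integer is then mapped to risk strata (for example, median or 5-year survival by score), as the study does for the Oslo score and FCRS below.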
Among the 10 patients who had an Oslo score of 0, median overall survival was 151.6 months, and the 5-year and 10-year survival rates reached nearly 89%. Among the 27 patients with an Oslo score of 1, median overall survival was 60.3 months, and 5-year overall survival was 54.7%. No patients with an Oslo score of 4 lived for 5 years.
As for FCRS, median overall survival was 164.9 months among those with a score of 1, 90.5 months among those with a score of 2, 59.9 months for those with a score of 3, 32.8 months for those with a score of 4, and 25.3 months for those with the highest score of 5 (P < .001). Overall, these patients had 5-year overall survival of 100%, 63.9%, 49.4%, 33.3%, and 0%, respectively.
In addition to the Oslo and Fong scores, metabolic tumor volume on PET (PET-MTV) was also prognostic for survival. Among the 40 patients with MTV values below 70 cm³, 5-year overall survival was nearly 67%, while those with values above 70 cm³ had a 5-year overall survival of 23.3%.
Other harbingers of low 5-year survival, beyond higher Oslo and Fong scores and PET-MTV above 70 cm³, included tumor size greater than 5.5 cm, progressive disease while receiving chemotherapy, primary tumors in the ascending colon, tumor burden scores of 9 or higher, and nine or more liver lesions.
Overall, the current analysis can help oncologists identify patients who may benefit from a liver transplant.
The findings indicate that “patients with liver-only metastases and favorable pretransplant prognostic scoring [have] long-term survival comparable with conventional indications for liver transplant, thus providing a potential curative treatment option in patients otherwise offered only palliative care,” said investigators led by Svein Dueland, MD, PhD, a member of the Transplant Oncology Research Group at Oslo University Hospital.
Perhaps “the most compelling argument in favor of liver transplant lies in the likely curative potential evidenced by the 13 disease-free patients,” Dr. Ellis and Dr. D’Angelica wrote.
But even some patients who had early recurrences did well following transplant. The investigators noted that early recurrences in this population aren’t as dire as in other settings because they generally manifest as slow-growing lung metastases that can be caught early and resected with curative intent.
A major hurdle to broader use of liver transplants in this population is the scarcity of donor grafts. To manage demand, the investigators suggested using “extended-criteria donor grafts” – grafts that don’t meet ideal criteria – and the RAPID technique for liver transplant, which opens the door to using one graft for two patients and to using living donors with low risk to the donor.
Another challenge will be identifying patients with unresectable colorectal liver metastases who may experience long-term survival following transplant and possibly a cure. “We all will need to keep a sharp eye out for these patients – they might be hard to find!” Dr. Ellis and Dr. D’Angelica wrote.
The study was supported by Oslo University Hospital, the Norwegian Cancer Society, and South-Eastern Norway Regional Health Authority. The investigators and editorialists report no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM JAMA SURGERY
The good, bad, and ugly of direct-to-consumer advertising
Case 1: A 48-year-old female with a 10-year history of left-sided ulcerative colitis (UC) has been well controlled on an injectable biologic for 5 years. Her last colonoscopy, one year ago, was normal, without erythema or friability. She presents for an interim visit because her stool frequency has increased from 2 to 4 bowel movements a day. She denies urgency, nocturnal bowel movements, or blood in her stool. She is concerned about a disease flare and wonders if she is on the right medication. She just saw a TV ad for a new oral UC medication and wants to switch because she prefers an oral medication to an injectable one. Physical exam is normal, and in-office flexible sigmoidoscopy demonstrates no change in the appearance of her colon. You advise her to stay on the biologic because her disease is still considered well controlled. She insists on being switched to the new oral medicine. When you probe further, she says that the TV ad shows people receiving the medicine and leading normal lives, which has enormous appeal to her.
Case 2: A 52-year-old healthy male is referred for colonoscopy evaluation. He reports no change in bowel habits and only rare blood in his stool, and he thinks his uncle had colon cancer at age 48. He is anxious and not very receptive to having a procedure. He recently saw a TV advertisement promoting non-colonoscopy-based colon cancer screening. You recommend a colonoscopy based on his family history, but he insists on stool-based screening.
Case 3: A 32-year-old female with moderately to well-controlled IBD asks you to review a new “multi-omic” profile of her gut microbiome that she saw advertised on social media. The test report she provides contains a snapshot of her microbiome, including abundances of single species and predicted functions for these bacteria, from a single stool sample collected and submitted 6 months ago. You counsel her on the role of the gut microbiome in IBD and explain that there is not yet enough knowledge or technology to incorporate these test results into clinical care. The patient is frustrated and wants to know why you’re “behind the times.”
These cases may sound familiar to a practicing gastroenterologist. The platform driving all three of these clinical encounters, direct-to-consumer advertising (DTCA), is a legal mechanism by which a commercial entity can communicate directly with the consumer about a medicine or device, bypassing a health care professional.
In the 1960s, Congress granted the Food and Drug Administration regulatory authority over prescription drug labeling and advertising. This included ensuring that ads (1) were not false or misleading, (2) presented a “fair balance” of drug risks and benefits, (3) included facts “material” to a drug’s advertised uses, and (4) included a “brief summary” of all risks described in the drug’s labeling.
Direct-to-consumer advertising increased dramatically in the late 1990s after the FDA eased regulations around risk information by requiring ads to include only “major risks” and provide resources directing consumers to full risk information.
In 2022, the top 10 pharmaceutical ad spenders combined for a total of about $1.7 billion in TV ad spend, with the top two categories being inflammation and diabetes.
The role of the FDA in regulating DTCA of at-home tests is still evolving and largely depends on the intended use of the test results and the health claims used to market the test (that is, whether the test is designed simply to return general information, as in case 3, where DTCA regulations may not apply, or is marketed with specific medical claims or diagnostic interpretations, as in case 2, where DTCA regulations clearly apply).
DTCA has both potential benefits and potential risks. It can serve to increase disease awareness (e.g., the need for colon cancer screening). It may also prompt patients who might otherwise disregard “red flag” signs and symptoms to seek medical evaluation (e.g., rectal bleeding). DTCA can also alert healthcare providers to new treatment options for diseases within their scope of practice and encourage them to expand their armamentarium.
In bioethics terms, DTCA can be beneficial in facilitating patient autonomy and promoting justice. For example, DTCA can “even the playing field” by ensuring that patients have equitable access to information about available treatments regardless of their socioeconomic status. In doing so, it can empower patients, allowing them to have more meaningful discussions with their health care providers and make more informed decisions. In addition, patients may be introduced to alternative testing modalities (e.g., stool-based CRC screening) that, while not necessarily the best screening modality given individual risk (as in case 2), may offer benefit with greater acceptance compared with inaction (i.e., no screening). Last, direct-to-consumer “omics” profiling has empowered patients as “citizen scientists” and led to the advent of “biohacking” among the lay population. In doing so, it has challenged the previous bounds of patient autonomy in healthcare by broadening the types of personal health data available to individuals, even when the clinical utility of these data may not yet be clear.
On the flip side, it is undeniable that DTCA of medical products is driven by commercial interests. Branded drugs are primarily, if not exclusively, promoted in DTCA, but these drugs may not always align with standard of care treatment recommendations. A study published in February 2023 in JAMA found that drugs with lower added clinical benefit and higher total sales were associated with higher DTCA spending.
When patients enter medical encounters with preconceived notions of which drugs they want prescribed based on media exposure, the ability of health care providers to give sound medical advice about treatment may be circumvented. A patient’s preferred therapy, shaped by exposure to DTCA, may sharply contrast with the provider’s recommendation, which rests on experience, expertise, and knowledge of the patient’s unique clinical history.
Unreasonable expectations
An additional potential downside of DTCA is that it can instill unreasonable expectations in patients that the advertised product will benefit them. While DTCA is required to be fair and balanced in reporting benefits and risks, it is difficult to meaningfully address nuanced clinical risks in a brief TV ad, and patients may come away with a skewed view of the risk-benefit equation. Furthermore, social media advertising and associated formats may not provide the same level of digestible information as other forms of media, and these ads are targeted to individuals likely to identify with the product. Finally, as stated above, only branded products (vs. generics) are advertised. Branded products are generally more costly, and where less expensive and equally effective therapies exist, societal costs need to be considered. This can lead to inequities in distributive justice, which concerns the societal allocation of resources: the more the healthcare market is driven toward higher costs in one segment, the fewer resources are available in another. This may be felt most acutely in regions where healthcare resources are limited.
Shared decision-making
Returning to the three cases above: in case 1, the UC patient’s awareness of new treatment options has prompted a shared decision-making discussion. She has a renewed interest in exploring a different route of medication administration because of DTCA. Although she appears well controlled on her current IBD therapy, and the minor fluctuation in her symptoms might otherwise simply be a reason to observe her more closely, the patient sees it as a reason to change her treatment, based on her impression from the DTCA.
Regarding case 2, the increase in disease awareness and acceptance of CRC screening is itself a positive outcome. Although commercially driven, stool-based screening reduces disease burden and offers a readily available alternative with established benefits compared with no screening.
Regarding the proactive “omics-curious” IBD patient in case 3, despite her disappointment with the DTCA-promoted test’s limited clinical utility at this time, the patient may be communicating a general openness to novel approaches to treating IBD and to advancing her understanding of her disease.
So where does that leave you as a clinician?
As of today, if you live in the U.S., DTCA is a reality. As you navigate day-to-day patient interactions, it is important to keep in mind that your first obligation is to your patients and their best medical interests. Having a well-informed and engaged patient is a positive effect of DTCA over the past 30 years, despite the challenging discussions it sometimes forces you to have. In many cases, patients are more aware of and engaged in their health, and are acting on it, because of information acquired directly from DTCA platforms. As a clinician, you have an ethical obligation to educate your patients and manage expectations, even when those expectations were formed on the basis of DTCA, with its inherent conflicts of interest in promoting a product.

Moreover, although certain products may be trendy or promoted on a popular-science basis, the underlying technology (e.g., stool-based screening or metagenomics) and the resulting data are often not well understood by the typical lay patient. Without additional counseling, it may be difficult for a patient to grasp why a product may not be particularly informative, or may even be inappropriate, given their personal medical history. Despite the potentially awkward clinician-patient dynamic precipitated by DTCA, these moments offer an opportunity to build rapport with your patients by taking the time to fill in the gaps in their understanding of their treatment plans, alternatives, individual risk factors, and disease, while gaining a greater appreciation for what they personally prioritize with respect to their own health. Ultimately, as healthcare moves further toward precision medicine, a shared interest in individualized health decisions, even one partly informed by DTCA, is a positive outcome.
Dr. Sloan is chief medical officer at Abivax. He is a member of the AGA Ethics Committee for COI. David A. Drew, PhD, is assistant professor of medicine at Harvard Medical School, Boston, and director of the Massachusetts General Hospital Biobanking, Clinical & Translational Epidemiology Unit, Boston. He is a member of the AGA Ethics Committee.
Case 1: A 48-year-old female with a 10-year history of left-sided Ulcerative Colitis (UC) has been well controlled on an injectable biologic for 5 years. Her last colonoscopy one year ago was normal without erythema or friability. She presents for an interim visit due to an increase in stool frequency from 2 to 4 bowel movements a day. She denies urgency, nocturnal bowel movements, or blood in her stool. She is concerned about a disease flare and wonders if she is on the right medication. She just saw a TV ad for a new oral UC medication and wants to switch because she prefers an oral medication to an injectable one. Physical exam was normal and in-office flexible sigmoidoscopy demonstrates no change in her colon appearance. You advise her to stay on the biologic because she is still considered well-controlled. She insists on being switched to the new oral medicine. When you probe her more, she says that the TV ad she saw shows people getting the medicine leading normal lives, which has enormous appeal to her.
Case 2: A 52-year-old healthy male is referred for colonoscopy evaluation. He reports no change in bowel habits with rare blood in his stool and thinks his uncle had colon cancer at age 48. He is anxious and not very receptive to having a procedure. He recently saw a TV advertisement promoting non-colonoscopy-based colon cancer screening. You recommend a colonoscopy based on his family history, but he insists on stool-based screening.
Case 3: A 32-year-old female with moderately to well-controlled IBD asks you to review a new “multi-omic” profile of her gut microbiome that she saw advertised on social media. The test report she provides contains a snapshot of her microbiome including abundances of single species and predicted functions for these bacteria from a single stool sample collected and submitted 6 months ago. You counsel her on the role of the gut microbiome in IBD and explain that currently there is not enough knowledge or technology to incorporate these test results into clinical care yet. The patient is frustrated and wants to know why you’re “behind the times.”
These cases may sound familiar to a practicing gastroenterologist. The platform driving all three of these clinical encounters, direct-to-consumer advertising (DTCA), is a legal mechanism by which a commercial entity can communicate directly with the consumer about a medicine or device, bypassing a health care professional.
In the 1960s, Congress granted the Food and Drug Administration regulatory authority over prescription drug labeling and advertising. This included ensuring that ads (1) were not false or misleading, (2) presented a “fair balance” of drug risks and benefits, (3) included facts “material” to a drug’s advertised uses, and (4) included a “brief summary” of all risks described in the drug’s labeling.
Direct-to-consumer advertising increased dramatically in the late 1990s after the FDA eased regulations around risk information by requiring ads to include only “major risks” and provide resources directing consumers to full risk information.
In 2022, the top 10 pharmaceutical ad spenders combined for a total of about $1.7 billion in TV ad spend, with the two top categories being inflammation and diabetes.
The role of the FDA in regulating DTCA of at-home tests is still evolving and largely depends on the intended use of the test results and the health claims used to market the test (that is, whether the test is designed to simply return general information, as in case 3 where DTCA regulations may not apply, or is marketed with specific medical claims or diagnostic interpretations, as in case 2 with clear applications for DTCA regulations.).
It has both potential benefits and potential risks. DTCA can serve to increase disease awareness (e.g., the need for colon cancer screening). It may also prompt patients who might otherwise disregard “red flag” signs and symptoms to seek medical evaluation (e.g., rectal bleeding). DTCA can also alert healthcare providers to new treatment options for diseases within their scope of practice and encourage them to expand their armamentarium.
In bioethics terms, DTCA can be beneficial in facilitating patient autonomy and promoting justice. For example, DTCA can “even the playing field” by ensuring that patients have equitable access to information about available treatments regardless of their socioeconomic status. In doing so, it can empower patients and allow them to have more meaningful discussions with their health care providers and make more informed decisions. In addition, patients may be introduced to alternative testing modalities (i.e., stool-based CRC screening) that, while not necessarily the best screening modality given individual risk (as in Case 2), may offer benefit with greater acceptance compared to inaction (i.e., no screening). Last, the idea of direct-to-consumer “omics” profiling has empowered patients as “citizen scientists” and led to the advent of “biohacking” among the lay population. In doing so, it has challenged the previous bounds of patient autonomy in healthcare by broadening the types of personal health data available to individuals, even when the clinical utility of this data may not yet be clear.
On the flip side, it is undeniable that DTCA of medical products is driven by commercial interests. Branded drugs are primarily, if not exclusively, promoted in DTCA, but these drugs may not always align with standard of care treatment recommendations. A study published in February 2023 in JAMA found that drugs with lower added clinical benefit and higher total sales were associated with higher DTCA spending.
With patients entering medical encounters with preconceived notions of what drugs they want to be prescribed based on media exposure, the ability of health care providers to provide sound medical advice regarding treatment may be circumvented. A patient’s preferred therapy based on exposure to DTCA may sharply contrast with their provider’s recommendation based on their experience and expertise and knowledge of the patient’s unique clinical history.
Unreasonable expectations
An additional potential downside of DTCA is that it can instill unreasonable expectations on the part of patients that the advertised product will benefit them. While DTCA is required to be fair and balanced in reporting benefits and risks, it is difficult to meaningfully address nuanced clinical risks in a brief TV ad and patients may come away with a skewed view of the risk-benefit equation. Furthermore, social media advertising and associated formats may not provide the same level of digestible information as other forms of media and are targeted to individuals likely to identify with the product. Finally, as stated above, only branded products (vs. generics) are advertised. Branded products are generally more costly, and where less expensive and equally effective therapies exist societal costs need to be considered. This can lead to inequities in distributive justice which is the societal allocation of resources. The more the healthcare market is driven towards higher costs in one segment, the less resources are available in another. This may affect regions where healthcare resources are limited.
Shared decision-making
Returning to the 3 cases above, in case 1 the UC patient’s awareness of new treatment options has prompted a shared decision-making discussion. She has a renewed interest in exploring a different route of medication administration because of DTCA. In spite of the patient seemingly well-controlled on her current IBD therapy and minor fluctuations in symptoms that might otherwise be a reason to observe more closely, the patient sees this as a reason to change her treatment based on her impression from the DTCA.
Regarding case 2, disease awareness and CRC screening acceptance is itself a positive outcome. Although commercially driven, the outcome/benefit to society leads to a decrease in disease burden and is a ready alternative with established benefits compared to no screening.
Regarding the proactive “omics-curious” IBD patient in case 3, despite the patient’s disappointment with the DCTA-promoted test’s limited clinical utility at this time, the patient may be communicating a general openness to novel approaches to treating IBD and to advancing her understanding of her disease.
So where does that leave you as a clinician?
As of today, if you live in the U.S., DTCA is a reality. As you navigate day-to-day patient interactions, it is important to keep in mind that your first obligation is to your patients and their best medical interests. Having a well-informed and engaged patient is a positive effect of DTCA over the past 30 years despite the challenging discussions you sometimes are forced to have. In many cases, patients are more self-aware of and engaged in their health and are acting on it due to direct acquisition of information from DTCA platforms. As a clinician, you have an ethical obligation to educate your patients and manage expectations, even when those expectations may be formed on the basis of DTCA with inherent conflicts of interest for promoting a product. Moreover, though certain products may be trendy or promoted on a popular science basis, the underlying technology (i.e. stool-based screening or metagenomics) and/or resultant data are likely not well understood by the typical lay patient, such that it may be difficult for a patient to comprehend how the product may not be particularly informative or inappropriate with respect to their personal medical history without additional counseling. Despite the potentially awkward clinician-patient dynamic precipitated by DTCA, these moments do offer an opportunity for you to gain rapport with your patients by taking the time to fill in the gaps of their understanding of their treatment plans, alternatives, individual risk factors and/or disease while gaining a greater appreciation for what they may personally prioritize with respect to their own health. Ultimately, as we transition further toward precision medicine approaches in healthcare, shared interest in individualized health decisions, at least partially informed by DTCA, is a positive outcome.
Dr. Sloan is a chief medical officer at Abivax. He is a member of the AGA Ethics Committee for COI. David A. Drew, PhD, is assistant professor of medicine, Harvard Medical School, Boston. He is director of the Massachusetts General Hospital Biobanking, Clinical & Translational Epidemiology Unit, Boston. He is a member of the AGA Ethics Committee.
First-line therapy in T2D: Has metformin been ‘dethroned’?
Initially approved by the U.S. Food and Drug Administration (FDA) in 1994, metformin has been the preferred first-line glucose-lowering agent for patients with type 2 diabetes (T2D) owing to its effectiveness, low hypoglycemia risk, weight neutrality, long clinical track record of safety, and affordability. However, the advent of newer glucose-lowering agents with evidence-based cardiovascular (CV) and renal benefits calls into question whether metformin should continue to be the initial pharmacotherapy for all patients with T2D.
Cardiovascular outcome trials transform standard of care
In 2008, the FDA issued guidance to industry to ensure that CV risk is more thoroughly addressed during development of T2D therapies. This guidance document required dedicated trials to establish the CV safety of new glucose-lowering therapies. Findings from the resulting cardiovascular outcome trials (CVOTs), and from subsequent large renal and heart failure (HF) outcome trials, have since prompted frequent and substantial updates to major guidelines. On the basis of this evidence from CVOTs and renal trials, contemporary clinical practice guidelines have transitioned from a traditional glucocentric treatment approach to a holistic management approach that emphasizes organ protection through heart-kidney-metabolic risk reduction.
Per the 2008 FDA guidance, dipeptidyl peptidase-4 (DPP-4) inhibitors, glucagonlike peptide-1 (GLP-1) receptor agonists, and sodium-glucose cotransporter-2 (SGLT2) inhibitors were evaluated in large dedicated CVOTs. Findings from several CVOTs established the CV safety of GLP-1 receptor agonists and SGLT2 inhibitors and unexpectedly demonstrated reduced rates of major adverse cardiovascular events (MACE) relative to placebo. The LEADER and EMPA-REG OUTCOME trials were the first CVOTs to report cardioprotective benefits of the GLP-1 receptor agonist liraglutide and the SGLT2 inhibitor empagliflozin, respectively. The LEADER trial reported a significant 13% relative risk reduction for its primary composite MACE outcome, and the EMPA-REG OUTCOME trial similarly reported a 14% relative risk reduction for MACE. After CVOTs on other GLP-1 receptor agonists and SGLT2 inhibitors reported CV benefit, clinical practice guidelines began to recommend use of these agents in at-risk patients to mitigate CV risk.
During the period when most CVOTs were designed and conducted, a majority of trial participants were receiving metformin at baseline. Inclusion of a small subset of metformin-naive participants in these trials allowed for several post hoc and meta-analyses investigating the impact of background metformin use on the overall CV benefits reported. Depending on the trial, baseline metformin use in large GLP-1 receptor agonist CVOTs ranged from 66% to 81%. For instance, 76% of participants in the LEADER trial were receiving metformin at baseline, but a post hoc analysis found no heterogeneity for the observed CV benefit based on background metformin use. Similarly, a subgroup analysis of pooled data from the SUSTAIN-6 and PIONEER 6 trials of injectable and oral formulations of semaglutide, respectively, reported similar CV outcomes for participants, regardless of concomitant metformin use. When looking at the GLP-1 receptor agonist class overall, a meta-analysis of seven CVOTs, which included participants with established atherosclerotic cardiovascular disease (ASCVD) and those with multiple ASCVD risk factors, concluded that GLP-1 receptor agonist therapy reduced the overall incidence of MACE in participants not receiving concomitant metformin at baseline.
Similar analyses have examined the impact of background metformin use on CV outcomes with SGLT2 inhibitors. An analysis of EMPA-REG OUTCOME found that empagliflozin improved CV outcomes and reduced mortality irrespective of background metformin, sulfonylurea, or insulin use. Of note, this analysis suggested a greater risk reduction for incident or worsening nephropathy in patients not on concomitant metformin (hazard ratio, 0.47; 95% confidence interval, 0.37-0.59; P = .01), when compared with those taking metformin at baseline (HR, 0.68; 95% CI, 0.58-0.79; P = .01). In addition, a meta-analysis of six large outcome trials found consistent benefits of SGLT2 inhibition on CV, kidney, and mortality outcomes regardless of background metformin treatment. Therefore, although CVOTs on GLP-1 receptor agonists and SGLT2 inhibitors were not designed to assess the impact of background metformin use on CV outcomes, available evidence supports the CV benefits of these agents independent of metformin use.
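To translate between the two ways these results are reported above, the relative risk reduction is simply one minus the hazard ratio (treating the hazard ratio as an approximation of relative risk over the trial period). Applied to the figures quoted above, the arithmetic is:

```latex
\[
\text{RRR} = 1 - \text{HR}
\qquad\Longrightarrow\qquad
\begin{aligned}
\text{HR} = 0.47 &\;\Rightarrow\; \text{RRR} = 0.53 \;(53\%,\ \text{no background metformin})\\
\text{HR} = 0.68 &\;\Rightarrow\; \text{RRR} = 0.32 \;(32\%,\ \text{background metformin})\\
\text{HR} \approx 0.86 &\;\Rightarrow\; \text{RRR} \approx 0.14 \;(14\%,\ \text{the EMPA-REG OUTCOME MACE result})
\end{aligned}
\]
```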
Individualizing care to attain cardiorenal-metabolic goals
Three dedicated SGLT2 inhibitor renal outcome trials have been published to date: CREDENCE, DAPA-CKD, and EMPA-KIDNEY. All three studies confirmed the positive secondary renal outcomes observed in SGLT2 inhibitor CVOTs: reduced progression of kidney disease, fewer HF-associated hospital admissions, and fewer CV-related deaths. The observed renal and CV benefits from the CREDENCE trial were consistent across different levels of kidney function. Similarly, a meta-analysis of five SGLT2 inhibitor trials in patients with HF demonstrated a decreased risk for CV-related death and admission for HF, irrespective of baseline heart function. The ongoing FLOW trial is the first dedicated kidney outcome trial to evaluate the effectiveness of a GLP-1 receptor agonist (semaglutide) in slowing the progression and worsening of chronic kidney disease (CKD) in patients with T2D.
As previously noted, findings from the LEADER and EMPA-REG OUTCOME trials demonstrated the beneficial effects of GLP-1 receptor agonists and SGLT2 inhibitors not only on MACE but also on secondary HF and kidney disease outcomes. These findings have supported a series of dedicated HF and kidney outcome trials that further inform the standard of care for patients with these key comorbidities. Indeed, the American Diabetes Association’s 2023 Standards of Care in Diabetes updated its recommendations and algorithm for the use of glucose-lowering medications in the management of T2D. The current ADA recommendations stress cardiorenal risk reduction while concurrently achieving and maintaining glycemic and weight management goals. On the basis of evolving outcome trial data, GLP-1 receptor agonists and SGLT2 inhibitors with evidence of benefit are recommended for patients with established ASCVD or at high risk for ASCVD. Further, the Standards preferentially recommend SGLT2 inhibitors for patients with HF and/or CKD. Because evidence suggests no heterogeneity of the MACE benefit of GLP-1 receptor agonists by hemoglobin A1c and no heterogeneity of the HF or CKD benefits of SGLT2 inhibitors, these agents are recommended for cardiorenal risk reduction regardless of the need to lower glucose.
The 2023 update to the American Association of Clinical Endocrinology Consensus Statement: Type 2 Diabetes Management Algorithm similarly recommends the use of GLP-1 receptor agonists and SGLT2 inhibitors to improve cardiorenal outcomes. To further emphasize the importance of prescribing agents with proven organ-protective benefits, the AACE consensus statement provides a complications-centric algorithm to guide therapeutic decisions for risk reduction in patients with key comorbidities (for instance, ASCVD, HF, CKD) and a separate glucocentric algorithm to guide selection and intensification of glucose-lowering agents in patients without key comorbidities to meet individualized glycemic targets. Within the complications-centric algorithm, AACE recommends GLP-1 receptor agonists and SGLT2 inhibitors as first-line treatment for cardiorenal risk reduction regardless of background metformin use or A1c level.
In addition to the emphasis on the use of GLP-1 receptor agonists and SGLT2 inhibitors for organ protection, guidelines now recommend SGLT2 inhibitors as standard-of-care therapy in patients with T2D and CKD with an estimated glomerular filtration rate ≥ 20 mL/min per 1.73 m², and, in the setting of HF, irrespective of ejection fraction or a diagnosis of diabetes. Overall, a common thread within current guidelines is the importance of individualized therapy based on patient- and medication-specific factors.
Optimizing guideline-directed medical therapy
Results from the DISCOVER trial found that GLP-1 receptor agonist and SGLT2 inhibitor use was less likely in the key patient subgroups most likely to benefit from therapy, including patients with peripheral artery disease and CKD. Factors contributing to underutilization of newer cardiorenal protective glucose-lowering therapies range from cost and access barriers to clinician-level barriers (for example, lack of knowledge on CKD, lack of familiarity with CKD practice guidelines). Addressing these issues and helping patients work through financial and other access barriers is essential to optimize the utilization of these therapies and improve cardiorenal and metabolic outcomes.
So, has metformin been “dethroned” as first-line therapy for T2D? As is often the case in medicine, the answer depends on the individual patient and clinical situation. Metformin remains an important first-line treatment, in combination with lifestyle interventions, to help patients with T2D without key cardiorenal comorbidities achieve individualized glycemic targets. However, based on evidence of cardiorenal protective benefits as well as improved glycemia and weight loss, GLP-1 receptor agonists and SGLT2 inhibitors may be considered first-line treatment for patients with T2D who have, or are at high risk for, ASCVD, HF, or CKD, regardless of the need for additional glucose lowering and independent of background metformin. Ultimately, the choice of first-line therapy for patients with T2D should be informed by individualized treatment goals, preferences, and cost-related access. Continued efforts to increase patient access to GLP-1 receptor agonists and SGLT2 inhibitors as first-line treatment when indicated are essential to ensure optimal treatment and outcomes.
Dr. Neumiller is professor, department of pharmacotherapy, Washington State University, Spokane. He disclosed ties with Bayer, Boehringer Ingelheim, and Eli Lilly. Dr. Alicic is clinical professor, department of medicine, University of Washington; and associate director of research, Inland Northwest Washington, Providence St. Joseph Health, Spokane. She disclosed ties with Providence St. Joseph Health, Boehringer Ingelheim/Lilly, and Bayer.
A version of this article appeared on Medscape.com.
Certain genes predict abdominal fat regain after weight loss
People with a genetic predisposition for abdominal adiposity regained more weight around their waist after weight loss than other people.
However, people with a genetic predisposition for a higher body mass index did not regain more weight after weight loss than others.
These findings come from a secondary analysis of data from Look AHEAD trial participants with type 2 diabetes and overweight or obesity who had lost at least 3% of their initial weight after 1 year of intensive lifestyle intervention or control, and who were then followed for another 3 years.
The study showed that change in waist circumference (aka abdominal obesity) is regulated by a separate pathway from overall obesity during weight regain, the researchers report in their paper, published in Diabetes.
“These findings are the first of their kind and provide new insights into the mechanisms of weight regain,” they conclude.
“It was already known in the scientific literature that genes that are associated with abdominal fat deposition are different from the ones associated with overall obesity,” Malene Revsbech Christiansen, a PhD student, and Tuomas O. Kilpeläinen, PhD, associate professor, Novo Nordisk Foundation Center for Basic Metabolic Research, University of Copenhagen, said in a joint email to this news organization.
Genetic variants associated with obesity are expressed in the central nervous system. However, genetic variants associated with waist circumference are expressed in the adipose tissues and might be involved in insulin sensitivity, or fat cell shape and differentiation, influencing how much adipose cells can expand in size or in number.
If those genes can function as targets for therapeutic agents, this might benefit patients who possess the genetic variants that predispose them to a higher waist-to-hip ratio adjusted for BMI (WHR-adjBMI), they said.
“However, this is a preliminary study that discovered an association between genetic variants and abdominal fat changes during weight loss,” they cautioned.
Further study is needed, they said, to test the associations in people without obesity and type 2 diabetes and to investigate this research question in people who underwent bariatric surgery or took weight-loss medications, “especially now that Wegovy has increased in popularity.”
“Genetic profiling,” they noted, “is becoming more popular as the prices go down, and future treatments are moving towards precision medicine, where treatments are tailored towards individuals rather than ‘one size fits all.’ ”
In the future, genetic tests might identify people who are more predisposed to abdominal fat deposition, hence needing more follow-up and help with lifestyle changes.
“For now, it does not seem realistic to test individuals for all these 481 [genetic] variants [predisposing to abdominal adiposity]. Each of these genetic variants predisposes, but is not deterministic, for the outcome, because of their individual small effects on waist circumference.”
“It should be stated,” they added, “that changing the diet, physical activity pattern, and behavior are still the main factors when losing weight and maintaining a healthy body.”
Maintaining weight loss is the big challenge
“Lifestyle interventions typically result in an average weight loss of 7%-10% within 6 months; however, maintaining the weight loss is a significant challenge, as participants often regain an average one-third of the lost weight within 1 year and 50%-100% within 5 years,” the researchers write.
They aimed to study whether genetic predisposition to general or abdominal obesity predicts weight gain after weight loss, based on data from 822 women and 593 men in the Look AHEAD trial.
On average, at 1 year after the intervention, participants in the intensive lifestyle group lost 24 lbs (10.9 kg) and 3.55 inches (9 cm) around the waist, and participants in the control group lost 15 lbs (6.8 kg) and 1.98 inches (5 cm) around the waist.
From year 1 to year 2, participants in the intensive lifestyle group regained 6.09 lbs and 0.98 inches around the waist, and participants in the control group lost 1.41 lbs and 0.17 inches around the waist.
From year 1 to year 4, participants in the intensive lifestyle group regained 11.05 lbs and 1.92 inches around the waist, and participants in the control group lost 2.24 lbs and 0.76 inches around the waist.
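As a quick arithmetic check of the dual units reported for year 1 (this is unit conversion only; the weight and waist figures are those quoted above, and small rounding differences are expected):

```python
# Unit-conversion check of the year-1 figures quoted above.
KG_TO_LB = 2.20462
CM_TO_IN = 0.393701

print(f"10.9 kg ≈ {10.9 * KG_TO_LB:.1f} lb")  # ≈ 24.0 lb (intensive group weight loss)
print(f"6.8 kg  ≈ {6.8 * KG_TO_LB:.1f} lb")   # ≈ 15.0 lb (control group weight loss)
print(f"9 cm    ≈ {9 * CM_TO_IN:.2f} in")     # ≈ 3.54 in (intensive group waist loss)
print(f"5 cm    ≈ {5 * CM_TO_IN:.2f} in")     # ≈ 1.97 in (control group waist loss)
```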
From genome-wide association studies (GWAS) in about 700,000 mainly White individuals of European origin, the researchers constructed a genetic risk score based on 894 independent single nucleotide polymorphisms (SNPs) associated with high BMI and another genetic risk score based on 481 SNPs associated with high WHR-adjBMI.
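The genetic risk scores described above are, at their core, weighted sums of risk-allele counts. The paper’s exact scoring pipeline is not detailed here, so the following is only a minimal sketch of the standard approach, assuming additive per-allele effects; the SNP count is taken from the article, but the genotypes and effect sizes below are simulated for illustration.

```python
import numpy as np

# Minimal sketch of a weighted genetic risk score (GRS): each SNP contributes
# its risk-allele count (0, 1, or 2) multiplied by the per-allele effect size
# estimated in the GWAS. All values below are simulated, not real data.
rng = np.random.default_rng(0)

n_snps = 481                                   # WHR-adjBMI SNPs cited in the article
genotypes = rng.integers(0, 3, size=n_snps)    # hypothetical risk-allele counts per SNP
effect_sizes = rng.normal(0.0, 0.02, n_snps)   # hypothetical GWAS effect sizes (betas)

grs = float(np.dot(genotypes, effect_sizes))   # weighted sum = the individual's score
print(f"Illustrative WHR-adjBMI genetic risk score: {grs:.3f}")
```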
Having a genetic predisposition to higher WHR-adjBMI predicted an increase in abdominal obesity after weight loss, whereas having a genetic predisposition to higher BMI did not predict weight regain.
“These results suggest that genetic effects on abdominal obesity may be more pronounced than those on general obesity during weight regain,” the researchers conclude.
The researchers were supported by grants from the Novo Nordisk Foundation and the Danish Diabetes Academy (funded by the Novo Nordisk Foundation). The authors report no relevant financial relationships.
A version of this article appeared on Medscape.com.
People with a genetic predisposition for abdominal adiposity regained more weight around their waist after weight loss than other people.
However, people with a genetic predisposition for a higher body mass index did not regain more weight after weight loss than others.
These findings are from a secondary analysis of data from participants in the Look AHEAD trial who had type 2 diabetes and overweight/obesity and had lost at least 3% of their initial weight after 1 year of intensive lifestyle intervention or control, who were followed for another 3 years.
The study showed that change in waist circumference (aka abdominal obesity) is regulated by a separate pathway from overall obesity during weight regain, the researchers report in their paper, published in Diabetes.
“These findings are the first of their kind and provide new insights into the mechanisms of weight regain,” they conclude.
“It was already known in the scientific literature that genes that are associated with abdominal fat deposition are different from the ones associated with overall obesity,” Malene Revsbech Christiansen, a PhD student, and Tuomas O. Kilpeläinen, PhD, associate professor, Novo Nordisk Foundation Center for Basic Metabolic Research, University of Copenhagen, said in a joint email to this news organization.
Genetic variants associated with obesity are expressed in the central nervous system. However, genetic variants associated with waist circumference are expressed in the adipose tissues and might be involved in insulin sensitivity, or fat cell shape and differentiation, influencing how much adipose cells can expand in size or in number.
If those genes can function as targets for therapeutic agents, this might benefit patients who possess the genetic variants that predispose them to a higher waist-to-hip ratio adjusted for BMI (WHR-adjBMI), they said.
“However, this is a preliminary study that discovered an association between genetic variants and abdominal fat changes during weight loss,” they cautioned.
Further study is needed, they said, to test the associations in people without obesity and type 2 diabetes and to investigate this research question in people who underwent bariatric surgery or took weight-loss medications, “especially now that Wegovy has increased in popularity.”
“Genetic profiling,” they noted, “is becoming more popular as the prices go down, and future treatments are moving towards precision medicine, where treatments are tailored towards individuals rather than ‘one size fits all.’ ”
In the future, genetic tests might identify people who are more predisposed to abdominal fat deposition, hence needing more follow-up and help with lifestyle changes.
“For now, it does not seem realistic to test individuals for all these 481 [genetic] variants [predisposing to abdominal adiposity]. Each of these genetic variants predisposes, but is not deterministic, for the outcome, because of their individual small effects on waist circumference.”
“It should be stated,” they added, “that changing the diet, physical activity pattern, and behavior are still the main factors when losing weight and maintaining a healthy body.”
Maintaining weight loss is the big challenge
“Lifestyle interventions typically result in an average weight loss of 7%-10 % within 6 months; however, maintaining the weight loss is a significant challenge, as participants often regain an average one-third of the lost weight within 1 year and 50%-100% within 5 years,” the researchers write.
They aimed to study whether genetic predisposition to general or abdominal obesity predicts weight gain after weight loss, based on data from 822 women and 593 men in the Look AHEAD trial.
On average, at 1 year after the intervention, the participants in the intensive lifestyle group lost 24 lbs (10.9 kg) and 3.55 inches (9 cm) around the waist, and participants in the control group lost 15 lbs (6.8 kg) pounds and 1.98 inches (5 cm) around the waist.
From year 1 to year 2, participants in the intensive lifestyle group regained 6.09 lbs and 0.98 inches around the waist, and participants in the control group lost 1.41 lbs and 0.17 inches around the waist.
From year 1 to year 4, participants in the intensive lifestyle group regained 11.05 lbs and 1.92 inches around the waist, and participants in the control group lost 2.24 lbs and 0.76 inches around the waist.
From genome-wide association studies (GWAS) in about 700,000 mainly White individuals of European origin, the researchers constructed a genetic risk score based on 894 independent single nucleotide polymorphisms (SNPs) associated with high BMI and another genetic risk score based on 481 SNPs associated with high WHR-adjBMI.
Having a genetic predisposition to higher WHR-adjBMI predicted an increase in abdominal obesity after weight loss, whereas having a genetic predisposition to higher BMI did not predict weight regain.
“These results suggest that genetic effects on abdominal obesity may be more pronounced than those on general obesity during weight regain,” the researchers conclude.
The researchers were supported by grants from the Novo Nordisk Foundation and the Danish Diabetes Academy (funded by the Novo Nordisk Foundation). The authors report no relevant financial relationships.
A version of this article appeared on Medscape.com.
People with a genetic predisposition for abdominal adiposity regained more weight around their waist after weight loss than other people.
However, people with a genetic predisposition for a higher body mass index did not regain more weight after weight loss than others.
These findings are from a secondary analysis of data from participants in the Look AHEAD trial who had type 2 diabetes and overweight/obesity and had lost at least 3% of their initial weight after 1 year of intensive lifestyle intervention or control, who were followed for another 3 years.
The study showed that change in waist circumference (aka abdominal obesity) is regulated by a separate pathway from overall obesity during weight regain, the researchers report in their paper, published in Diabetes.
“These findings are the first of their kind and provide new insights into the mechanisms of weight regain,” they conclude.
“It was already known in the scientific literature that genes that are associated with abdominal fat deposition are different from the ones associated with overall obesity,” Malene Revsbech Christiansen, a PhD student, and Tuomas O. Kilpeläinen, PhD, associate professor, Novo Nordisk Foundation Center for Basic Metabolic Research, University of Copenhagen, said in a joint email to this news organization.
Genetic variants associated with obesity are expressed in the central nervous system. However, genetic variants associated with waist circumference are expressed in the adipose tissues and might be involved in insulin sensitivity, or fat cell shape and differentiation, influencing how much adipose cells can expand in size or in number.
If those genes can function as targets for therapeutic agents, this might benefit patients who possess the genetic variants that predispose them to a higher waist-to-hip ratio adjusted for BMI (WHR-adjBMI), they said.
“However, this is a preliminary study that discovered an association between genetic variants and abdominal fat changes during weight loss,” they cautioned.
Further study is needed, they said, to test the associations in people without obesity and type 2 diabetes and to investigate this research question in people who underwent bariatric surgery or took weight-loss medications, “especially now that Wegovy has increased in popularity.”
“Genetic profiling,” they noted, “is becoming more popular as the prices go down, and future treatments are moving towards precision medicine, where treatments are tailored towards individuals rather than ‘one size fits all.’ ”
In the future, genetic tests might identify people who are more predisposed to abdominal fat deposition, hence needing more follow-up and help with lifestyle changes.
“For now, it does not seem realistic to test individuals for all these 481 [genetic] variants [predisposing to abdominal adiposity]. Each of these genetic variants predisposes, but is not deterministic, for the outcome, because of their individual small effects on waist circumference.”
“It should be stated,” they added, “that changing the diet, physical activity pattern, and behavior are still the main factors when losing weight and maintaining a healthy body.”
Maintaining weight loss is the big challenge
“Lifestyle interventions typically result in an average weight loss of 7%-10 % within 6 months; however, maintaining the weight loss is a significant challenge, as participants often regain an average one-third of the lost weight within 1 year and 50%-100% within 5 years,” the researchers write.
They aimed to study whether genetic predisposition to general or abdominal obesity predicts weight gain after weight loss, based on data from 822 women and 593 men in the Look AHEAD trial.
On average, at 1 year after the intervention, participants in the intensive lifestyle group lost 24 lbs (10.9 kg) and 3.55 inches (9 cm) around the waist, and participants in the control group lost 15 lbs (6.8 kg) and 1.98 inches (5 cm) around the waist.
From year 1 to year 2, participants in the intensive lifestyle group regained 6.09 lbs and 0.98 inches around the waist, and participants in the control group lost 1.41 lbs and 0.17 inches around the waist.
From year 1 to year 4, participants in the intensive lifestyle group regained 11.05 lbs and 1.92 inches around the waist, and participants in the control group lost 2.24 lbs and 0.76 inches around the waist.
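To put the regain figures above in perspective, here is a minimal arithmetic sketch (not taken from the paper) expressing the intensive lifestyle group's regain as a fraction of the weight it initially lost; all values are the reported averages.

```python
# Minimal sketch (not from the paper): fraction of lost weight regained,
# using the average values reported above for the intensive lifestyle group.
lost_at_year1_lbs = 24.0       # average weight lost at 1 year
regained_y1_to_y2_lbs = 6.09   # average regain from year 1 to year 2
regained_y1_to_y4_lbs = 11.05  # average regain from year 1 to year 4

print(f"Regained by year 2: {regained_y1_to_y2_lbs / lost_at_year1_lbs:.0%} of the lost weight")
print(f"Regained by year 4: {regained_y1_to_y4_lbs / lost_at_year1_lbs:.0%} of the lost weight")
# Roughly 25% by year 2 and 46% by year 4, consistent with the regain
# pattern described earlier in the article.
```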
From genome-wide association studies (GWAS) in about 700,000 mainly White individuals of European origin, the researchers constructed a genetic risk score based on 894 independent single nucleotide polymorphisms (SNPs) associated with high BMI and another genetic risk score based on 481 SNPs associated with high WHR-adjBMI.
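For readers unfamiliar with genetic risk scores, the sketch below illustrates the general idea of a beta-weighted score. The array sizes, effect sizes, and function name are illustrative assumptions, not the authors' actual pipeline, which was built from GWAS summary statistics for the 894 BMI-associated and 481 WHR-adjBMI-associated variants.

```python
# Minimal illustration of a weighted genetic risk score (not the authors' code).
# Assumes per-SNP effect sizes (betas) from GWAS summary statistics and
# genotype dosages coded as 0, 1, or 2 risk alleles per individual.
import numpy as np

def genetic_risk_score(dosages: np.ndarray, betas: np.ndarray) -> np.ndarray:
    """Return one score per individual: the beta-weighted sum of risk-allele dosages."""
    return dosages @ betas

# Toy data: 3 individuals, 4 SNPs (a real score would use the 894 BMI or
# 481 WHR-adjBMI variants described above).
dosages = np.array([[0, 1, 2, 1],
                    [2, 2, 1, 0],
                    [1, 0, 0, 1]])
betas = np.array([0.02, 0.05, 0.01, 0.03])

print(genetic_risk_score(dosages, betas))  # [0.10, 0.15, 0.05]
```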
Having a genetic predisposition to higher WHR-adjBMI predicted an increase in abdominal obesity after weight loss, whereas having a genetic predisposition to higher BMI did not predict weight regain.
“These results suggest that genetic effects on abdominal obesity may be more pronounced than those on general obesity during weight regain,” the researchers conclude.
The researchers were supported by grants from the Novo Nordisk Foundation and the Danish Diabetes Academy (funded by the Novo Nordisk Foundation). The authors report no relevant financial relationships.
A version of this article appeared on Medscape.com.
FROM DIABETES
More cuts to physician payment ahead
In July, the Centers for Medicare and Medicaid Services released the 2024 Physician Fee Schedule (PFS) proposed rule outlining proposed policy changes for Medicare payments. The proposed rule contains 2,883 pages of proposals for physician, hospital outpatient department, and ambulatory surgery center (ASC) payments for calendar year 2024. For gastroenterologists, there was good news and bad news.
Medicare physician payments have been cut each year for the better part of a decade, with additional cuts proposed for 2024.
According to the American Medical Association, Medicare physician payment has already declined 26% in the last 22 years when adjusted for inflation, and that’s before factoring in the proposed cuts for 2024. Physicians are among the only health care providers without an automatic inflationary increase, the AMA reports.
AGA opposes additional cuts to physician payments and will continue to advocate to stop them. AGA and many other specialty societies support H.R. 2474, the Strengthening Medicare for Patients and Providers Act. This bill would provide a permanent, annual update equal to the increase in the Medicare Economic Index, which is how the government measures inflation in medical practice. We will continue to advocate for permanent positive annual inflation updates, which would allow physicians to invest in their practices and implement new strategies to provide high-value care.
But in some positive news from the 2024 Medicare PFS, the Hospital Outpatient Prospective Payment System (OPPS) and ASC proposed rules include increased hospital outpatient department and ASC payments, continued telemedicine reimbursement and coverage through 2024, and a second one-year delay in changes to rules governing split/shared visits. Specifically:
OPPS Conversion Factor: The proposed CY 2024 Medicare conversion factor for outpatient hospital departments is $87.488, an increase of 2.8%, for hospitals that meet applicable quality reporting requirements.
ASC Conversion Factor: The proposed CY 2024 Ambulatory Surgical Center conversion factor is $53.397, an increase of 2.8%, for ASCs that meet applicable quality reporting requirements. The AGA and our sister societies continue to urge CMS to narrow the gap between ASC facility fees and outpatient hospital facility rates, a differential estimated at roughly 48% in CY 2024.
Telehealth: CMS proposes to continue reimbursing telehealth services at current levels through 2024. Payment for audio-only evaluation and management (E/M) codes will continue at parity with follow-up in-person visits as it has throughout the pandemic. Additionally, CMS is implementing telehealth flexibilities that were included in the Consolidated Appropriations Act 2023 by allowing telehealth visits to originate at any site in the United States. This will allow patients throughout the country to maintain access to needed telehealth services without facing the logistical and safety challenges that can surround in-person visits. CMS is proposing to pay telehealth services at the nonfacility payment rate, which is the same rate as in-person office visits, lift the frequency limits on telehealth visits for subsequent hospital and skilled nursing facility visits, and allow direct supervision to be provided virtually.
Split (or shared) visits: CMS has proposed a second one-year delay to its proposed split/shared visits policy. The original proposal required that the billing provider in split/shared visits be whoever spent more than half of the total time with the patient (making time the only way to define the substantive portion). CMS plans to delay that requirement through at least Dec. 31, 2024. In the interim, practices can continue to use one of the three key components (history, exam, or medical decision-making) or more than half of the total time spent to determine who can bill for the visit. The GI societies will continue to advocate for appropriate reimbursement that aligns with new team-based models of care delivery.
Notably, the split (or shared) visits policy was also delayed in 2023 because of widespread concerns and feedback that the policy would disrupt team-based care and care delivery in the hospital setting. The American Medical Association CPT editorial panel, the body responsible for creating and maintaining CPT codes, has approved revisions to E/M guidelines that may help address some of CMS’s concerns.
For more information on issues affecting gastroenterologists in the 2024 Medicare PFS and OPPS/ASC proposed rules, visit the AGA news website.
Dr. Garcia serves as an advisor to the AGA AMA Relative-value Update Committee. She is clinical associate professor of medicine at Stanford (Calif.) University, where she is director of the neurogastroenterology and motility laboratory in the division of gastroenterology and hepatology, and associate chief medical information officer in ambulatory care at Stanford Health Care.
Which factors distinguish superagers from the rest of us?
Even at an advanced age, superagers have the memory of someone 20 or 30 years their junior. But why is that? A new study shows that, in superagers, the brain regions responsible for memory shrink far more slowly than in their normally aging peers. However, the study also emphasizes the importance of physical and mental fitness for a healthy aging process.
“One of the most important unanswered questions with regard to superagers is: ‘Are they resistant to age-related memory loss, or do they have coping mechanisms that allow them to better offset this memory loss?’ ” wrote Marta Garo-Pascual, a PhD candidate at the Autonomous University of Madrid, Spain, and colleagues in the Lancet Healthy Longevity. “Our results indicate that superagers are resistant to these processes.”
Six years’ monitoring
From a cohort of older adults who had participated in a study aiming to identify early indicators of Alzheimer’s disease, the research group chose 64 superagers and 55 normal senior citizens. The latter served as the control group. While the superagers performed just as well in a memory test as people 30 years their junior, the control group’s performance was in line with their age and level of education.
All study participants were over age 79 years. Both the group of superagers and the control group included more females than males. On average, they were monitored for 6 years. During this period, a checkup was scheduled annually with an MRI examination, clinical tests, blood tests, and documentation of lifestyle factors.
For Alessandro Cellerino, PhD, of the Leibniz Institute on Aging–Fritz Lipmann Institute in Jena, Germany, this is the most crucial aspect of the study. “Even before this study, we knew that superagers demonstrated less atrophy in certain areas of the brain, but this was always only ever based on a single measurement.”
Memory centers protected
The MRI examinations confirmed that in superagers, gray matter atrophy in the regions responsible for memory (such as the medial temporal lobe and cholinergic forebrain), as well as in regions important for movement (such as the motor thalamus), was less pronounced. In addition, the volume of gray matter in these regions, especially in the medial temporal lobe, decreased much more slowly in the superagers than in the control subjects over the study period.
Ms. Garo-Pascual and associates used a machine-learning algorithm to differentiate between superagers and normal older adults. From the 89 demographic, lifestyle, and clinical factors entered into the algorithm, two were the most important for the classification: the ability to move and mental health.
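The paper does not specify which machine-learning algorithm was used. The sketch below shows, with a random forest and synthetic data chosen purely for illustration, how such a model can be trained on participant-level factors and then queried for the factors that contribute most to the classification.

```python
# Minimal sketch of classifying superagers vs. normal older adults and ranking
# the predictors by importance. The algorithm (random forest) and the data are
# assumptions for illustration; the study's actual model and data differ.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_participants, n_factors = 119, 89                 # 64 superagers + 55 controls, 89 factors
X = rng.normal(size=(n_participants, n_factors))    # synthetic factor values
y = rng.integers(0, 2, size=n_participants)         # synthetic labels: 1 = superager

clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)

# Rank factors by importance; in the study, mobility and mental health ranked highest.
top5 = np.argsort(clf.feature_importances_)[::-1][:5]
print("Most influential factor indices:", top5)
```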
Mobility and mental health
Clinical tests such as the Timed Up-and-Go Test and the Finger Tapping Test revealed that superagers can be distinguished from the normally aging control subjects with regard to their mobility and fine motor skills. Their physical condition was better, although they, by their own admission, did not move any more than the control subjects in day-to-day life. According to Dr. Cellerino, this finding confirms that physical activity is paramount for cognitive function. “These people were over 80 years old – the fact that there was not much difference between their levels of activity is not surprising. Much more relevant is the question of how you get there – i.e., how active you are at the ages of 40, 50 or even 60 years old.”
Remaining active is important
As a matter of fact, the superagers indicated that generally they had been more active than the control subjects during their middle years. “Attempting to stay physically fit is essential; even if it just means going for a walk or taking the stairs,” said Dr. Cellerino.
On average, the superagers also fared much better in tests of mental health than the control subjects. They suffered significantly less from depression and anxiety disorders. “Earlier studies suggest that depression and anxiety disorders may influence performance in memory tests across all ages and that they are risk factors for developing dementia,” said Dr. Cellerino.
To avoid mental health issues in later life, gerontologist Dr. Cellerino recommended remaining socially engaged and involved. “Depression and anxiety are commonly also a consequence of social isolation,” he said.
Potential genetic differences
Blood sample analyses demonstrated that the superagers exhibited lower concentrations of biomarkers for neurodegenerative diseases than the control group did. In contrast, there was no difference between the two groups in the prevalence of the apolipoprotein E ε4 (APOE ε4) allele, one of the most important genetic risk factors for Alzheimer’s disease. Nevertheless, Ms. Garo-Pascual and associates assume that genetics also play a role. They found that, despite the 89 variables employed, the algorithm could distinguish superagers from normal older adults only 66% of the time, which suggests that additional factors, such as genetic differences, must be in play.
Body and mind
Since this is an observational study, whether the determined factors have a direct effect on superaging cannot be ascertained, the authors wrote. However, the results are consistent with earlier findings.
“Regarding the management of old age, we actually haven’t learned anything more than what we already knew. But it does confirm that physical and mental function are closely entwined and that we must maintain both to age healthily,” Dr. Cellerino concluded.
This article was translated from the Medscape German Edition. A version appeared on Medscape.com.
FROM THE LANCET HEALTHY LONGEVITY
Study aims to better elucidate CCCA in men
In a retrospective study of 17 men with central centrifugal cicatricial alopecia (CCCA), most patients were Black and middle-aged, and the most common symptom was scalp pruritus.
Researchers retrospectively reviewed the medical records of 17 male patients with a clinical diagnosis of CCCA who were seen at University of Pennsylvania outpatient clinics between 2012 and 2022. They excluded patients who had no scalp biopsy or whose scalp biopsy features limited characterization. Temitayo Ogunleye, MD, of the department of dermatology, University of Pennsylvania, Philadelphia, led the study, published in the Journal of the American Academy of Dermatology.
CCCA, a type of scarring alopecia, most often affects women of African descent, and published data on the demographics, clinical findings, and medical histories of CCCA in men are limited, according to the authors.
The average age of the men was 43 years and 88.2% were Black, similar to women with CCCA, who tend to be middle-aged and Black. The four most common symptoms were scalp pruritus (58.8%), lesions (29.4%), pain or tenderness (23.5%), and hair thinning (23.5%). None of the men had type 2 diabetes (considered a possible CCCA risk factor), but 47.1% had a family history of alopecia. The four most common CCCA distributions were classic (47.1%), occipital (17.6%), patchy (11.8%), and posterior vertex (11.8%).
“Larger studies are needed to fully elucidate these relationships and explore etiology in males with CCCA,” the researchers wrote. “Nonetheless, we hope the data will prompt clinicians to assess for CCCA and risk factors in adult males with scarring alopecia.”
Limitations of the study included the retrospective, single-center design, and small sample size.
The researchers reported having no relevant financial relationships.
A version of this article first appeared on Medscape.com.
What causes sudden cardiac arrest in young people?
Sudden cardiac arrest is the term given to death that results from a cardiac cause and occurs within an hour of the onset of observed symptoms. If no witnesses were present, sudden cardiac arrest is assumed if the person had been in apparently good health in the 24 hours before cardiac death. Fatality usually results from sustained ventricular fibrillation or sustained ventricular tachycardia that leads to cardiac arrest.
Recognizing warning signs
Warning signs that should prompt physicians to consider an increased risk of sudden cardiac arrest include the following:
- Unexplained, brief fainting episodes that occur above all with stress, physical activity, or loud noises (for example, an alarm ringing)
- Seizures without a clear pathologic EEG result (for example, epilepsy)
- Unexplained accidents or car crashes
- Heart failure or pacemaker dependency before age 50 years
“These are all indications that could point to an underlying heart disease that should be investigated by a medical professional,” explained Silke Kauferstein, PhD, head of the Center for Sudden Cardiac Arrest and Familial Arrhythmia Syndrome at the Institute of Forensic Medicine of the University of Frankfurt am Main (Germany), in a podcast by the German Heart Foundation.
Sports rarely responsible
Sudden cardiac arrest has numerous causes. Sudden cardiac arrests in a professional sports environment always attract attention. Yet sports play a less important role in sudden cardiac arrest than is often assumed, even in young individuals.
“The incidence of sudden cardiac arrest is on average 0.7-3 per 100,000 sports players from all age groups,” said Thomas Voigtländer, MD, chair of the German Heart Foundation, in an interview. Men make up 95% of those affected, and 90% of these events occur during recreational sports.
Inherited disorders
The most significant risk factor for sudden cardiac arrest is age; it is often associated with coronary heart disease. This factor can be significant from as early as age 35 years. Among young individuals, sudden cardiac arrest is often a result of congenital heart diseases, such as hypertrophic cardiomyopathy or arrhythmogenic right ventricular cardiomyopathy. Diseases such as long QT syndrome and Brugada syndrome can also lead to sudden cardiac arrest.
Among young sports players who experience sudden cardiac arrest, the cause is often an overlooked hereditary factor. “Cardiac screening is recommended in particular for young, high-performance athletes from around 14 years old,” said Dr. Voigtländer, who is also a cardiologist and medical director of the Agaplesion Bethanien Hospital in Frankfurt.
Testing of family
“If sudden cardiac arrest or an unexplained sudden death occurs at a young age in the family, the primary care practitioner must be aware that this could be due to heart diseases that could affect the rest of the family,” said Dr. Voigtländer.
In these cases, primary care practitioners must connect the other family members to specialist outpatient departments that can test for genetic factors, he added. “Many of these genetic diseases can be treated successfully if they are diagnosed promptly.”
Lack of knowledge
Dr. Kauferstein, who runs such a specialist outpatient department, said, “unfortunately, many affected families do not know that they should be tested as well. This lack of knowledge can also lead to fatal consequences for relatives.”
For this reason, she believes that it is crucial to provide more information to the general population. Sudden cardiac arrest is often the first sign of an underlying heart disease in young, healthy individuals. “We do see warning signals in our in-depth testing of sudden cardiac arrest cases that have often been overlooked,” said Dr. Kauferstein.
This article was translated from the Medscape German Edition. A version appeared on Medscape.com.
Shiny Indurated Plaques on the Legs
The Diagnosis: Pretibial Myxedema
Histopathology showed superficial and deep mucin deposition with proliferation of fibroblasts and thin wiry collagen bundles that were consistent with a diagnosis of pretibial myxedema. The patient was treated with clobetasol ointment 0.05% twice daily for 3 months, followed by a trial of pentoxifylline 400 mg 3 times daily for 3 months. After this treatment failed, she was started on rituximab infusions of 1 g biweekly for 1 month, followed by 500 mg at 6 months, with marked improvement after the first 2 doses of 1 g.
Pretibial myxedema is an uncommon cutaneous manifestation of autoimmune thyroid disease, occurring in 1% to 5% of patients with Graves disease. It usually occurs in older adult women on the pretibial regions and less commonly on the upper extremities, face, and areas of prior trauma.1-3 Although typically asymptomatic, it can be painful and ulcerate.3 The clinical presentation consists of bilateral nonpitting edema with overlying indurated skin as well as flesh-colored, yellow-brown, violaceous, or peau d’orange papules and plaques.2,3 Lesions develop over months and often have been associated with hyperhidrosis and hypertrichosis.2 Many variants have been identified including nodular, plaquelike, diffuse swelling (ie, nonpitting edema), tumor, mixture, polypoid, and elephantiasis; severe cases with acral involvement are termed thyroid acropachy.1-3 Pathogenesis likely involves the activation of thyrotropin receptors on fibroblasts by the circulating thyrotropin autoantibodies found in Graves disease. Activated fibroblasts upregulate glycosaminoglycan production, which osmotically drives the accumulation of dermal and subdermal fluid.1,3
This diagnosis should be considered in any patient with pretibial edema or edema in areas of trauma. Graves disease most commonly is diagnosed 1 to 2 years prior to the development of pretibial myxedema; other extrathyroidal manifestations, most commonly ophthalmopathies, almost always are found in patients with pretibial myxedema. If a diagnosis of Graves disease has not been established, thyroid studies, including thyrotropin receptor antibody serum levels, should be obtained. Histopathology showing increased mucin in the dermis and increased fibroblasts can aid in diagnosis.2,3
The differential diagnosis includes inflammatory dermatoses, such as stasis dermatitis and lipodermatosclerosis. Stasis dermatitis is characterized by lichenified yellow-brown plaques that present on the lower extremities; lipodermatosclerosis then can develop and present as atrophic sclerotic plaques with a champagne bottle–like appearance. Necrobiosis lipoidica demonstrates atrophic, shiny, yellow plaques with telangiectases and ulcerations. Hypertrophic lichen planus presents with hyperkeratotic hyperpigmented plaques on the shins.1,2 Other diseases of cutaneous mucin deposition, namely scleromyxedema, demonstrate similar physical findings but more commonly are located on the trunk, face, and dorsal hands rather than the lower extremities.1-3
Treatment of pretibial myxedema is difficult; normalization of thyroid function, weight reduction, and compression stockings can help reduce edema. Medical therapies aim to decrease glycosaminoglycan production by fibroblasts. First-line treatment includes topical steroids under occlusion, and second-line therapies include intralesional steroids, systemic corticosteroids, pentoxifylline, and octreotide.2,3 Therapies for refractory disease include plasmapheresis, surgical excision, radiotherapy, and intravenous immunoglobulin; more recent studies also endorse the use of isotretinoin, intralesional hyaluronidase, and rituximab.2,4 Success also has been observed with the insulin-like growth factor 1 receptor inhibitor teprotumumab in active thyroid eye disease, in which the insulin-like growth factor 1 receptor is overexpressed by fibroblasts. Given the similar pathogenesis of thyroid ophthalmopathy and other extrathyroidal manifestations, teprotumumab is a promising option for refractory cases of pretibial myxedema and has led to disease resolution in several patients.4
- Fatourechi V, Pajouhi M, Fransway AF. Dermopathy of Graves disease (pretibial myxedema). Review of 150 cases. Medicine (Baltimore). 1994;73:1-7. doi:10.1097/00005792-199401000-00001
- Ai J, Leonhardt JM, Heymann WR. Autoimmune thyroid diseases: etiology, pathogenesis, and dermatologic manifestations. J Am Acad Dermatol. 2003;48:641-662. doi:10.1067/mjd.2003.257
- Schwartz KM, Fatourechi V, Ahmed DDF, et al. Dermopathy of Graves’ disease (pretibial myxedema): long-term outcome. J Clin Endocrinol Metab. 2002;87:438-446. doi:10.1210/jcem.87.2.8220
- Varma A, Rheeman C, Levitt J. Resolution of pretibial myxedema with teprotumumab in a patient with Graves disease. JAAD Case Reports. 2020;6:1281-1282. doi:10.1016/j.jdcr.2020.09.003
The Diagnosis: Pretibial Myxedema
Histopathology showed superficial and deep mucin deposition with proliferation of fibroblasts and thin wiry collagen bundles that were consistent with a diagnosis of pretibial myxedema. The patient was treated with clobetasol ointment 0.05% twice daily for 3 months, followed by a trial of pentoxifylline 400 mg 3 times daily for 3 months. After this treatment failed, she was started on rituximab infusions of 1 g biweekly for 1 month, followed by 500 mg at 6 months, with marked improvement after the first 2 doses of 1 g.
Pretibial myxedema is an uncommon cutaneous manifestation of autoimmune thyroid disease, occurring in 1% to 5% of patients with Graves disease. It usually occurs in older adult women on the pretibial regions and less commonly on the upper extremities, face, and areas of prior trauma.1-3 Although typically asymptomatic, it can be painful and ulcerate.3 The clinical presentation consists of bilateral nonpitting edema with overlying indurated skin as well as flesh-colored, yellow-brown, violaceous, or peau d’orange papules and plaques.2,3 Lesions develop over months and often have been associated with hyperhidrosis and hypertrichosis.2 Many variants have been identified including nodular, plaquelike, diffuse swelling (ie, nonpitting edema), tumor, mixture, polypoid, and elephantiasis; severe cases with acral involvement are termed thyroid acropachy.1-3 Pathogenesis likely involves the activation of thyrotropin receptors on fibroblasts by the circulating thyrotropin autoantibodies found in Graves disease. Activated fibroblasts upregulate glycosaminoglycan production, which osmotically drives the accumulation of dermal and subdermal fluid.1,3
This diagnosis should be considered in any patient with pretibial edema or edema in areas of trauma. Graves disease most commonly is diagnosed 1 to 2 years prior to the development of pretibial myxedema; other extrathyroidal manifestations, most commonly ophthalmopathies, almost always are found in patients with pretibial myxedema. If a diagnosis of Graves disease has not been established, thyroid studies, including thyrotropin receptor antibody serum levels, should be obtained. Histopathology showing increased mucin in the dermis and increased fibroblasts can aid in diagnosis.2,3
The differential diagnosis includes inflammatory dermatoses, such as stasis dermatitis and lipodermatosclerosis. Stasis dermatitis is characterized by lichenified yellow-brown plaques on the lower extremities; lipodermatosclerosis then can develop, presenting as atrophic sclerotic plaques with a champagne bottle–like appearance. Necrobiosis lipoidica demonstrates atrophic, shiny, yellow plaques with telangiectases and ulcerations. Hypertrophic lichen planus presents with hyperkeratotic hyperpigmented plaques on the shins.1,2 Other diseases of cutaneous mucin deposition, namely scleromyxedema, demonstrate similar physical findings but more commonly are located on the trunk, face, and dorsal hands rather than the lower extremities.1-3
Treatment of pretibial myxedema is difficult; normalization of thyroid function, weight reduction, and compression stockings can help reduce edema. Medical therapies aim to decrease glycosaminoglycan production by fibroblasts. First-line treatment includes topical steroids under occlusion, and second-line therapies include intralesional steroids, systemic corticosteroids, pentoxifylline, and octreotide.2,3 Therapies for refractory disease include plasmapheresis, surgical excision, radiotherapy, and intravenous immunoglobulin; more recent studies also endorse the use of isotretinoin, intralesional hyaluronidase, and rituximab.2,4 Success also has been observed with teprotumumab, an insulin-like growth factor 1 receptor inhibitor, in active thyroid eye disease, in which insulin-like growth factor 1 receptor is overexpressed by fibroblasts. Because thyroid ophthalmopathy shares its pathogenesis with other extrathyroidal manifestations, teprotumumab is a promising option for refractory cases of pretibial myxedema and has led to disease resolution in several patients.4
1. Fatourechi V, Pajouhi M, Fransway AF. Dermopathy of Graves disease (pretibial myxedema): review of 150 cases. Medicine (Baltimore). 1994;73:1-7. doi:10.1097/00005792-199401000-00001
2. Ai J, Leonhardt JM, Heymann WR. Autoimmune thyroid diseases: etiology, pathogenesis, and dermatologic manifestations. J Am Acad Dermatol. 2003;48:641-662. doi:10.1067/mjd.2003.257
3. Schwartz KM, Fatourechi V, Ahmed DDF, et al. Dermopathy of Graves’ disease (pretibial myxedema): long-term outcome. J Clin Endocrinol Metab. 2002;87:438-446. doi:10.1210/jcem.87.2.8220
4. Varma A, Rheeman C, Levitt J. Resolution of pretibial myxedema with teprotumumab in a patient with Graves disease. JAAD Case Rep. 2020;6:1281-1282. doi:10.1016/j.jdcr.2020.09.003
A 70-year-old woman presented with pain and swelling in both legs of many years’ duration. She had no history of skin disease. Physical examination revealed shiny indurated plaques on the legs, ankles, and toes with limited range of motion in the ankles (top). Marked thickening of the hands and index fingers also was noted (bottom). A punch biopsy of the distal pretibial region was performed.