Limited English proficiency patients and the hospitalist

America! America! God shed His grace on thee

And crown Thy good with brotherhood

From sea to shining sea!

I fondly remember singing "America the Beautiful" with my classmates when I was a little girl. America has grown by leaps and bounds since my childhood – the pulse of the nation as well as its makeup. One of my fondest memories as a child was traveling to New York. We had a layover in Washington and the airport was filled with people of various skin tones speaking all sorts of languages I had never been exposed to before. It was very exciting! It was my first truly multicultural experience.

Funny, I ultimately relocated to the D.C. area, and my neighbors are literally from all over the world: India, Thailand, Jamaica, Africa, China – and it doesn’t stop there. Naturally, the patient population I serve also reflects this great diversity. As the country becomes more diverse each and every day, we, as practitioners, must be able to communicate effectively with our entire patient base, not just the ones who speak English fluently.

This is quite a challenge. Yes, most hospitals have a language line or an on-call interpreter to help out, but I believe we also need to take some responsibility for improving our ability to communicate as well. While I am not advocating trying to master a new language, or two or three, we can all learn a few basic terms of the foreign languages we encounter most.

Consider that language lines do malfunction. Family members are sometimes not present. And interpreters may not always be available at the drop of a hat. Technology, though, is ever burgeoning. It’s easy to download a smartphone app, such as Medical Spanish: Healthcare Phrasebook with Audio. Google Translate can be helpful for scores of languages, though I would use this site with caution when it comes to patient care.

There is a slew of reputable patient information written in different languages available on the Internet as well.

The Agency for Healthcare Research and Quality offers a guide, Improving Patient Safety Systems for Patients with Limited English Proficiency: A Guide for Hospitals. The guide notes that approximately 57 million people speak a language other than English at home and 25 million are defined as limited-English-proficient (LEP). LEP patients were noted to have longer lengths of stay in the hospital and were at greater risk for line infections, surgical infections, falls, and pressure ulcers. They are more likely to be readmitted, as well.

Although it is always best to have a qualified interpreter to help us care for LEP patients, there may be times when one is simply unavailable in an acceptable period of time. Friends and family members can help fill some of the gaps in those instances, but it never hurts for the clinician to know a few vital words as well, such as pain or shortness of breath.

America’s culture is ever evolving, and we must evolve with it. Being able to provide high-quality care to all of our patients is our goal. Standards are important, but sometimes thinking out of the box can be effective as well.

Dr. Hester is a hospitalist with Baltimore-Washington Medical Center who has a passion for empowering patients to partner in their health care. She is the creator of the Patient Whiz, a patient-engagement app for iOS. Reach her at [email protected].

Cryolipolysis

Cryolipolysis has emerged as a popular noninvasive treatment option for reducing localized areas of fat. The technology was developed on the premise that cold temperatures can selectively damage subcutaneous fat while leaving the overlying skin unharmed, as demonstrated by popsicle panniculitis. In this process, when subcutaneous fat is cooled below body temperature but above freezing, the fat undergoes cell death followed by a local inflammatory response, a localized panniculitis, that gradually results in a reduction of fat in that area.

Dr. Dieter Manstein and Dr. R. Rox Anderson pioneered the concept of cryolipolysis in 2008. The technology was approved by the Food and Drug Administration in 2010 in the form of the Zeltiq device. The device has different-sized handpieces with a vacuum connection that, once applied to the skin, cool the subcutaneous fat without damaging the top layers of skin. Each area is treated for 1 hour, and a single treatment is expected to reduce the fat cells in the treated area by 20%-30%. Numbness is a typical response after treatment, but some patients may also experience bruising and discomfort; these effects typically last no longer than 2-3 weeks.

If discomfort occurs in my patients, I find they report it more often in the lower abdomen than the love handles. Paradoxical adipose hyperplasia was recently reported for the first time in a male patient in his 40s (in the lower abdomen) (JAMA Dermatol. 2014;150:317-9).

In my experience, there is no difference in efficacy or adverse events in patients of different ethnicities. One study found no difference in efficacy or adverse events of cryolipolysis in Chinese patients (Lasers Surg. Med. 2012;44:125-30), but no other studies of cryolipolysis in specific ethnic groups have been published.

I was involved in the clinical trials for this device prior to FDA approval, in which one love handle was treated on each patient and the other side served as a control. Based on this experience and my experience using the device in practice, it is not a replacement for abdominoplasty or liposuction, but it is a useful technology in the right candidate. The patients who seem to do best are those who are 10-15 pounds from their goal weight, are not obese (body mass index less than 30 kg/m2), and have a discrete bulge (typically love handles or abdomen) that they can't get rid of with good diet and exercise alone. Massage for a few minutes after treatment seems to increase efficacy (Lasers Surg. Med. 2014;46:20-6).

Some patients may require more than one treatment to achieve their desired results, but I recommend waiting at least 2-3 months before opting for additional treatment. Choosing the right candidates and providing patients with realistic expectations seem to be the most helpful in this process.

Dr. Wesley practices dermatology in Beverly Hills, Calif.

Paroxetine mesylate 7.5 mg found to be a safe alternative to hormone therapy for menopausal women with hot flashes

The US Food and Drug Administration (FDA) recently approved paroxetine mesylate 7.5 mg (Brisdelle) for the treatment of moderate to severe menopausal vasomotor symptoms (VMS). Paroxetine, formerly known as low-dose mesylate salt of paroxetine (LDMP), is a nonhormonal agent, which makes it an alternative hot flash therapy for menopausal women who cannot or do not want to use hormones. Paroxetine mesylate (Pexeva, Brisdelle) and paroxetine hydrochloride (Paxil, and generics) are two salts of the same active compound (paroxetine). They may have somewhat different metabolism.

The efficacy and safety of paroxetine mesylate, a selective serotonin-reuptake inhibitor (SSRI), were evaluated individually in three Phase 2 or 3 multicenter, double-blind, randomized, placebo-controlled trials, published by James Simon, MD, from George Washington University School of Medicine, and colleagues,1 and Joffe and colleagues.2 Most treatment-emergent adverse events (TEAEs) in the individual studies were mild or moderate in severity, with minimal acute discontinuation symptoms reported on treatment cessation.

In a study3 presented April 29 at the 2014 Annual Clinical Meeting of The American College of Obstetricians and Gynecologists (ACOG) in Chicago, Illinois, Simon and colleagues further reported on the overall tolerability and safety profile of paroxetine mesylate 7.5 mg using pooled data from the three randomized trials. In their post hoc analyses, they specifically examined the emergence of adverse events linked to the use of SSRIs when prescribed for psychiatric disorders at therapeutic doses higher than 7.5 mg. The adverse events examined included weight gain, decreased libido, and sleep disturbance, as well as suicidality, abnormal bleeding, and bone fracture.

Study details. A total of 1,276 postmenopausal women (approximately 70% white) aged 40 years or older (median age, 54 years) with moderate to severe VMS (7−8 hot flashes/day; 50−60 hot flashes/wk) received either paroxetine mesylate or placebo at bedtime for 8 (Phase 2), 12 (Phase 3), or 24 (Phase 3) weeks. The study was sponsored by Noven Therapeutics, LLC.

Treatment-emergent adverse events and discontinuation
About half (50.4%) of the 635 women in the paroxetine group and 47.0% of the 641 women in the placebo group reported at least one TEAE. The most commonly reported TEAEs in the paroxetine group (reported in ≥2% of patients and with a twofold or higher frequency than in the placebo group) were nausea, fatigue, and dizziness.

TEAEs that were determined to be related to the study drug were reported in 19.5% of the paroxetine group and in 17.6% of the placebo group. The most frequent of these TEAEs were fatigue, nausea, dizziness, and diarrhea.

Severe AEs were reported in 3.9% and 3.6% of women in the paroxetine and placebo groups, respectively, although the investigator determined that less than 1% were related to paroxetine treatment.

TEAEs that led to discontinuation occurred in 4.7% of paroxetine-treated women and in 3.7% of placebo-treated women, although the incidence of study drug interruptions from TEAEs was similar (0.9%) between treatments. The most frequent adverse reactions leading to discontinuation in the paroxetine arm were abdominal pain (0.3%), attention disturbances (0.3%), headache (0.3%), and suicidal ideation (0.3%).

Of the most common AEs, nausea occurred mainly within the first 4 weeks of treatment; fatigue occurred primarily within the first week of treatment and decreased in frequency with continued therapy. Incidences and types of AEs that began after 12 weeks were similar to those that began during the first 12 weeks of treatment.

AEs related to SSRIs not found to be problematic
No differences were found between groups with regard to TEAEs related to weight, libido, or sleep. No clinically meaningful changes in laboratory values, vital signs, or ECGs were observed with either group. No clinically important findings on abnormal bleeding, bone fracture, or suicidality were evident in the paroxetine arm.

In the Phase 3 studies:

  • One suicide attempt was reported in the paroxetine group in the 24-week study, but was determined by investigators to be unrelated to treatment

  • Incidence rates of gastrointestinal and other bleeding events were similar between groups

  • Five bone fractures were reported: one in the paroxetine group and four among three participants in the placebo group.

One death occurred in the 12-week Phase 3 study due to acute respiratory failure with evidence of hypertension-mediated pulmonary edema and hypertensive cardiovascular disease. The investigator did not consider the death to be related to the study drug.

Study conclusion
The authors concluded that paroxetine 7.5 mg had favorable tolerability in menopausal women with moderate to severe VMS.

“Paroxetine 7.5 mg offers a nonhormonal treatment option for women who seek treatment for moderate to severe hot flashes associated with menopause,” said Dr. Simon.

Tell us what you think! Send your Letter to the Editor: [email protected]

References

1. Simon JA, et al. Low dose paroxetine 7.5 mg for menopausal vasomotor symptoms: two randomized controlled trials. Menopause. 2013;20(10):1027–1035.

2. Joffe H, et al. Low-dose mesylate salt of paroxetine (LDMP) in treatment of vasomotor symptoms (VMS) in menopause. Presented at: 2012 Annual Clinical Meeting of The American College of Obstetricians and Gynecologists; May 7, 2012; San Diego, CA. Poster 43.

3. Simon JA, Portman DJ, Kazempour K, Mekonnen H, Bhaskar S, Lippman J. Safety profile of paroxetine 7.5 mg in women with moderate to severe vasomotor symptoms. Poster presented at: 2014 Annual Clinical Meeting of The American College of Obstetricians and Gynecologists (ACOG); April 26–30, 2014; Chicago, IL.

Author and Disclosure Information

Deborah Reale, Managing Editor

Issue
OBG Management - 26(6)

The US Food and Drug Administration (FDA) recently approved paroxetine mesylate 7.5 mg (Brisdelle) for the treatment of moderate to severe menopausal vasomotor symptoms (VMS). Paroxetine, formerly known as low-dose mesylate salt of paroxetine (LDMP), is a nonhormonal agent, which makes it an alternative hot flash therapy for menopausal women who cannot or do not want to use hormones. Paroxetine mesylate (Pexeva, Brisdelle) and paroxetine hydrochloride (Paxil, and generics) are two salts of the same active compound (paroxetine). They may have somewhat different metabolism.

The efficacy and safety of paroxetine mesylate, a selective serotonin-reuptake inhibitor (SSRI), were evaluated individually in three Phase 2 or 3 multicenter, double-blind, randomized, placebo-controlled trials, published by James Simon, MD, from George Washington University School of Medicine, and colleagues,1 and Joffe and colleagues.2 Most treatment-emergent adverse events (TEAEs) in the individual studies were mild or moderate in severity, with minimal acute discontinuation symptoms reported on treatment cessation.

In a study3 presented April 29, at the 2014 Annual Clinical Meeting of The American College of Obstetricians and Gynecologists (ACOG) in Chicago, Illinois, Simon and colleagues further reported on the overall tolerability and safety profile of paroxetine mesylate 7.5 mg using pooled data from the three randomized trials. In their post-hoc analyses, they specifically examined the emergence of adverse events linked to the use of SSRIs when prescribed for psychiatric disorders at therapeutically higher doses than 7.5 mg. The adverse events focused on included weight gain, decreased libido, and sleep disturbance, as well as suicidality, abnormal bleeding, and bone fracture.

Study details. A total of 1,276 postmenopausal women (approximately 70% white) aged 40 years or older (median age, 54 years) with moderate to severe VMS (7−8 hot flashes/day; 50−60 hot flashes/wk) received either paroxetine mesylate or placebo at bedtime for 8 (Phase 2), 12 (Phase 3), or 24 (Phase 3) weeks. The study was sponsored by Noven Therapeutics, LLC.

Treatment-emergent adverse events and discontinuation
About half (50.4%) of the 635 women in the paroxetine group and 47.0% of the 641 women in the placebo group reported at least one TEAE. Most commonly reported TEAEs in the paroxetine group (reported in ≥2% of patients and with a twofold or higher frequency than in the placebo group) were nausea, fatigue, and dizziness.

TEAEs that were determined to be related to the study drug were reported in 19.5% in the paroxetine group and in 17.6% in the placebo group. These most frequent TEAEs were fatigue, nausea, dizziness, and diarrhea.

Severe AEs were reported in 3.9% and 3.6% of women in the paroxetine and placebo groups, respectively, although the investigator determined that less than 1% were related to paroxetine treatment.

TEAEs that led to discontinuation occurred in 4.7% of paroxetine-treated women and in 3.7% of placebo-treated women, although the incidence of study drug interruptions from TEAEs was similar (0.9%) between treatments. The most frequent adverse reactions leading to discontinuation in the paroxetine arm were abdominal pain (0.3%), attention disturbances (0.3%), headache (0.3%), and suicidal ideation (0.3%).

Of the most common AEs, nausea occurred mainly within the first 4 weeks of treatment; fatigue occurred primarily within the first week of treatment and decreased in frequency with continued therapy. Incidences and types of AEs that began after 12 weeks were similar to those that began during the first 12 weeks of treatment.

AEs related to SSRIs not found to be problematic
No differences were found between groups with regard to TEAEs related to weight, libido, or sleep. No clinically meaningful changes in laboratory values, vital signs, or ECGs were observed with either group. No clinically important findings on abnormal bleeding, bone fracture, or suicidality were evident in the paroxetine arm.

In the Phase 3 studies:

  • One suicide attempt was reported in the paroxetine group in the 24-week study, but was determined by investigators to be unrelated to treatment

  • Incidence rates of gastrointestinal and other bleeding events were similar between groups

  • Five bone fractures were reported: One in the paroxetine group and four among three participants in the placebo group.

One death occurred in the 12-week Phase 3 study due to acute respiratory failure with evidence of hypertension-mediated pulmonary edema and hypertensive cardiovascular disease. The investigator did not consider the death to be related to the study drug.

Study conclusion
The authors concluded that paroxetine 7.5 mg had favorable tolerability in menopausal women with moderate to severe VMS.

“Paroxetine 7.5 mg offers a nonhormonal treatment option for women who seek treatment for moderate to severe hot flashes associated with menopause,” said Dr. Simon.

 

 

Tell us what you think! Send your Letter to the Editor: [email protected]

The US Food and Drug Administration (FDA) recently approved paroxetine mesylate 7.5 mg (Brisdelle) for the treatment of moderate to severe menopausal vasomotor symptoms (VMS). Paroxetine, formerly known as low-dose mesylate salt of paroxetine (LDMP), is a nonhormonal agent, which makes it an alternative hot flash therapy for menopausal women who cannot or do not want to use hormones. Paroxetine mesylate (Pexeva, Brisdelle) and paroxetine hydrochloride (Paxil, and generics) are two salts of the same active compound (paroxetine). They may have somewhat different metabolism.

The efficacy and safety of paroxetine mesylate, a selective serotonin-reuptake inhibitor (SSRI), were evaluated individually in three Phase 2 or 3 multicenter, double-blind, randomized, placebo-controlled trials, published by James Simon, MD, from George Washington University School of Medicine, and colleagues,1 and Joffe and colleagues.2 Most treatment-emergent adverse events (TEAEs) in the individual studies were mild or moderate in severity, with minimal acute discontinuation symptoms reported on treatment cessation.

In a study3 presented April 29 at the 2014 Annual Clinical Meeting of The American College of Obstetricians and Gynecologists (ACOG) in Chicago, Illinois, Simon and colleagues further reported on the overall tolerability and safety profile of paroxetine mesylate 7.5 mg using pooled data from the three randomized trials. In their post hoc analyses, they specifically examined the emergence of adverse events linked to SSRIs when prescribed for psychiatric disorders at therapeutic doses higher than 7.5 mg. The adverse events of interest included weight gain, decreased libido, and sleep disturbance, as well as suicidality, abnormal bleeding, and bone fracture.

Study details. A total of 1,276 postmenopausal women (approximately 70% white) aged 40 years or older (median age, 54 years) with moderate to severe VMS (7−8 hot flashes/day; 50−60 hot flashes/wk) received either paroxetine mesylate or placebo at bedtime for 8 (Phase 2), 12 (Phase 3), or 24 (Phase 3) weeks. The study was sponsored by Noven Therapeutics, LLC.

Treatment-emergent adverse events and discontinuation
About half (50.4%) of the 635 women in the paroxetine group and 47.0% of the 641 women in the placebo group reported at least one TEAE. The most commonly reported TEAEs in the paroxetine group (reported in ≥2% of patients and with a twofold or higher frequency than in the placebo group) were nausea, fatigue, and dizziness.

TEAEs judged to be related to the study drug were reported in 19.5% of the paroxetine group and 17.6% of the placebo group; the most frequent of these were fatigue, nausea, dizziness, and diarrhea.

Severe AEs were reported in 3.9% and 3.6% of women in the paroxetine and placebo groups, respectively, although the investigator determined that less than 1% were related to paroxetine treatment.

TEAEs that led to discontinuation occurred in 4.7% of paroxetine-treated women and in 3.7% of placebo-treated women, although the incidence of study drug interruptions from TEAEs was similar (0.9%) between treatments. The most frequent adverse reactions leading to discontinuation in the paroxetine arm were abdominal pain (0.3%), attention disturbances (0.3%), headache (0.3%), and suicidal ideation (0.3%).

Of the most common AEs, nausea occurred mainly within the first 4 weeks of treatment; fatigue occurred primarily within the first week of treatment and decreased in frequency with continued therapy. Incidences and types of AEs that began after 12 weeks were similar to those that began during the first 12 weeks of treatment.

AEs related to SSRIs not found to be problematic
No differences were found between groups with regard to TEAEs related to weight, libido, or sleep. No clinically meaningful changes in laboratory values, vital signs, or ECGs were observed in either group. No clinically important findings on abnormal bleeding, bone fracture, or suicidality were evident in the paroxetine arm.

In the Phase 3 studies:

  • One suicide attempt was reported in the paroxetine group in the 24-week study, but was determined by investigators to be unrelated to treatment

  • Incidence rates of gastrointestinal and other bleeding events were similar between groups

  • Five bone fractures were reported: one in the paroxetine group and four among three participants in the placebo group.

One death occurred in the 12-week Phase 3 study due to acute respiratory failure with evidence of hypertension-mediated pulmonary edema and hypertensive cardiovascular disease. The investigator did not consider the death to be related to the study drug.

Study conclusion
The authors concluded that paroxetine 7.5 mg had favorable tolerability in menopausal women with moderate to severe VMS.

“Paroxetine 7.5 mg offers a nonhormonal treatment option for women who seek treatment for moderate to severe hot flashes associated with menopause,” said Dr. Simon.


Tell us what you think! Send your Letter to the Editor: [email protected]

References

1. Simon JA, et al. Low dose paroxetine 7.5 mg for menopausal vasomotor symptoms: two randomized controlled trials. Menopause. 2013;20(10):1027–1035.

2. Joffe H, et al. Low-dose mesylate salt of paroxetine (LDMP) in treatment of vasomotor symptoms (VMS) in menopause. Presented at: 2012 Annual Clinical Meeting of The American College of Obstetricians and Gynecologists; May 7, 2012; San Diego, CA. Poster 43.

3. Simon JA, Portman DJ, Kazempour K, Mekonnen H, Bhaskar S, Lippman J. Safety profile of paroxetine 7.5 mg in women with moderate to severe vasomotor symptoms. Poster presented at: 2014 Annual Clinical Meeting of The American College of Obstetricians and Gynecologists (ACOG); April 26–30, 2014; Chicago, IL.


Issue
OBG Management - 26(6)
Display Headline
Paroxetine mesylate 7.5 mg found to be a safe alternative to hormone therapy for menopausal women with hot flashes

Imatinib appears safe, effective for the long haul

Article Type
Changed
Display Headline
Imatinib appears safe, effective for the long haul

CHICAGO – After a decade on therapy with imatinib, a majority of patients with chronic myeloid leukemia will experience an adverse drug reaction, but most reactions are mild and manageable, according to results from a study presented at the annual meeting of the American Society of Clinical Oncology.

Of 1,375 patients with CML who received imatinib (Gleevec) monotherapy at some point, 1,018 (74%) had nonhematologic toxicities sometime during therapy, but only 199 (14%) had grade 3 or 4 toxicities, and there were no deaths attributed to imatinib, reported Dr. Rüdiger Hehlmann of the University of Heidelberg, Germany, and his colleagues.

Adverse drug reactions were manageable even when imatinib was combined with interferon-alfa (IFN-alfa), the investigators from the German CML Study Group reported in a poster at the meeting.

"After 10 years, imatinib continues to be an excellent choice for most patients with CML," they wrote.

In the 13 years that have elapsed since imatinib was approved in the United States as the first-in-class tyrosine kinase inhibitor, second-generation TKIs and other targeted agents have emerged, drawing attention to the safety of the older regimen.

The investigators evaluated long-term follow-up data and analyzed adverse drug reaction data for 1,501 patients treated with imatinib monotherapy in doses of 400 or 800 mg/day, as well as imatinib 400 mg in combination with IFN-alfa.

At the most recent evaluation, in November 2013, 164 patients had died, 1,003 were still on imatinib, 275 had been switched to a second-generation TKI, and 106 underwent bone marrow transplant (some patients received more than one therapy, accounting for the difference in total numbers).

The median follow-up time was 6.5 years, with some patients on study for as long as 11.5 years.

The probability of 10-year survival was 84%, and of 10-year progression-free survival was 81%.

An analysis of survival by molecular response rates showed an overall survival rate of 89% for those who achieved a major molecular response (MR, defined as a BCR-ABL RNA level of 0.1% or less), and 74% for those who achieved MR 4.5 (a 4.5-log10 reduction or greater in BCR-ABL transcripts).
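These molecular-response thresholds are defined on a log10 scale relative to a standardized baseline, and the absolute transcript levels they imply are easy to compute. A minimal sketch (the function name and the 100% baseline convention are illustrative assumptions, not taken from the study):

```python
def bcr_abl_percent(log_reduction, baseline_percent=100.0):
    """BCR-ABL transcript level implied by a given log10 reduction
    from the standardized 100% baseline."""
    return baseline_percent * 10 ** (-log_reduction)

# Major molecular response (a 3-log reduction) corresponds to <= 0.1%:
mmr_threshold = bcr_abl_percent(3.0)    # 0.1

# MR 4.5 (a 4.5-log reduction) corresponds to roughly 0.0032%:
mr45_threshold = bcr_abl_percent(4.5)
```

MR 4.5 thus requires transcript levels more than 30-fold lower than the major-molecular-response cutoff.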

The 8-year probabilities for all grades of adverse events among the patients who received imatinib monotherapy were 41% for edema or fluid overload, 38% for gastrointestinal toxicities, 25% for myalgia/arthralgia, 20% for rash, 17% for musculoskeletal events, 17% for fatigue, 11% for neurological toxicities, and 10% for flulike symptoms.

Five patients had grade 2 or 3 peripheral arterial occlusive disease, but it was not clear whether these events were associated with imatinib.

For most patients the first adverse drug reaction occurred within 3 years of starting on imatinib, with the frequency of reactions decreasing thereafter.

Dr. Hehlmann disclosed receiving research support from Novartis, marketer of imatinib.

Article Source

AT THE ASCO ANNUAL MEETING 2014


Vitals

Key clinical finding: Imatinib is safe and effective for treating patients with chronic myeloid leukemia over the course of a decade.

Major finding: Of 1,375 patients with CML who received imatinib (Gleevec) monotherapy, 74% had nonhematologic toxicities sometime during therapy, but only 199 (14%) had grade 3 or 4 toxicities.

Data source: Review of prospectively collected 10-year follow-up data from a phase II trial of imatinib in 1,501 patients with CML.

Disclosures: Dr. Hehlmann disclosed receiving research support from Novartis, marketer of imatinib.

Screen for Barrett’s in all with central obesity?

Article Type
Changed
Display Headline
Screen for Barrett’s in all with central obesity?

CHICAGO – The prevalence of erosive esophagitis and Barrett’s esophagus is comparable in individuals regardless of whether they have gastroesophageal reflux symptoms, according to a population-based study.

"These results directly challenge the established GERD-based Barrett’s esophagus screening paradigm and provide strong rationale for using central obesity in Caucasian males with or without symptomatic GERD as criteria for Barrett’s esophagus screening," Dr. Nicholas R. Crews said at the annual Digestive Disease Week.

"In this study, waist-hip ratio was our surrogate marker for central obesity. It’s easily obtainable and usable in clinical practice," noted Dr. Crews of the Mayo Clinic in Rochester, Minn.

Dr. Nicholas R. Crews

Barrett’s esophagus is the precursor lesion and principal risk factor for esophageal adenocarcinoma, a malignancy whose incidence in the United States and other developed nations is increasing at an alarming rate. Improved methods of screening for esophageal adenocarcinoma are sorely needed, he added.

Dr. Crews presented a study in which a representative sample of Olmsted County, Minn., residents over age 50 with no history of endoscopy were randomized to screening for Barrett’s esophagus by one of three methods: sedated endoscopy in the GI suite, unsedated transnasal endoscopy in the clinic, or unsedated transnasal endoscopy in a Mayo mobile research van.

Participants’ mean age was 70 years, 46% were men, 206 of the 209 were white, and only one-third of subjects had GERD symptoms.

The prevalence of esophagitis grades A-C proved to be 32% in the symptomatic GERD group and similar at 29% in those without GERD symptoms. Similarly, Barrett’s esophagus was identified in 8.7% of the symptomatic GERD group and 7.9% of subjects without GERD symptoms. Dysplasia was present in 1.4% of each group. The mean length of the esophageal segment with Barrett’s esophagus was 2.4 cm in patients with GERD symptoms and not significantly different in those who were asymptomatic.

Three risk factors proved significant as predictors of esophageal injury as defined by esophagitis or Barrett’s esophagus: male sex, central obesity as defined by a waist-hip ratio greater than 0.9, and consumption of more than two alcoholic drinks per day. Age, smoking status, and body mass index were not predictive.
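Because the waist-hip ratio is the study's surrogate marker, the screening criterion reduces to a single division and a comparison. A minimal sketch (function names are illustrative; the 0.9 cutoff is the one reported above, and measurements can be in any consistent unit):

```python
def waist_hip_ratio(waist, hip):
    """Waist circumference divided by hip circumference."""
    return waist / hip

def centrally_obese(waist, hip, cutoff=0.9):
    """True if the waist-hip ratio exceeds the study's cutoff (>0.9)."""
    return waist_hip_ratio(waist, hip) > cutoff

centrally_obese(95, 100)   # ratio 0.95 -> True
centrally_obese(88, 100)   # ratio 0.88 -> False
```

With the cohort's mean ratios, asymptomatic subjects with positive endoscopic findings (0.95) fall above the cutoff, while subjects with no esophagitis or Barrett's esophagus (0.89) fall just below it.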

The mean waist-to-hip ratio was 0.89 in screened subjects with no esophagitis or Barrett’s esophagus, 0.91 in those with positive endoscopic findings and symptomatic gastroesophageal reflux, and 0.95 in those with positive findings who were asymptomatic.

Audience members expressed skepticism about the notion of routinely screening for Barrett’s esophagus in individuals with central obesity in an era of an unprecedented obesity epidemic.

For example, Dr. Joel E. Richter, who described himself as "an anti-Barrett’s person," commented that he believes gastroenterologists are already overdiagnosing and overtreating the condition, needlessly alarming many patients.

In women, particularly, it’s increasingly clear that Barrett’s esophagus only rarely develops into esophageal adenocarcinoma, he said.

"Others have said that women with Barrett’s esophagus are as likely to get esophageal cancer as men are to get breast cancer," commented Dr. Richter, professor of internal medicine and director of the center for swallowing disorders at the University of South Florida, Tampa.

Another audience member told Dr. Crews, "I totally agree with you that we miss most people with Barrett’s by our current screening process. The problem is, it’s unclear whether it’s important or not to find them. To extrapolate from your study and say that anyone with central obesity ought to be screened for [Barrett’s esophagus] is a little strong, I think."

"It’s very controversial," Dr. Crews agreed. "It’s something we continue to struggle with."

He reported having no relevant financial conflicts.

[email protected]

Article Source

AT DDW 2014


Vitals

Key clinical point: The current recommended strategy of screening for Barrett’s esophagus on the basis of symptoms of gastroesophageal reflux is called into question by a new study showing the esophageal cancer precursor lesion is just as common in screened asymptomatic individuals.

Major finding: The mean waist-to-hip ratio was 0.89 in screened subjects with no esophagitis or Barrett’s esophagus, 0.91 in those with positive endoscopic findings and symptomatic gastroesophageal reflux, and 0.95 in those with positive findings who were asymptomatic.

Data source: This was a prospective population-based study in which 209 individuals over age 50 with no history of endoscopy, two-thirds of whom had no gastroesophageal reflux symptoms, underwent screening endoscopy.

Disclosures: The presenter reported having no relevant financial conflicts.

To MU or not to MU, that is the question

Article Type
Changed
Display Headline
To MU or not to MU, that is the question

If you are still on the fence on meaningful use – our government’s motivational strategy for popularizing electronic health records – the point of no return is rapidly approaching: If you want to qualify for at least a portion of the incentive money, plus avoid a 1% penalty (eventually rising to 5%) on your Medicare Part B reimbursements, this year is your final opportunity to join the party. And, unfortunately, it is not simply a matter of adopting an electronic record system.

Each year, you must attest to demonstrating "meaningful use" (MU) of that system. To do that, you must continually monitor your progress toward meeting the necessary percentage benchmarks, making course corrections as you go. If the numbers are not there when your practice is ready to attest, it will have all been for naught, and a major waste of time and resources.

That being the case, private practitioners who have not yet taken the plunge – and those who have, but are undecided on progressing to stage 2 – must ask themselves whether the significant temporal and monetary investment is worth the trouble.

Many, apparently, have decided that it is not. While a substantial percentage of eligible practitioners signed up for stage 1, approximately 20% of them stopped participating in 2013. And according to the Centers for Medicare & Medicaid Services’ own data, only 4 hospitals and 50 individual practitioners in the entire country had attested to stage 2 through March of 2014.

The American Medical Association has little faith in the program, at least in its current form. In an open letter to the CMS in May 2014, the AMA predicted significantly higher dropout rates unless major modifications are made. Specifically, it singled out the all-or-nothing requirement that providers meet every measure at each stage, proposing instead a 75% achievement level to receive incentive payments and a 50% minimum to avoid financial penalties. The AMA also recommended eliminating all benchmarks beyond physicians' control, such as the stage 2 goal of 5% patient participation on the practice's electronic health record (EHR) portal.
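The contrast between the current all-or-nothing standard and the AMA's proposed thresholds can be made concrete. A minimal sketch (a hypothetical function, not any CMS formula; the 75% and 50% figures come from the AMA letter described above, and "neutral" here means no incentive but no penalty):

```python
def mu_outcome(measures_met, measures_total, ama_rules=False):
    """Return 'incentive', 'neutral', or 'penalty' for a provider who met
    `measures_met` of `measures_total` MU measures.  Current rules are
    all-or-nothing (falling short is treated here as incurring the
    penalty); the AMA proposal uses 75% for incentives and 50% to
    avoid penalties."""
    achieved = measures_met / measures_total
    if not ama_rules:
        return "incentive" if achieved == 1.0 else "penalty"
    if achieved >= 0.75:
        return "incentive"
    if achieved >= 0.50:
        return "neutral"
    return "penalty"

mu_outcome(16, 20)                  # 80% under current rules -> "penalty"
mu_outcome(16, 20, ama_rules=True)  # 80% under the proposal -> "incentive"
```

A provider meeting 16 of 20 measures earns nothing and is eventually penalized under the current rules, but would qualify for incentive payments under the AMA proposal.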

Another problem that falls outside the control of physicians is maintenance of EHR software. Nearly one EHR-equipped office in five, according to the CMS, is running software that does not meet stage 2 standards. The unfortunate owners of systems that cannot be upgraded before the stage 2 deadline will – through no fault of their own – be faced with a Morton’s fork of replacing their EHR on short notice or abandoning their quest for stage 2 attestation.

While the CMS has not yet indicated whether it has any inclination to address these issues or ease any of the requirements, one official did announce that the agency will be more flexible with its hardship exemptions on a case-by-case basis. Currently, such exemptions are available to new providers, those recovering from natural disasters, and others, such as pathologists, who do not interact face-to-face with patients.

So the question remains: Is the investment of time and resources needed to capture all of the data necessary for successful MU attestation worth making? Is it justified by the promise of MU incentive dollars and the benefits to your practice and your patients? And what exactly are those purported benefits, anyway?

Proponents maintain that integrated EHR will lead to improved documentation, which in turn should lead to improvements in patient care. Errors would be more easily identified because entries from generalists, specialists, labs, and others would be available to all at any time. All involved providers, theoretically, would be on the same page with every individual patient. The downside, of course, is that the real world seldom reflects the ideal situation envisioned by bureaucrats.

Ultimately, the choice is yours: Each private practitioner must decide whether starting (or continuing) meaningful use is worth the financial and time burden in his or her particular situation. If you are still undecided, time is almost up: You must begin your 90-day stage 1 reporting period in July 2014 in order to attest by the final deadline of October 1. The last calendar quarter to begin stage 2 reporting starts on October 1 as well. Detailed instructions for meeting stage 1 and stage 2 deadlines are available from many sources, including the American Academy of Dermatology website.

Dr. Eastern practices dermatology and dermatologic surgery in Belleville, N.J. He is the author of numerous articles and textbook chapters, and is a long-time monthly columnist for Skin & Allergy News.


Display Headline
To MU or not to MU, that is the question
Cream provides relief for leg ulcers in SCD

Article Type
Changed
Display Headline
Cream provides relief for leg ulcers in SCD

MILAN—Results of a phase 1 study indicate that topical sodium nitrate is safe and effective for treating leg ulcers in patients with sickle cell disease (SCD).

The cream significantly decreased the size of leg ulcers overall, healed ulcers in 6 of the 18 patients studied, and reduced pain levels, seemingly independent of wound healing.

A few patients did experience short-lived burning at the treatment site, and some experienced a temporary, asymptomatic drop in blood pressure that resolved without intervention.

But the treatment was generally well-tolerated, according to study investigator Caterina P. Minniti, MD, of the National Heart, Lung and Blood Institute in Bethesda, Maryland.

She presented these results at the 19th Congress of the European Hematology Association (EHA) as abstract S663.

“The morbidity from chronic and recurrent leg ulcers in sickle cell disease and other hematologic disorders . . . remains a clinical and economic burden,” Dr Minniti noted.

“[C]urrent therapies have limited efficacy and usually are borrowed from the treatment of venous ulcers and diabetic ulcers. There isn’t really a concerted effort to treat sickle cell leg ulcers.”

With that in mind, she and her colleagues initiated a phase 1 dose-escalation trial of topical sodium nitrate in SCD patients.

Patient characteristics and treatment

The researchers enrolled 18 patients with a median age of 39 ± 12 (range, 20-59). The median number of ulcers per patient was 1.5 (range, 1-10), and the median ulcer age was 10 months (range, 2-300).

Manual assessment suggested the median ulcer size was 7.50 ± 4.65 cm² (range, 2.09-16.50). Digital assessment suggested it was 5.97 ± 3.40 cm² (range, 2.51-14.66).

The mean number of prior ulcer therapies was 8. This included surgical/sharp debridement (n=18), hyperbaric chamber (n=7), skin graft (n=6), MIST therapy (n=4), and oral/parenteral antibiotics (n=11).

For this study, patients had sodium nitrate cream applied twice a week for 4 weeks on 1 leg ulcer per subject. There were 5 cohorts of escalating treatment concentrations: 0.5%, 1%, 1.5%, 1.8%, and 2%.

Adverse events

There were no serious adverse events, and none of the patients discontinued treatment. One adverse event that was likely related to treatment was short-lived burning after cream application in 4 patients. But this resolved without intervention.

Another event that may have been related to treatment was asymptomatic, short-lived, diastolic blood pressure less than 50 mmHg in 5 subjects who received treatment at the highest concentrations (2 in the 2% cohort and 3 in the 1.8% cohort). On the other hand, 3 of these 5 subjects had documented diastolic blood pressure less than 50 mmHg prior to starting the treatment.

For the most part, there were no changes in laboratory or clinical parameters before and after the trial. However, the researchers did observe a significant decrease in white blood cell counts.

Effects on ulcer size

Among all patients, there was a significant decrease in ulcer size from the first treatment application to the end of the study, both according to digital photography and assessment by wound-care nurses (P<0.001 and P<0.0001, respectively).

Although patients in all of the treatment groups experienced a decrease in wound size, there was a correlation between the decrease and the concentration of treatment.

One patient in the 1%-concentration cohort had an ulcer that progressed, but all other patients saw improvements. The 4 patients who received the 1.8% concentration had a 69.7% decrease in ulcer size at week 5, and 1 ulcer had healed by the end of treatment.

The 3 patients who received the 2% concentration had an 88.3% decrease in ulcer size at week 5, and 2 ulcers had healed by that time. An additional 3 ulcers healed within weeks or months of the study end.
Effects on pain and blood flow

One of the most interesting findings of this study, according to Dr Minniti, was the effect of the cream on patients’ pain.

There was a significant decrease in patient-reported pain for treated ulcers (P<0.006) but not for untreated ulcers (P=0.38). And there was a significant correlation between pain score and nitrate concentration (P=0.006).

Patients’ weekly total usage of opioids decreased from baseline to the end of the study, but this difference was not significant (P=0.26).

“There was a trend toward significance,” Dr Minniti noted. “It’s very hard, in 1 month, to get off your long-acting opioid.”

Finally, Dr Minniti and her colleagues found that blood flow to the wound area changed before and after treatment. According to laser speckle contrast imaging, there was a significant increase in blood flow after treatment (P<0.0002).

Based on these results, the researchers have initiated a phase 1/2, randomized trial comparing topical sodium nitrate to placebo in SCD patients.


US ranked last in healthcare report

Article Type
Changed
Display Headline
US ranked last in healthcare report

Image: Doctor and patient (Credit: NIH)

In a study comparing healthcare in 11 industrialized countries, the US ranked last on measures of health system quality, efficiency, access to care, equity, and healthy lives.

The other countries included in this study were Australia, Canada, France, Germany, the Netherlands, New Zealand, Norway, Sweden, Switzerland, and the UK.

While the results revealed room for improvement in every country, the US stood out for having the highest costs and lowest performance.

For instance, the US spent $8508 per person on healthcare in 2011, compared with $3406 in the UK, which ranked first overall.

Details on expenditures and rankings derived from this study are available in the Commonwealth Fund report, Mirror, Mirror on the Wall: How the Performance of the U.S. Health Care System Compares Internationally, 2014 Update.

“It is disappointing, but not surprising, that, despite our significant investment in healthcare, the US has continued to lag behind other countries,” said lead report author Karen Davis, of the Johns Hopkins Bloomberg School of Public Health in Baltimore, Maryland.

The report was also produced in 2004, 2006, 2007, and 2010, with the US ranking last in each of those years. Four countries were added to this year’s report: Switzerland and Sweden, which followed the UK at the top of the rankings, and Norway and France, which were in the middle of the pack.

Australia, Germany, the Netherlands, New Zealand, and Norway also placed in the middle, and Canada ranked just above the US.

In addition to ranking last overall, the US ranked last on infant mortality and on deaths that were potentially preventable with timely access to effective healthcare. The country ranked second-to-last on healthy life expectancy at age 60.

The US also ranked last on every measure of cost-related access. More than one-third (37%) of US adults reported forgoing a recommended test, treatment, or follow-up care because of cost.

With regard to healthcare quality, the US fell somewhere in the middle. On 2 of 4 measures of quality—effective care and patient-centered care—the US ranked near the top (3rd and 4th of 11 countries, respectively). But it did not perform as well with regard to providing safe or coordinated care.

The US ranked last in efficiency, due to low marks on the time and dollars spent dealing with insurance administration, lack of communication among healthcare providers, and duplicative medical testing.

Forty percent of US adults who had visited an emergency room reported they could have been treated by a regular doctor if one had been available. This is more than double the rate of patients in the UK (16%).

The US also ranked last in healthcare equity. About 4 of 10 (39%) adults with below-average incomes in the US reported a medical problem but did not visit a doctor in the past year because of costs, compared with less than 1 of 10 in the UK, Sweden, Canada, and Norway.

There were also large discrepancies in the length of time US adults waited for specialist, emergency, and after-hours care. And wait times were associated with patient income.

The data for this research were drawn from the Commonwealth Fund 2011 International Health Policy Survey of Sicker Adults, the Commonwealth Fund 2012 International Health Policy Survey of Primary Care Physicians, and the Commonwealth Fund 2013 International Health Policy Survey.

The 2011 survey targeted a representative sample of “sicker adults,” defined as those who rated their health status as fair or poor; received medical care for a serious chronic illness, serious injury, or disability in the past year; or were hospitalized or underwent surgery in the previous 2 years.

The 2012 survey looked at the experiences of primary care physicians. The 2013 survey focused on the experiences of nationally representative samples of adults ages 18 and older.

Additional data on health outcomes were drawn from the Organization for Economic Cooperation and Development and the World Health Organization.


Sequential decitabine, idarubicin combo synergistic in AML


Researchers in the lab

Credit: Rhoda Baer

Researchers tested 5 anti-leukemia agents in combination with the methylation inhibitor decitabine and found that the sequential combination of decitabine and idarubicin worked synergistically to produce anti-leukemia effects.

The combination induced cell death in U937, HEL, and SKM-1 human cell lines and acute myeloid leukemia (AML) cells isolated from patients.

The researchers attributed the effects to demethylation of Wnt/β-catenin pathway inhibitor genes and downregulation of the pathway’s nuclear targets.

The researchers noted that decitabine monotherapy has produced relatively low complete remission rates in AML and myelodysplastic syndromes (MDS), so they set out to investigate combination therapies that might improve efficacy.

Hongyan Tong, PhD, of Zhejiang University School of Medicine in Hangzhou, China, and colleagues reported their findings in the Journal of Translational Medicine.

The researchers chose 5 agents to combine, either simultaneously or sequentially, with decitabine—idarubicin, daunorubicin, aclarubicin, thalidomide, and homoharringtonine—and analyzed their effect on leukemia proliferation in the various AML cell lines mentioned above.

Using the U937 cell line first, the researchers found no synergistic effect when decitabine was combined, simultaneously or sequentially, with homoharringtonine, aclarubicin, thalidomide, or daunorubicin. The combination index (CI) values at the various doses were almost all above 0.8.

This was also true for the simultaneous combination of decitabine with idarubicin.

However, when they combined decitabine sequentially with idarubicin, the CI values at all 5 doses were below 0.8, indicating synergism.

In addition, when they administered decitabine twice in the sequence, the CI values were lower than with a single administration.
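The combination index used in synergy analyses of this kind is calculated from dose–effect data: the doses of each drug needed in combination to reach a given effect level are divided by the doses needed when each drug is used alone. A minimal sketch, using hypothetical doses rather than the study's data:

```python
# Combination index (CI) sketch -- the doses below are invented for
# illustration, not taken from the study. CI < 1 suggests synergism;
# the study used a stricter cutoff of CI < 0.8.

def combination_index(d1, d2, Dx1, Dx2):
    """CI = d1/Dx1 + d2/Dx2, where d1 and d2 are the doses of each drug
    used in combination to reach a given effect level, and Dx1 and Dx2
    are the doses of each drug alone that reach the same effect."""
    return d1 / Dx1 + d2 / Dx2

# Hypothetical example: the combination reaches a fixed effect level at
# 0.3 uM of drug 1 plus 0.02 uM of drug 2; alone, 1.0 uM and 0.08 uM
# are required, respectively.
ci = combination_index(0.3, 0.02, 1.0, 0.08)
print(round(ci, 2))  # 0.55 -> below 0.8, consistent with synergism
```

A CI near 1 would indicate an additive effect, and a CI above 1 antagonism, which is why the near-0.8 values seen with the other four agents were not read as synergy.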

They then confirmed the results in other AML cell lines (HEL and SKM-1) and in cells from AML patients.

Next, they confirmed the synergism of the sequential combination of decitabine and idarubicin in an AML mouse model and found that the combination inhibited tumor growth.

Tumor growth was significantly inhibited on day 4 (P<0.01) and days 6–16 (P<0.001) after treatment, and the effect began to wane by day 18 (P<0.05).

The investigators determined that apoptosis was responsible for the combination’s reduction in leukemic cell viability. Apoptosis rates with the combination therapy were significantly increased in the U937, HEL, and SKM-1 cell lines compared with controls (all P<0.001).

In addition, the researchers observed that tumor cells showed typical apoptotic features after treatment, such as the absence of microvilli on the cell membrane, nuclear and cell membrane blebbing, chromosome condensation, and the formation of apoptotic bodies.

The investigators used expression microarrays to ascertain the differential gene expression profile produced by decitabine and idarubicin and found that the Wnt pathway was one of the major pathways disturbed.

Sequential treatment demethylated the Wnt antagonist genes SFRP1, HDPR1, and DKK3, which in turn increased their expression at the mRNA and protein levels.

In addition, treatment with idarubicin after decitabine caused significant downregulation of c-Myc, β-catenin, and cyclin D1 expression compared with decitabine or idarubicin alone.

The investigators concluded that these findings suggest sequential administration of decitabine and idarubicin has clinical potential in AML and high-risk MDS.


Genetic ‘barcode’ could help track malaria


Malaria-transmitting mosquito

Credit: James Gathany

A genetic “barcode” for malaria parasites could be used to track and contain the spread of the disease, according to research published in Nature Communications.

Investigators analyzed the DNA of more than 700 Plasmodium falciparum parasites taken from patients in East and West Africa, South East Asia, Oceania, and South America.

This analysis revealed several short genetic sequences that were distinct in the DNA of parasites from certain geographic regions.

The team used this information to design a genetic barcode of 23 single-nucleotide polymorphisms that can be used to identify the source of new malaria infections.

“Being able to determine the geographic origin of malaria parasites has enormous potential in containing drug-resistance and eliminating malaria,” said study author Taane Clark, DPhil, of the London School of Hygiene & Tropical Medicine in the UK.

“Our work represents a breakthrough in the genetic barcoding of P falciparum, as it reveals very specific and accurate sequences for different geographic settings. We are currently extending the barcode to include other populations, such as India, Central America, southern Africa, and the Caribbean, and plan to include genetic markers for other types of malaria, such as P vivax.”

Previous candidates for malaria genetic barcodes have relied on identifying DNA markers found in the parasite’s cell nucleus, which shows too much genetic variation between individual parasites to be used accurately.

But Dr Clark and his colleagues studied the DNA found in 2 parts of the parasite’s cells outside of the nucleus—the mitochondria and the apicoplasts—which are inherited only through maternal lines, so their genes remain much more stable over generations.

By identifying short sequences in the DNA of the parasite’s mitochondria and apicoplasts that were specific for different geographic locations, the investigators were able to design a genetic barcode that is 92% predictive, stable, and geographically informative over time.
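Classifying a sample's origin from such a barcode amounts to comparing its alleles against region-specific reference barcodes and picking the closest match. A toy sketch of that idea — the reference sequences, region names, and 5-character barcodes below are invented for illustration and are not the published 23-SNP barcode:

```python
# Toy geographic classification from a SNP barcode. The alleles and
# regions here are hypothetical; the published barcode comprises 23
# mitochondrial and apicoplast SNPs, not these 5 invented positions.

REFERENCE = {
    "West Africa":     "AAGTC",
    "South East Asia": "ACGAA",
    "South America":   "TTGTC",
}

def hamming(a, b):
    """Number of positions at which two equal-length barcodes differ."""
    return sum(x != y for x, y in zip(a, b))

def classify(barcode):
    """Return the region whose reference barcode has the fewest
    mismatches to the sample barcode."""
    return min(REFERENCE, key=lambda region: hamming(barcode, REFERENCE[region]))

print(classify("AAGTA"))  # West Africa (1 mismatch vs. 2 and 3)
```

A production tool would also need to report ambiguous calls (ties or large minimum distances) rather than force every sample into a region, which is one reason the authors quantify the barcode's predictive accuracy rather than claim perfect assignment.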

“By taking finger-prick bloodspots from malaria patients and using rapid gene sequencing technologies on small amounts of parasite material, local agencies could use this new barcode to quickly and accurately identify where a form of the parasite may have come from and help in programs of malaria elimination and resistance containment,” said study author Cally Roper, PhD, also of the London School of Hygiene & Tropical Medicine.

The investigators noted, however, that this barcode is limited because their study lacks representation of the Indian sub-continent, Central America, southern Africa, and the Caribbean, owing to the scarcity of sequence data from these regions.

Additionally, there’s a need to study more samples from East Africa, a region of high genetic diversity, high migration, and poor predictive ability.
